From zkarl at live.com Tue Sep 2 06:10:37 2008 From: zkarl at live.com (Karl Zeilinger) Date: Tue, 2 Sep 2008 13:10:37 +0200 Subject: how to specify hosts to mpiexec? -machinefile doesn't work Message-ID: Hello, I have installed PETSc with the following commands from the installation documentation on the website, downloading a local mpich installation: >export PETSC_DIR=$PWD >./config/configure.py --with-cc=gcc --with-fc=g77 --download-f-blas-lapack=1 --download-mpich=1 >make all test A first test with a modified ex2.c from the tutorials ("mpiexec -np 4 ./ex2") was successful, but showed that all processes run on the local host, as no machine file is given. Trying to run: >mpiexec -np 4 -machinefile /opt/mpich/share/machines.LINUX ./ex2 produces this error: >invalid mpiexec argument -machinefile >Usage: mpiexec -usize -maxtime -exitinfo -l\ > -n -soft -host \ > ... How can I specify the hosts which the mpich (downloaded during the PETSc installation) is supposed to use? Thanks Karl _________________________________________________________________ Explore the seven wonders of the world http://search.msn.com/results.aspx?q=7+wonders+world&mkt=en-US&form=QBRE From mossaiby at yahoo.com Tue Sep 2 06:39:40 2008 From: mossaiby at yahoo.com (Farshid Mossaiby) Date: Tue, 2 Sep 2008 04:39:40 -0700 (PDT) Subject: how to specify hosts to mpiexec? -machinefile doesn't work In-Reply-To: Message-ID: <395581.4929.qm@web52210.mail.re2.yahoo.com> Hi, I think you should add --download-mpich-pm=mpd and maybe also --download-mpich-device=. Try running ./config/configure.py --help for more description. Hope this helps, Best regards, Farshid Mossaiby --- On Tue, 9/2/08, Karl Zeilinger wrote: > From: Karl Zeilinger > Subject: how to specify hosts to mpiexec? 
-machinefile doesn't work > To: petsc-users at mcs.anl.gov > Date: Tuesday, September 2, 2008, 3:40 PM > Hello, > I have installed PETSc with the following commands from the > installation documentation on the website, downloading a > local mpich installation: > >export PETSC_DIR=$PWD > >./config/configure.py --with-cc=gcc --with-fc=g77 > --download-f-blas-lapack=1 --download-mpich=1 > >make all test > > A first test with a modified ex2.c from the tutorials > ("mpiexec -np 4 ./ex2") was successful, but > showed that all processes run on the local host, as no > machine file is given. > > Trying to run: > >mpiexec -np 4 -machinefile > /opt/mpich/share/machines.LINUX ./ex2 > produces this error: > >invalid mpiexec argument -machinefile > >Usage: mpiexec -usize -maxtime -exitinfo -l\ > > -n -soft -host \ > > ... > > How can I specify the hosts which the mpich (downloaded > during the PETSc installation) is supposed to use? > > Thanks > Karl > > _________________________________________________________________ > Explore the seven wonders of the world > http://search.msn.com/results.aspx?q=7+wonders+world&mkt=en-US&form=QBRE From balay at mcs.anl.gov Tue Sep 2 10:01:56 2008 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 2 Sep 2008 10:01:56 -0500 (CDT) Subject: how to specify hosts to mpiexec? -machinefile doesn't work In-Reply-To: <395581.4929.qm@web52210.mail.re2.yahoo.com> References: <395581.4929.qm@web52210.mail.re2.yahoo.com> Message-ID: Yes - to be able to use multiple hosts - you should use --download-mpich-device=mpd option. The defualt --download-mpich-device=pm gets you started off with minimal MPICH config [but limits running all MPI jobs on the same machine. Wrt configuring MPD - usually you would do: cat > machinefile host1 host2 ctrl-D mpdboot -n 2 -f machinefile mpiexec -n 2 binary mpdallexit There might be other issues specific to your cluster. 
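[The MPD recipe above got flattened in the archive; expanded as a shell sketch. Hostnames `node01`/`node02` are placeholders, and the `mpd*` commands are shown as comments since they need a real MPICH2 cluster; only the machinefile step is runnable as-is.]

```shell
# One host per line, as MPD expects:
printf 'node01\nnode02\n' > machinefile
cat machinefile

# On the cluster itself you would then run (commented out here):
#   mpdboot -n 2 -f machinefile   # start an mpd daemon on each listed host
#   mpiexec -n 2 ./ex2            # ranks can now land on both hosts
#   mpdallexit                    # tear the mpd ring down afterwards
```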
For this, its best to use MPI already installed [with the correct compilers] and configured on this cluster, and use --with-mpi-dir option with petsc [instead of --download-mpich option]. The other alternative is to read up on MPICH2 installation instructions on how to configure MPICH2 and MPD on your specific hardware setup. Satish On Tue, 2 Sep 2008, Farshid Mossaiby wrote: > Hi, > > I think you should add --download-mpich-pm=mpd and maybe also --download-mpich-device=. Try running ./config/configure.py --help for more description. > > Hope this helps, > > Best regards, > Farshid Mossaiby > > > --- On Tue, 9/2/08, Karl Zeilinger wrote: > > > From: Karl Zeilinger > > Subject: how to specify hosts to mpiexec? -machinefile doesn't work > > To: petsc-users at mcs.anl.gov > > Date: Tuesday, September 2, 2008, 3:40 PM > > Hello, > > I have installed PETSc with the following commands from the > > installation documentation on the website, downloading a > > local mpich installation: > > >export PETSC_DIR=$PWD > > >./config/configure.py --with-cc=gcc --with-fc=g77 > > --download-f-blas-lapack=1 --download-mpich=1 > > >make all test > > > > A first test with a modified ex2.c from the tutorials > > ("mpiexec -np 4 ./ex2") was successful, but > > showed that all processes run on the local host, as no > > machine file is given. > > > > Trying to run: > > >mpiexec -np 4 -machinefile > > /opt/mpich/share/machines.LINUX ./ex2 > > produces this error: > > >invalid mpiexec argument -machinefile > > >Usage: mpiexec -usize -maxtime -exitinfo -l\ > > > -n -soft -host \ > > > ... > > > > How can I specify the hosts which the mpich (downloaded > > during the PETSc installation) is supposed to use? 
> > > > Thanks > > Karl > > > > _________________________________________________________________ > > Explore the seven wonders of the world > > http://search.msn.com/results.aspx?q=7+wonders+world&mkt=en-US&form=QBRE > > > > > From balay at mcs.anl.gov Wed Sep 3 08:13:01 2008 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 3 Sep 2008 08:13:01 -0500 (CDT) Subject: how to specify hosts to mpiexec? -machinefile doesn't work In-Reply-To: References: <395581.4929.qm@web52210.mail.re2.yahoo.com> Message-ID: On Wed, 3 Sep 2008, Karl Zeilinger wrote: > (BTW, is there any beginner tutorial which explains the different > lam/mpich/mpich-2 systems, and why I need to lamboot or mpdboot on > some of them, but not on others?) I'm Not sure if there is a document comparing different startup mechanisms of individual MPI impls. However each distro might have a doc explaing its supported startup mechanisms. Satish From a.peyser at umiami.edu Thu Sep 4 22:06:36 2008 From: a.peyser at umiami.edu (Alex Peyser) Date: Thu, 4 Sep 2008 23:06:36 -0400 Subject: MPIDense row distribution on transpose. Message-ID: <200809042306.41079.a.peyser@umiami.edu> All, In petsc-2.3.3_p11, when I do a transpose on an MPIAIJ, the MatTranspose conserves the row distribution pattern of the parent, so a 5x5 matrix transposed can still be multiplied by a 5-vector that the parent could be multiplied by (0-1,2-4 for everyone). However, with an MPIDENSE, I get a row rearrangement so that mpi process 1,2 now have (0-2, 3-4), which leads to nonconforming object size errors. Is this a known bug in MPIDense ? Is there a workaround - some way to force the matrix arrangement on tranpose for an MPIDense? I see that in MatTranspose_MPIAIJ, MatSetSizes is called with (A->cmap.n, A->rmap.n, N, M), which conserves my layout, while in MatTranspose_MPIDense, MatSetSizes is called with (PETSC_DECIDE, PETSC_DECIDE, N, M), which then ruins my layout. The former would appear to me the correct sequence. 
Regards, Alex Peyser -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: This is a digitally signed message part. URL: From fernandez858 at gmail.com Fri Sep 5 06:50:34 2008 From: fernandez858 at gmail.com (Michel Cancelliere) Date: Fri, 5 Sep 2008 13:50:34 +0200 Subject: Matrix blocks Message-ID: <7f18de3b0809050450oa85277ev9772d64b077cfe6e@mail.gmail.com> Hi All, I am building a shell preconditioner, for implement that I should divide a sparse matrix into four blocks of dimension N/2xN/2 (N size of the Matrix), I took a look to src/mat/examples/tutorials/ex2 but it is for dense matrix. There is a function for do that in Petsc? or another method for efficiently implementation? Thank you, Michel Cancelliere -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Sep 5 07:42:39 2008 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 5 Sep 2008 07:42:39 -0500 Subject: Matrix blocks In-Reply-To: <7f18de3b0809050450oa85277ev9772d64b077cfe6e@mail.gmail.com> References: <7f18de3b0809050450oa85277ev9772d64b077cfe6e@mail.gmail.com> Message-ID: You can extract matrix blocks using MatGetSubMatrix(). Matt On Fri, Sep 5, 2008 at 6:50 AM, Michel Cancelliere wrote: > Hi All, > > I am building a shell preconditioner, for implement that I should divide a > sparse matrix into four blocks of dimension N/2xN/2 (N size of the Matrix), > I took a look to src/mat/examples/tutorials/ex2 but it is for dense matrix. > There is a function for do that in Petsc? or another method for efficiently > implementation? > > Thank you, > > Michel Cancelliere > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener From z.sheng at ewi.tudelft.nl Mon Sep 8 09:44:31 2008 From: z.sheng at ewi.tudelft.nl (zhifeng sheng) Date: Mon, 08 Sep 2008 16:44:31 +0200 Subject: setting complex numbers In-Reply-To: References: <7f18de3b0809050450oa85277ev9772d64b077cfe6e@mail.gmail.com> Message-ID: <48C53A4F.9020702@ewi.tudelft.nl> Dear all I am now working with the complex version of Petsc but have some serious problems... It turned out that I could not set the imaginary part of a PetscScalar. Check the following code: int main(int argc, char *argv[]) { #if !defined(PETSC_USE_COMPLEX) SETERRQ(1,"This example requires complex numbers"); #endif PetscScalar ee = 3.0*PETSC_i; PetscScalar x = 4.0 + ee; std::cout< References: <7f18de3b0809050450oa85277ev9772d64b077cfe6e@mail.gmail.com> <48C53A4F.9020702@ewi.tudelft.nl> Message-ID: You have not called PetscInitialize() so PETSC_i has not been defined. Matt On Mon, Sep 8, 2008 at 9:44 AM, zhifeng sheng wrote: > Dear all > > I am now working with the complex version of Petsc but have some serious > problems... > > It turned out that I could not set the imaginary part of a PetscScalar. > Check the following code: > > int main(int argc, char *argv[]) > { > #if !defined(PETSC_USE_COMPLEX) > SETERRQ(1,"This example requires complex numbers"); > #endif > PetscScalar ee = 3.0*PETSC_i; > PetscScalar x = 4.0 + ee; > std::cout< } > > after compiling and run, it produces > > > 4 0 > > which means that the imaginary part is set to be zero.... > > does anyone have this problem before, or could you please tell me what I did > wrong in this piece of code. > > Thanks a lot > Best regards > Zhifeng Sheng > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener From z.sheng at ewi.tudelft.nl Mon Sep 8 10:15:07 2008 From: z.sheng at ewi.tudelft.nl (zhifeng sheng) Date: Mon, 08 Sep 2008 17:15:07 +0200 Subject: setting complex numbers In-Reply-To: References: <7f18de3b0809050450oa85277ev9772d64b077cfe6e@mail.gmail.com> <48C53A4F.9020702@ewi.tudelft.nl> Message-ID: <48C5417B.2030003@ewi.tudelft.nl> exactly :) thanks a lot. Matthew Knepley wrote: > You have not called PetscInitialize() so PETSC_i has not been defined. > > Matt > > On Mon, Sep 8, 2008 at 9:44 AM, zhifeng sheng wrote: > >> Dear all >> >> I am now working with the complex version of Petsc but have some serious >> problems... >> >> It turned out that I could not set the imaginary part of a PetscScalar. >> Check the following code: >> >> int main(int argc, char *argv[]) >> { >> #if !defined(PETSC_USE_COMPLEX) >> SETERRQ(1,"This example requires complex numbers"); >> #endif >> PetscScalar ee = 3.0*PETSC_i; >> PetscScalar x = 4.0 + ee; >> std::cout<> } >> >> after compiling and run, it produces >> >> >> 4 0 >> >> which means that the imaginary part is set to be zero.... >> >> does anyone have this problem before, or could you please tell me what I did >> wrong in this piece of code. >> >> Thanks a lot >> Best regards >> Zhifeng Sheng >> >> >> > > > > From m.schauer at tu-bs.de Mon Sep 8 10:19:48 2008 From: m.schauer at tu-bs.de (Marco Schauer) Date: Mon, 08 Sep 2008 17:19:48 +0200 Subject: setting complex numbers In-Reply-To: <48C53A4F.9020702@ewi.tudelft.nl> References: <7f18de3b0809050450oa85277ev9772d64b077cfe6e@mail.gmail.com> <48C53A4F.9020702@ewi.tudelft.nl> Message-ID: <48C54294.1030106@tu-bs.de> try this int main(int argc, char *argv[]) { #if !defined(PETSC_USE_COMPLEX) SETERRQ(1,"This example requires complex numbers"); #endif PetscInitialize(&argc,&argv,0,help); PetscScalar ee = 3.0*PETSC_i; PetscScalar x = 4.0 + ee; std::cout<<" "< Dear all > > I am now working with the complex version of Petsc but have some > serious problems... 
> > It turned out that I could not set the imaginary part of a > PetscScalar. Check the following code: > > int main(int argc, char *argv[]) > { > #if !defined(PETSC_USE_COMPLEX) > SETERRQ(1,"This example requires complex numbers"); > #endif > PetscScalar ee = 3.0*PETSC_i; > PetscScalar x = 4.0 + ee; > std::cout< } > > after compiling and run, it produces > > > 4 0 > > which means that the imaginary part is set to be zero.... > > does anyone have this problem before, or could you please tell me what > I did wrong in this piece of code. > > Thanks a lot > Best regards > Zhifeng Sheng > From bsmith at mcs.anl.gov Mon Sep 8 14:01:22 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 8 Sep 2008 14:01:22 -0500 Subject: MPIDense row distribution on transpose. In-Reply-To: <200809042306.41079.a.peyser@umiami.edu> References: <200809042306.41079.a.peyser@umiami.edu> Message-ID: <7A5C852F-422D-49FA-93C3-A6B85D17A14A@mcs.anl.gov> Alex, This is really a bug, we should consistently using use the transpose of the given values (I guess). I have pushed a fix for petsc-dev. You can fix your copy of petsc-2.3.3 by simply sticking in the code you indicated from MatTranspose_MPIAIJ() Barry On Sep 4, 2008, at 10:06 PM, Alex Peyser wrote: > All, > > In petsc-2.3.3_p11, when I do a transpose on an MPIAIJ, the > MatTranspose > conserves the row distribution pattern of the parent, so a 5x5 matrix > transposed can still be multiplied by a 5-vector that the parent > could be > multiplied by (0-1,2-4 for everyone). However, with an MPIDENSE, I > get a row > rearrangement so that mpi process 1,2 now have (0-2, 3-4), which > leads to > nonconforming object size errors. > > Is this a known bug in MPIDense ? Is there a workaround - some way > to force > the matrix arrangement on tranpose for an MPIDense? 
> > I see that in MatTranspose_MPIAIJ, MatSetSizes is called > with (A->cmap.n, A->rmap.n, N, M), which conserves my layout, > while in MatTranspose_MPIDense, MatSetSizes is called > with (PETSC_DECIDE, PETSC_DECIDE, N, M), which then ruins my layout. > The > former would appear to me the correct sequence. > > Regards, > Alex Peyser From tsjb00 at hotmail.com Tue Sep 9 00:05:44 2008 From: tsjb00 at hotmail.com (tsjb00) Date: Tue, 9 Sep 2008 05:05:44 +0000 Subject: questions about TS functions In-Reply-To: References: Message-ID: Hi! I have some questions about TS functions. 1. I tried to use TSDefaultComputeJacobian to provide the Jacobian matrix. It worked fine with TS_BEULER, but when I tried to use TS_CN, it didn't work. The error message indicates that it is not supported for Crank-Nicholson. Is it true or I might do something wrong? 2. Is there a specific PETSc function to evaluate Jacobian for TS_CN? Any example of using TS_CN that I can refer to? 3. I tried to solve a simple time evolving diffusion problem with TS, using TS_BEULER and non-linear solver. When I tried to use big time step, sever over-prediction is observed. I would appreciate any suggestion on the criteria of maximum time step allowed. If I really need to use big time steps, any tips to set up TS to improve the results? Many thanks in advance! BJ _________________________________________________________________ ?????????????????????????? http://im.live.cn/Share/18.htm -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Sep 9 06:48:36 2008 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 9 Sep 2008 06:48:36 -0500 Subject: questions about TS functions In-Reply-To: References: Message-ID: 2008/9/9 tsjb00 : > Hi! I have some questions about TS functions. > > 1. I tried to use TSDefaultComputeJacobian to provide the Jacobian matrix. > It worked fine with TS_BEULER, but when I tried to use TS_CN, it didn't > work. 
The error message indicates that it is not supported for > Crank-Nicholson. Is it true or I might do something wrong? Please always send the complete error message. That default just uses finite differences to compute the Jacobian. However, for nonlinear problems with changing Jacobians, we manipulate the matrix directly in CN. In order to use the FD code, we would need to recode this using a MatShell. > 2. Is there a specific PETSc function to evaluate Jacobian for TS_CN? Any > example of using TS_CN that I can refer to? No. You would need to form the Jacobian yourself. > 3. I tried to solve a simple time evolving diffusion problem with TS, using > TS_BEULER and non-linear solver. When I tried to use big time step, sever > over-prediction is observed. I would appreciate any suggestion on the > criteria of maximum time step allowed. If I really need to use big time > steps, any tips to set up TS to improve the results? Its not clear from your description whether this is a stability or accuracy problem. Matt > Many thanks in advance! > > BJ > > ________________________________ > ?????MSN?????"???" ????? -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener From fernandez858 at gmail.com Tue Sep 9 07:05:38 2008 From: fernandez858 at gmail.com (Michel Cancelliere) Date: Tue, 9 Sep 2008 14:05:38 +0200 Subject: Matrix blocks In-Reply-To: References: <7f18de3b0809050450oa85277ev9772d64b077cfe6e@mail.gmail.com> Message-ID: <7f18de3b0809090505q55389d61y567c415ade6c5e4a@mail.gmail.com> Ok, I'm using MatGetSubMatrix() with the following piece of code PetscErrorCode SampleShellPCSetUp(SampleShellPC *shell,Mat pmat,Vec x) { Mat *submat[4]; PetscErrorCode ierr; PetscInt N; IS set1[4],set2[4]; ierr = VecGetSize(x,&N);CHKERRQ(ierr); ierr = ISCreateStride(PETSC_COMM_SELF,N/2,0,1,&set1[0]);CHKERRQ(ierr); ierr = ISCreateStride(PETSC_COMM_SELF,N/2,0,1,&set2[0]);CHKERRQ(ierr); ierr = ISCreateStride(PETSC_COMM_SELF,N/2,0,1,&set1[1]);CHKERRQ(ierr); ierr = ISCreateStride(PETSC_COMM_SELF,N/2,N/2,1,&set2[1]);CHKERRQ(ierr); ierr = ISCreateStride(PETSC_COMM_SELF,N/2,N/2,1,&set1[2]);CHKERRQ(ierr); ierr = ISCreateStride(PETSC_COMM_SELF,N/2,0,1,&set2[2]);CHKERRQ(ierr); ierr = ISCreateStride(PETSC_COMM_SELF,N/2,N/2,1,&set1[3]);CHKERRQ(ierr); ierr = ISCreateStride(PETSC_COMM_SELF,N/2,N/2,1,&set2[3]);CHKERRQ(ierr); ierr = MatGetSubMatrices(pmat,4,set1,set2,MAT_INITIAL_MATRIX,submat);CHKERRQ(ierr); but it seems to get only the first of the subblocks, am I using the correct syntax for the MatGetSubMatrices function? Thank you On Fri, Sep 5, 2008 at 2:42 PM, Matthew Knepley wrote: > You can extract matrix blocks using MatGetSubMatrix(). > > Matt > > On Fri, Sep 5, 2008 at 6:50 AM, Michel Cancelliere > wrote: > > Hi All, > > > > I am building a shell preconditioner, for implement that I should divide > a > > sparse matrix into four blocks of dimension N/2xN/2 (N size of the > Matrix), > > I took a look to src/mat/examples/tutorials/ex2 but it is for dense > matrix. > > There is a function for do that in Petsc? or another method for > efficiently > > implementation? 
> > > > Thank you, > > > > Michel Cancelliere > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Sep 9 07:26:26 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 9 Sep 2008 07:26:26 -0500 Subject: Matrix blocks In-Reply-To: <7f18de3b0809090505q55389d61y567c415ade6c5e4a@mail.gmail.com> References: <7f18de3b0809050450oa85277ev9772d64b077cfe6e@mail.gmail.com> <7f18de3b0809090505q55389d61y567c415ade6c5e4a@mail.gmail.com> Message-ID: From the manual page: This routine creates the matrices in submat; you should NOT create them before calling it. It also allocates the array of matrix pointers submat. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ You should declare Mat *submat; // no 4 here then pass &submat into the function call Also note note from the manual page: MatGetSubMatrices() can extract ONLY sequential submatrices (from both sequential and parallel matrices). Use MatGetSubMatrix() to extract a parallel submatrix. Perhaps you really want to use MatGetSubMatrix()? 
Or perhaps you want to use the PCFIELDSPLIT preconditioner that is in petsc-dev http://www-unix.mcs.anl.gov/petsc/petsc-as/developers/index.html Barry On Sep 9, 2008, at 7:05 AM, Michel Cancelliere wrote: > Ok, I'm using MatGetSubMatrix() with the following piece of code > > PetscErrorCode SampleShellPCSetUp(SampleShellPC *shell,Mat pmat,Vec x) > { > Mat *submat[4]; > PetscErrorCode ierr; > PetscInt N; > IS set1[4],set2[4]; > > > > ierr = VecGetSize(x,&N);CHKERRQ(ierr); > ierr = ISCreateStride(PETSC_COMM_SELF,N/ > 2,0,1,&set1[0]);CHKERRQ(ierr); > ierr = ISCreateStride(PETSC_COMM_SELF,N/ > 2,0,1,&set2[0]);CHKERRQ(ierr); > ierr = ISCreateStride(PETSC_COMM_SELF,N/ > 2,0,1,&set1[1]);CHKERRQ(ierr); > ierr = ISCreateStride(PETSC_COMM_SELF,N/2,N/ > 2,1,&set2[1]);CHKERRQ(ierr); > ierr = ISCreateStride(PETSC_COMM_SELF,N/2,N/ > 2,1,&set1[2]);CHKERRQ(ierr); > ierr = ISCreateStride(PETSC_COMM_SELF,N/ > 2,0,1,&set2[2]);CHKERRQ(ierr); > ierr = ISCreateStride(PETSC_COMM_SELF,N/2,N/ > 2,1,&set1[3]);CHKERRQ(ierr); > ierr = ISCreateStride(PETSC_COMM_SELF,N/2,N/ > 2,1,&set2[3]);CHKERRQ(ierr); > ierr = MatGetSubMatrices(pmat, > 4,set1,set2,MAT_INITIAL_MATRIX,submat);CHKERRQ(ierr); > > but it seems to get only the first of the subblocks, > > am I using the correct syntax for the MatGetSubMatrices function? > > Thank you > > On Fri, Sep 5, 2008 at 2:42 PM, Matthew Knepley > wrote: > You can extract matrix blocks using MatGetSubMatrix(). > > Matt > > On Fri, Sep 5, 2008 at 6:50 AM, Michel Cancelliere > wrote: > > Hi All, > > > > I am building a shell preconditioner, for implement that I should > divide a > > sparse matrix into four blocks of dimension N/2xN/2 (N size of the > Matrix), > > I took a look to src/mat/examples/tutorials/ex2 but it is for > dense matrix. > > There is a function for do that in Petsc? or another method for > efficiently > > implementation? 
> > > > Thank you, > > > > Michel Cancelliere > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > From tsjb00 at hotmail.com Tue Sep 9 12:22:54 2008 From: tsjb00 at hotmail.com (tsjb00) Date: Tue, 9 Sep 2008 17:22:54 +0000 Subject: questions about TS functions Message-ID: Many thanks for your help! 1. The Error message I get is: [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: No support for this operation for this object type! [0]PETSC ERROR: The code for Crank-Nicholson is not complete emai petsc-maint at mcs.anl.gov for more info! [0]PETSC ERROR: ------------------------------------------------------------------------ ............................................................................................ [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: TSCreate_CN() line 383 in src/ts/impls/implicit/cn/cn.c [0]PETSC ERROR: TSSetType() line 74 in src/ts/interface/tsreg.c [0]PETSC ERROR: main() line 183 in diffVP.c [1]PETSC ERROR: --------------------------------------------------------------------- .............................................................................................. [1]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [1]PETSC ERROR: INSTEAD the line number of the start of the function [1]PETSC ERROR: is given. 
[1]PETSC ERROR: [1] PetscTraceBackErrorHandler line 120 src/sys/error/errtrace.c [1]PETSC ERROR: [1] PetscError line 454 src/sys/error/err.c [1]PETSC ERROR: [1] TSCreate_CN line 353 src/ts/impls/implicit/cn/cn.c [1]PETSC ERROR: [1] TSSetType line 52 src/ts/interface/tsreg.c [1]PETSC ERROR: --------------------- Error Message ------------------------------------ Does this mean that I could not use TSDefaultComputeJacobian if after that I define: TSSetType(ts,TS_CN) 3. I use TS_BEULER for time advancing, a second-order spacial discretization, with flux evaluated as: f(i+1/2)={ u(i+1)-u(i) }/dx, du/dt={ f(i+1/2)-f(i-1/2) } * DiffisionCoef. /dx u = u0 at x=0 as the b.c. When big timesteps are used, the diffusion happens very quickly. u is over-predicted over the domain. I am wondering if PETSC has options to check the convergence in time advancing? Any PETSc outputs to help me I decide if the overprediction is due to stabilization or precision issue? If it is a matter of precision, what higher-precision implicit methods are available in PETSc? Again, any criteria or equations to calculate the maximum timestep? Sorry for the lengthy quesions. PETSc is new to me and I am very clueless about the code. Many thanks! BJ 2008/9/9 tsjb00 : > Hi! I have some questions about TS functions. > > 1. I tried to use TSDefaultComputeJacobian to provide the Jacobian matrix. > It worked fine with TS_BEULER, but when I tried to use TS_CN, it didn't > work. The error message indicates that it is not supported for > Crank-Nicholson. Is it true or I might do something wrong? Please always send the complete error message. That default just uses finite differences to compute the Jacobian. However, for nonlinear problems with changing Jacobians, we manipulate the matrix directly in CN. In order to use the FD code, we would need to recode this using a MatShell. > 2. Is there a specific PETSc function to evaluate Jacobian for TS_CN? Any > example of using TS_CN that I can refer to? No. 
You would need to form the Jacobian yourself. > 3. I tried to solve a simple time evolving diffusion problem with TS, using > TS_BEULER and non-linear solver. When I tried to use big time step, sever > over-prediction is observed. I would appreciate any suggestion on the > criteria of maximum time step allowed. If I really need to use big time > steps, any tips to set up TS to improve the results? Its not clear from your description whether this is a stability or accuracy problem. Matt > Many thanks in advance! > > BJ _________________________________________________________________ MSN ???????????????????? http://cn.msn.com From knepley at gmail.com Tue Sep 9 12:26:40 2008 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 9 Sep 2008 12:26:40 -0500 Subject: questions about TS functions In-Reply-To: References: Message-ID: 2008/9/9 tsjb00 : > > Many thanks for your help! > > 1. The Error message I get is: > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: No support for this operation for this object type! > [0]PETSC ERROR: The code for Crank-Nicholson is not complete > emai petsc-maint at mcs.anl.gov for more info! > Does this mean that I could not use TSDefaultComputeJacobian if after that I define: > TSSetType(ts,TS_CN) Yes. > 3. I use TS_BEULER for time advancing, a second-order spacial discretization, with flux evaluated as: > f(i+1/2)={ u(i+1)-u(i) }/dx, > du/dt={ f(i+1/2)-f(i-1/2) } * DiffisionCoef. /dx > u = u0 at x=0 as the b.c. > When big timesteps are used, the diffusion happens very quickly. u is over-predicted over the domain. > > I am wondering if PETSC has options to check the convergence in time advancing? Any PETSc outputs to help me I decide if the overprediction is due to stabilization or precision issue? If it is a matter of precision, what higher-precision implicit methods are available in PETSc? Again, any criteria or equations to calculate the maximum timestep? 
> > Sorry for the lengthy quesions. PETSc is new to me and I am very clueless about the code. No, all the timestepping methods are very elementary. You can use the SUNDIALS package (--download-sundials) for more advanced methods. However, they might require a Jacobian. Matt > Many thanks! > > BJ > > > > 2008/9/9 tsjb00 : >> Hi! I have some questions about TS functions. >> >> 1. I tried to use TSDefaultComputeJacobian to provide the Jacobian matrix. >> It worked fine with TS_BEULER, but when I tried to use TS_CN, it didn't >> work. The error message indicates that it is not supported for >> Crank-Nicholson. Is it true or I might do something wrong? > > Please always send the complete error message. That default just uses finite > differences to compute the Jacobian. However, for nonlinear problems with > changing Jacobians, we manipulate the matrix directly in CN. In order to use > the FD code, we would need to recode this using a MatShell. > >> 2. Is there a specific PETSc function to evaluate Jacobian for TS_CN? Any >> example of using TS_CN that I can refer to? > > No. You would need to form the Jacobian yourself. > >> 3. I tried to solve a simple time evolving diffusion problem with TS, using >> TS_BEULER and non-linear solver. When I tried to use big time step, sever >> over-prediction is observed. I would appreciate any suggestion on the >> criteria of maximum time step allowed. If I really need to use big time >> steps, any tips to set up TS to improve the results? > > Its not clear from your description whether this is a stability or > accuracy problem. > > Matt > >> Many thanks in advance! >> >> BJ > _________________________________________________________________ > MSN ???????????????????? > http://cn.msn.com > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener From niko.karin at gmail.com Wed Sep 10 15:39:14 2008 From: niko.karin at gmail.com (Nicolas Tardieu) Date: Wed, 10 Sep 2008 16:39:14 -0400 Subject: The MatMatSolve mystery Message-ID: Dear PETSc users, The attached file is a naive test of MatMatSolve : I create a matrix,say A, I duplicate it, say BB, and I would like to compute A^{-1}*BB. Since A is the identity matrix, I would like to get the identity matrix. But I don't! Here is what I obtain by doing "./niko1f -n 3" : =========================================================================== A before MatMatSolve row 0: (0, 1) row 1: (1, 1) row 2: (2, 1) BB before MatMatSolve row 0: (0, 1) row 1: (1, 1) row 2: (2, 1) XX before MatMatSolve 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 XX after MatMatSolve 1.0000000000000000e+00 1.9966503766457982e-314 7.9050503334599447e-323 1.0000000000000000e+00 0.0000000000000000e+00 6.4996731077266291e-311 1.0000000000000000e+00 2.6136072665001942e-321 9.6616869034104132e-317 =========================================================================== Can you tell me what I am doing wrong? Thanks, Nicolas -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: niko1f.F Type: text/x-fortran Size: 3753 bytes Desc: not available URL: From bsmith at mcs.anl.gov Wed Sep 10 15:54:44 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 10 Sep 2008 15:54:44 -0500 Subject: The MatMatSolve mystery In-Reply-To: References: Message-ID: For the MatMatSolve_SeqAIJ() to work both B and X must be dense matrices, since you passed in a sparse SeqAIJ matrix (the default) for B you get garbage out. I will add an error check telling you the B matrix is not dense so no one will fall for this again. 
Barry On Sep 10, 2008, at 3:39 PM, Nicolas Tardieu wrote: > Dear PETSc users, > > The attached file is a naive test of MatMatSolve : I create a > matrix, say A, I duplicate it, say BB, and I would like to compute > A^{-1}*BB. > Since A is the identity matrix, I would like to get the identity > matrix. But I don't! > Here is what I obtain by doing "./niko1f -n 3" : > > =========================================================================== > A before MatMatSolve > row 0: (0, 1) > row 1: (1, 1) > row 2: (2, 1) > BB before MatMatSolve > row 0: (0, 1) > row 1: (1, 1) > row 2: (2, 1) > XX before MatMatSolve > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > XX after MatMatSolve > 1.0000000000000000e+00 1.9966503766457982e-314 7.9050503334599447e-323 > 1.0000000000000000e+00 0.0000000000000000e+00 6.4996731077266291e-311 > 1.0000000000000000e+00 2.6136072665001942e-321 9.6616869034104132e-317 > =========================================================================== > > > Can you tell me what I am doing wrong? > > Thanks, > > Nicolas > > From niko.karin at gmail.com Wed Sep 10 16:18:41 2008 From: niko.karin at gmail.com (Nicolas Tardieu) Date: Wed, 10 Sep 2008 17:18:41 -0400 Subject: The MatMatSolve mystery In-Reply-To: References: Message-ID: Thanks a lot! I have changed the code but I still have strange behaviour.
I scaled the identity matrix and enlarged the BB matrix : ================================================================================== A before MatMatSolve row 0: (0, 1e-06) row 1: (1, 1e-06) BB before MatMatSolve 0.0000000000000000e+00 0.0000000000000000e+00 1.0000000000000000e+03 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 1.0000000000000000e+03 XX before MatMatSolve 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 XX after MatMatSolve 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 ================================================================================== Nicolas 2008/9/10 Barry Smith > > For the MatMatSolve_SeqAIJ() to work both B and X must be dense matrices, > since you passed in a sparse SeqAIJ matrix (the default) for B you get > garbage out. > I will add an error check telling you the B matrix is not dense so no one > will fall for this > again. > > Barry > > > > On Sep 10, 2008, at 3:39 PM, Nicolas Tardieu wrote: > > Dear PETSc users, >> >> The attached file is a naive test of MatMatSolve : I create a matrix,say >> A, I duplicate it, say BB, and I would like to compute A^{-1}*BB. >> Since A is the identity matrix, I would like to get the identity matrix. >> But I don't! 
>> Here is what I obtain by doing "./niko1f -n 3" : >> >> >> =========================================================================== >> A before MatMatSolve >> row 0: (0, 1) >> row 1: (1, 1) >> row 2: (2, 1) >> BB before MatMatSolve >> row 0: (0, 1) >> row 1: (1, 1) >> row 2: (2, 1) >> XX before MatMatSolve >> 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 >> 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 >> 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 >> XX after MatMatSolve >> 1.0000000000000000e+00 1.9966503766457982e-314 7.9050503334599447e-323 >> 1.0000000000000000e+00 0.0000000000000000e+00 6.4996731077266291e-311 >> 1.0000000000000000e+00 2.6136072665001942e-321 9.6616869034104132e-317 >> >> =========================================================================== >> >> >> Can you tell me what I am doing wrong? >> >> Thanks, >> >> Nicolas >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: niko1f.F Type: text/x-fortran Size: 4090 bytes Desc: not available URL: From bsmith at mcs.anl.gov Wed Sep 10 16:31:32 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 10 Sep 2008 16:31:32 -0500 Subject: The MatMatSolve mystery In-Reply-To: References: Message-ID: Satish, Could you please make a patch to fix this. The loop for (neq=0; neq<A->cmap.n; neq++){ should be for (neq=0; neq<B->cmap.n; neq++){ Thanks Barry On Sep 10, 2008, at 4:18 PM, Nicolas Tardieu wrote: > Thanks a lot! > I have changed the code but I still have strange behaviour.
> I scaled the identity matrix and enlarged the BB matrix : > > ================================================================================== > A before MatMatSolve > row 0: (0, 1e-06) > row 1: (1, 1e-06) > BB before MatMatSolve > 0.0000000000000000e+00 0.0000000000000000e+00 1.0000000000000000e+03 > 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 1.0000000000000000e+03 > XX before MatMatSolve > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 > XX after MatMatSolve > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 > ================================================================================== > > Nicolas > > > 2008/9/10 Barry Smith > > For the MatMatSolve_SeqAIJ() to work both B and X must be dense > matrices, > since you passed in a sparse SeqAIJ matrix (the default) for B you > get garbage out. > I will add an error check telling you the B matrix is not dense so > no one will fall for this > again. > > Barry > > > > On Sep 10, 2008, at 3:39 PM, Nicolas Tardieu wrote: > > Dear PETSc users, > > The attached file is a naive test of MatMatSolve : I create a > matrix, say A, I duplicate it, say BB, and I would like to compute > A^{-1}*BB. > Since A is the identity matrix, I would like to get the identity > matrix. But I don't!
> Here is what I obtain by doing "./niko1f -n 3" : > > =========================================================================== > A before MatMatSolve > row 0: (0, 1) > row 1: (1, 1) > row 2: (2, 1) > BB before MatMatSolve > row 0: (0, 1) > row 1: (1, 1) > row 2: (2, 1) > XX before MatMatSolve > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > XX after MatMatSolve > 1.0000000000000000e+00 1.9966503766457982e-314 7.9050503334599447e-323 > 1.0000000000000000e+00 0.0000000000000000e+00 6.4996731077266291e-311 > 1.0000000000000000e+00 2.6136072665001942e-321 9.6616869034104132e-317 > =========================================================================== > > > Can you tell me what I am doing wrong? > > Thanks, > > Nicolas > > > > > From tsjb00 at hotmail.com Thu Sep 11 18:47:01 2008 From: tsjb00 at hotmail.com (tsjb00) Date: Thu, 11 Sep 2008 23:47:01 +0000 Subject: About SNES solver of TS objects Message-ID: Hi! I use a TS object to solve a non-linear problem du/dt=f(u,t). I would like to check the non-linear solver performance in TSStep. I tried to output the residuals by setting a monitor. So far my attempt didn't work. Please let me know if there are functions that calculate residuals of iterations in each TS time step. If not, how can I output the solver iterations/residuals correctly while using TS objects? Many thanks! P.S. The following is the code I tried: PetscViewerASCIIOpen(PETSC_COMM_WORLD,"snes.log",&rviewer); ierr = TSGetSNES(ts,&ts_snes); ierr = SNESMonitorSet(ts_snes,SNESMonitorDefault,&rviewer,PETSC_NULL); I did get residual output in snes.log, but the format was very weird. Each line started with countless blanks/spaces and at the very end was the SNES iteration info. As a result, the file took a lot of memory space even with one line of outputs.
I also tried: PetscViewerASCIIMonitorCreate(PETSC_COMM_WORLD,"snes.log",0,&rviewer); ierr = TSGetSNES(ts,&ts_snes); ierr = SNESMonitorSet(ts_snes,SNESMonitorDefault,&rviewer,PETSC_NULL); which didn't work at all. _________________________________________________________________ http://im.live.cn/click/ From hzhang at mcs.anl.gov Fri Sep 12 11:46:36 2008 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Fri, 12 Sep 2008 11:46:36 -0500 (CDT) Subject: About SNES solver of TS objects In-Reply-To: References: Message-ID: Can you send us a simplified code on what you've tried? What ts_type do you use? Hong On Thu, 11 Sep 2008, tsjb00 wrote: > > Hi! > > I use TS object to solve a non-linear problem du/dt=f(u,t). I would like to check the non-linear solver performance in TSStep. I tried to output the residuals by setting a monitor. So far my attempt didn't work. Please let me know if there are functions that calculate residuals of iterations in each TS time step. If not, how can I output the solver iteration/residuals correctly while using TS objects. > > Many thanks! > > P.S. The following are the code I tried: > > PetscViewerASCIIOpen(PETSC_COMM_WORLD,"snes.log",&rviewer); > ierr = TSGetSNES(ts,&ts_snes); > ierr = SNESMonitorSet(ts_snes,SNESMonitorDefault,&rviewer,PETSC_NULL); > > I did get residual output in snes.log, but the format was very weird. Each line started with countless blank/space and at the very end was the SNES iteration info. As a result, the file took a lot of memory space even with one line of outputs. > > I also tried: > PetscViewerASCIIMonitorCreate(PETSC_COMM_WORLD,"snes.log",0,&rviewer); > ierr = TSGetSNES(ts,&ts_snes); > ierr = SNESMonitorSet(ts_snes,SNESMonitorDefault,&rviewer,PETSC_NULL); > which didn't work at all. > _________________________________________________________________
> http://im.live.cn/click/ > > From Andrew.Barker at Colorado.EDU Fri Sep 12 15:09:04 2008 From: Andrew.Barker at Colorado.EDU (Andrew T Barker) Date: Fri, 12 Sep 2008 14:09:04 -0600 (MDT) Subject: changing array from ISGetIndices Message-ID: <20080912140904.AEH55363@batman.int.colorado.edu> The man page for ISGetIndices says "The user should NOT change the indices." But in this example http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/dm/ao/examples/tutorials/ex2.c.html around line 507 you do change indices you get from ISGetIndices. I'm trying to adapt this example to my own needs and am wondering if this may be causing me problems. In particular I go through this section of code twice (to partition two chunks of data) and the second time the ISCreateBlock makes a segfault. Thanks, Andrew --- Andrew T. Barker andrew.barker at colorado.edu Applied Math Department University of Colorado, Boulder 526 UCB, Boulder, CO 80309-0526 From balay at mcs.anl.gov Fri Sep 12 15:17:32 2008 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 12 Sep 2008 15:17:32 -0500 (CDT) Subject: The MatMatSolve mystery In-Reply-To: References: Message-ID: petsc-2.3.3-p14 tarball now includes this patch. Satish On Wed, 10 Sep 2008, Barry Smith wrote: > > Satish, > > Could you please make a patch to fix this. The loop > for (neq=0; neq<A->cmap.n; neq++){ > should be > for (neq=0; neq<B->cmap.n; neq++){ > > Thanks > > Barry From bsmith at mcs.anl.gov Fri Sep 12 16:17:13 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 12 Sep 2008 16:17:13 -0500 Subject: changing array from ISGetIndices In-Reply-To: <20080912140904.AEH55363@batman.int.colorado.edu> References: <20080912140904.AEH55363@batman.int.colorado.edu> Message-ID: On Sep 12, 2008, at 3:09 PM, Andrew T Barker wrote: > > The man page for ISGetIndices says "The user should NOT change the indices."
But in this example > > http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/dm/ao/examples/tutorials/ex2.c.html > > around line 507 you do change indices you get from ISGetIndices. This is technically a bug, but since the IS is destroyed immediately afterwards, this should not cause a crash. > I'm trying to adapt this example to my own needs and am wondering > if this may be causing me problems. In particular I go through this > section of code twice (to partition two chunks of data) and the > second time the ISCreateBlock makes a segfault. Whether this causes the problem or not depends on exactly what you are doing, e.g. if you use the same IS etc. Recommend simply running in the debugger to see why it is crashing. Run the program with -start_in_debugger Barry I will fix petsc-dev so that ISGetIndices() uses a const and so the code will automatically detect this illegal use. > > > Thanks, > > Andrew > > --- > Andrew T. Barker > andrew.barker at colorado.edu > Applied Math Department > University of Colorado, Boulder > 526 UCB, Boulder, CO 80309-0526 > > > From tsjb00 at hotmail.com Fri Sep 12 16:53:36 2008 From: tsjb00 at hotmail.com (tsjb00) Date: Fri, 12 Sep 2008 21:53:36 +0000 Subject: About SNES solver of TS objects Message-ID: Many thanks for your reply! The code I tried was very similar to the code in: ts/examples/tutorials/ex7.c except that I changed the r.h.s. function.
The code related to TS: ierr = TSCreate(PETSC_COMM_WORLD,&ts);CHKERRQ(ierr); ierr = TSSetProblemType(ts,TS_NONLINEAR);CHKERRQ(ierr); ierr = TSSetRHSFunction(ts,FormFunction,&appctx);CHKERRQ(ierr); ierr = DAGetColoring(appctx.da,IS_COLORING_GLOBAL,&iscoloring);CHKERRQ(ierr); ierr = MatFDColoringCreate(J,iscoloring,&matfdcoloring);CHKERRQ(ierr); ierr = ISColoringDestroy(iscoloring);CHKERRQ(ierr); ierr = MatFDColoringSetFunction(matfdcoloring,(PetscErrorCode (*)(void))FormFunction,&appctx);CHKERRQ(ierr); ierr = MatFDColoringSetFromOptions(matfdcoloring);CHKERRQ(ierr); ierr = TSSetRHSJacobian(ts,J,J,TSDefaultComputeJacobianColor,matfdcoloring);CHKERRQ(ierr); dt = appctx.dtmin; ierr = TSSetInitialTimeStep(ts,0.0,dt);CHKERRQ(ierr); ierr = TSSetType(ts,TS_BEULER);CHKERRQ(ierr); Then I tried to add the SNES monitor: PetscViewerASCIIOpen(PETSC_COMM_WORLD,"snes.log",&rviewer); ierr = TSGetSNES(ts,&ts_snes); ierr = SNESMonitorSet(ts_snes,SNESMonitorDefault,&rviewer,PETSC_NULL); I got the snes.log with long lines which started with countless blanks/spaces and ended at the very end with the SNES iteration info. As a result, the file took a lot of memory space even with one line of outputs. Please let me know if I did something wrong. Have a nice weekend! _________________________________________________________________ http://mobile.msn.com.cn/ From fernandez858 at gmail.com Mon Sep 15 05:10:20 2008 From: fernandez858 at gmail.com (Michel Cancelliere) Date: Mon, 15 Sep 2008 12:10:20 +0200 Subject: Nested Factorization Message-ID: <7f18de3b0809150310y4ea46f46j56cccf234c676ab8@mail.gmail.com> Hi, I would like to know if nested factorization (a preconditioning technique very popular in reservoir engineering) or something similar is implemented inside PETSc? Thank you Michel -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bsmith at mcs.anl.gov Mon Sep 15 10:54:50 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 15 Sep 2008 10:54:50 -0500 Subject: Nested Factorization In-Reply-To: <7f18de3b0809150310y4ea46f46j56cccf234c676ab8@mail.gmail.com> References: <7f18de3b0809150310y4ea46f46j56cccf234c676ab8@mail.gmail.com> Message-ID: <721E499A-28CF-44B1-B1E4-7003143491DF@mcs.anl.gov> Michel, Sorry we don't have anything like that. Barry On Sep 15, 2008, at 5:10 AM, Michel Cancelliere wrote: > Hi, > > I would like to know if nested factorization (a preconditioning > technique very popular in reservoir engineering) or something > similar is implemented inside PETSc? > > Thank you > > Michel From cjchen at math.msu.edu Mon Sep 15 13:39:58 2008 From: cjchen at math.msu.edu (Chen, Changjun) Date: Mon, 15 Sep 2008 14:39:58 -0400 Subject: Ask help for PETSc Message-ID: Hi Sir, My name is Changjun Chen. I am a postdoc in Michigan State University. Recently I use your perfect software PETSc. It is very good. But I still have one problem. I need your help. It is about the size of the final executable. When I compile it, it is over 60 MB. This is too large. Maybe it has all the linear equation solvers, but among them, I only need one method, for example the JACOBI method, I do not need others. So how could I compile the code so that it contains only this method? Thanks Sincerely, Changjun From balay at mcs.anl.gov Mon Sep 15 14:30:02 2008 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 15 Sep 2008 14:30:02 -0500 (CDT) Subject: Ask help for PETSc In-Reply-To: References: Message-ID: On Mon, 15 Sep 2008, Chen, Changjun wrote: > Hi Sir, > My name is Changjun Chen. I am a postdoc in Michigan State > University. Recently I use your perfect software PETSc. It is very > good. But I still have one problem. I need your help. > It is about the size of final executable. When I compile it, it is > over 60 MB. This is too large. Maybe it has all the linear equation > solvers,
but among them, I only need one method, for example JACOBI > method, I do not need others. So how could I compile the codes that > only contains this method? Thanks You might have to replace the code in PCRegisterAll() to include only 1 PC type to do this. [similar change to other register routines.] However the whole goal of PETSc is to provide easy experimentation of various solvers at runtime. The primary reason the executable is huge is probably due to debug symbols. To verify - you can run 'strip' on the executable. Another thing you can do is build with the option '--with-shared=1'. This would leave most of the PETSc compiled in shared libraries - thus keeping the application executable very small. Satish From knepley at gmail.com Mon Sep 15 14:31:49 2008 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 15 Sep 2008 14:31:49 -0500 Subject: Ask help for PETSc In-Reply-To: References: Message-ID: On Mon, Sep 15, 2008 at 1:39 PM, Chen, Changjun wrote: > Hi Sir, > My name is Changjun Chen. I am a postdoc in Michigan State University. Recently I use your perfect software PETSc. It is very good. But I still have one problem. I need your help. > It is about the size of final executable. When I compile it, it is over 60 MB. This is too large. Maybe it have all the linear equation solvers. but among them, I only need one method, for example JACOBI method, I do not need others. > So how could I compile the codes that only contains this method? It has all the debugging information. You can shrink it using the UNIX 'strip' utility or by reconfiguring with --with-debugging=0. Matt > Thanks > > Sincerely, > Changjun -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener From cjchen at math.msu.edu Mon Sep 15 16:20:55 2008 From: cjchen at math.msu.edu (Chen, Changjun) Date: Mon, 15 Sep 2008 17:20:55 -0400 Subject: Ask help for PETSc In-Reply-To: References: , Message-ID: Dear Satish: I have tried to rebuild the PETSc with option --with-shared=1, it works! The executable file reduces to less than 1 MB. Thank you very much! Sincerely, Changjun ________________________________________ From: owner-petsc-users at mcs.anl.gov [owner-petsc-users at mcs.anl.gov] On Behalf Of Satish Balay [balay at mcs.anl.gov] Sent: Monday, September 15, 2008 3:30 PM To: petsc-users at mcs.anl.gov Subject: Re: Ask help for PETSc On Mon, 15 Sep 2008, Chen, Changjun wrote: > Hi Sir, > My name is Changjun Chen. I am a postdoc in Michigan State > University. Recently I use your perfect software PETSc. It is very > good. But I still have one problem. I need your help. > It is about the size of final executable. When I compile it, it is > over 60 MB. This is too large. Maybe it have all the linear equation > solvers. but among them, I only need one method, for example JACOBI > method, I do not need others. So how could I compile the codes that > only contains this method? Thanks You might have to replace the code in PCRegisterAll() to include only 1 PC type to do this. [similar change to other register routines.] However the whole goal of PETSc is to provide easy experimentation of various solvers at runtime. The primary reason the executable is huge is probably due to debug symbols. To verify - you can run 'strip' on the executable. Another thing you can do is build with the option '--with-shared=1'. This would leave most of the PETSc compiled in shared libraries - thus keeping the application executable very small. 
Satish From fernandez858 at gmail.com Tue Sep 16 04:11:39 2008 From: fernandez858 at gmail.com (Michel Cancelliere) Date: Tue, 16 Sep 2008 11:11:39 +0200 Subject: VecLoad very time consuming Message-ID: <7f18de3b0809160211w2d07ac03ja1785f1817c4634b@mail.gmail.com> Hi, I have implemented code to solve with PETSc the linear system inside a Newton method written in Matlab. I am using the socket communication between Matlab and PETSc, but I find that about 95% of the time is spent in the VecLoad() function. Is this behaviour normal, given the overhead of the communication and of writing and reading to/from binaries? I am attaching the log_summary. Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: log_prova_prec_NR_64_NZ_1_DT_1.log Type: application/octet-stream Size: 8372 bytes Desc: not available URL: From bsmith at mcs.anl.gov Tue Sep 16 07:15:59 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 16 Sep 2008 07:15:59 -0500 Subject: VecLoad very time consuming In-Reply-To: <7f18de3b0809160211w2d07ac03ja1785f1817c4634b@mail.gmail.com> References: <7f18de3b0809160211w2d07ac03ja1785f1817c4634b@mail.gmail.com> Message-ID: <44A6ED7B-034B-47EC-8440-066E44DADF2A@mcs.anl.gov> VecLoad 9845 1.0 4.7432e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 98 0 0 0 0 98 0 0 0 0 0 MatLoad 4922 1.0 6.7874e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 It is spending a huge amount of time in the VecLoad() but very little in the MatLoad(), this indicates to me that the time in VecLoad() is not actually moving the vector, it is time spent waiting for the vector to be ready. I suggest putting timers in the Matlab code to see where the Matlab code is spending all its time, likely the Matlab code is really the one taking all the time.
Barry On Sep 16, 2008, at 4:11 AM, Michel Cancelliere wrote: > Hi, > I have implemented a code for resolve with PETSc the linear system > inside a Newton method writed in matlab, I am using the socket > communication between Matlab and Petsc, but i get that about the 95% > of the time is spent in the VecLoad() function. Is this behaviour > normal? for the overhead time in the communication and writing and > reading to/from binaries? > > I am attaching the log_summary > > Thank you > From xmailerpro at gmx.net Tue Sep 16 07:26:42 2008 From: xmailerpro at gmx.net (Jean-Marc Haudin) Date: Tue, 16 Sep 2008 13:26:42 +0100 Subject: OpenFVM Message-ID: <20080916122643.4A36E348009@mailgw.mcs.anl.gov> OpenFVM is a general CFD solver released under the GPL license. It was developed to simulate the flow in complex 3D geometries. Therefore, the mesh can be unstructured and contain control volumes with arbitrary shape. The code uses the finite volume method to evaluate the partial differential equations. As well as solving the velocity and pressure fields, the code is capable of solving non-isothermal multiphase flow. The code has two implementations: serial and parallel. The serial version uses LASPACK as the linear matrix solver and the parallel one uses the PETSc library. Both implementations use the open source tool Gmsh for pre- and post-processing. http://openfvm.sourceforge.net/ ------ Sent with X-Mailer .... http://sourceforge.net/projects/x-mailer/ From fernandez858 at gmail.com Tue Sep 16 08:15:28 2008 From: fernandez858 at gmail.com (Michel Cancelliere) Date: Tue, 16 Sep 2008 15:15:28 +0200 Subject: VecLoad very time consuming In-Reply-To: <44A6ED7B-034B-47EC-8440-066E44DADF2A@mcs.anl.gov> References: <7f18de3b0809160211w2d07ac03ja1785f1817c4634b@mail.gmail.com> <44A6ED7B-034B-47EC-8440-066E44DADF2A@mcs.anl.gov> Message-ID: <7f18de3b0809160615w5d08c91m11f07c7965cc8051@mail.gmail.com> Barry, On the Matlab side the code spent almost all the time waiting for the solution (sent by PETSc), because the function PetscOpenSocket.read consumes more than half of the whole simulation. Attached I'm sending you a profile of the Matlab code. Is there some special format in which I can send the vector from Matlab to PETSc for a faster communication? Michel On Tue, Sep 16, 2008 at 2:15 PM, Barry Smith wrote: > > VecLoad 9845 1.0 4.7432e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 98 0 0 0 0 98 0 0 0 0 0 > > MatLoad 4922 1.0 6.7874e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > It is spending a huge amount of time in the VecLoad() but very little in > the MatLoad(), this indicates to me that > the time in VecLoad() is not actually moving the vector, it is time spent > waiting for the vector to be ready. I suggest > putting timers in the Matlab code to see where the Matlab code is spending > all its time, likely the Matlab code > is really the one taking all the time.
> > Barry > > > > On Sep 16, 2008, at 4:11 AM, Michel Cancelliere wrote: > > Hi, >> I have implemented a code for resolve with PETSc the linear system inside >> a Newton method writed in matlab, I am using the socket communication >> between Matlab and Petsc, but i get that about the 95% of the time is spent >> in the VecLoad() function. Is this behaviour normal? for the overhead time >> in the communication and writing and reading to/from binaries? >> >> I am attaching the log_summary >> >> Thank you >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot_Profiler.jpg Type: image/jpeg Size: 60675 bytes Desc: not available URL: From bsmith at mcs.anl.gov Tue Sep 16 10:17:05 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 16 Sep 2008 10:17:05 -0500 Subject: VecLoad very time consuming In-Reply-To: <7f18de3b0809160615w5d08c91m11f07c7965cc8051@mail.gmail.com> References: <7f18de3b0809160211w2d07ac03ja1785f1817c4634b@mail.gmail.com> <44A6ED7B-034B-47EC-8440-066E44DADF2A@mcs.anl.gov> <7f18de3b0809160615w5d08c91m11f07c7965cc8051@mail.gmail.com> Message-ID: <49925CD2-DF9C-48CE-8C67-F3B4C3FAD7EC@mcs.anl.gov> Michel, Please send all your program, Matlab and C/Fortran with instructions on how to run it to petsc-maint at mcs.anl.gov I would like to run it, reproduce the problem and see if there is some solution. Something is VERY wrong. Barry On Sep 16, 2008, at 8:15 AM, Michel Cancelliere wrote: > Barry, > On the matlab side the code spent almost all the time waiting for > the solution (sent by Petsc), because the function > PetscOpenSocket.read consumes more than the half of the all > simulation. in attach I'm sending you a profile of the matlab code. > is there some special format in which i can send to vector from > matlab to petsc for a faster communication? 
> > Michel > On Tue, Sep 16, 2008 at 2:15 PM, Barry Smith > wrote: > > VecLoad 9845 1.0 4.7432e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e > +00 0.0e+00 98 0 0 0 0 98 0 0 0 0 0 > > MatLoad 4922 1.0 6.7874e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e > +00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > It is spending a huge amount of time in the VecLoad() but very > little in the MatLoad(), this indicates to me that > the time in VecLoad() is not actually moving the vector, it is time > spent waiting for the vector to be ready. I suggest > putting timers in the Matlab code to see where the Matlab codeis > spending all its time, likely the Matlab code > is really the one taking all the time. > > Barry > > > > On Sep 16, 2008, at 4:11 AM, Michel Cancelliere wrote: > > Hi, > I have implemented a code for resolve with PETSc the linear system > inside a Newton method writed in matlab, I am using the socket > communication between Matlab and Petsc, but i get that about the 95% > of the time is spent in the VecLoad() function. Is this behaviour > normal? for the overhead time in the communication and writing and > reading to/from binaries? > > I am attaching the log_summary > > Thank you > > > > From z.sheng at ewi.tudelft.nl Tue Sep 16 10:23:47 2008 From: z.sheng at ewi.tudelft.nl (zhifeng sheng) Date: Tue, 16 Sep 2008 17:23:47 +0200 Subject: symmetric reordering and incomplete factorization with tolerance? In-Reply-To: <20080916122643.4A36E348009@mailgw.mcs.anl.gov> References: <20080916122643.4A36E348009@mailgw.mcs.anl.gov> Message-ID: <48CFCF83.8060405@ewi.tudelft.nl> Dear all I used the reordering scheme in petsc, and I would like to know whether they are symmetric or not. I have a symmetric matrix (for 3D system) and I tried to solve it with petsc and with matlab. In petsc I used rcm + ILU + bicgstab. while in matlab I used symrcm+ICC+bicgstab. it seems that with symmetric reordering, the factorization is somewhat faster... So I am wondering if it is possible to apply a symmetric reordering? 
PS: in Matlab, an incomplete factorization can be done with a tolerance (e.g. ICC(1e-4) ), can I do something like that with Petsc? Thanks a lot Best regards Zhifeng Sheng From bsmith at mcs.anl.gov Tue Sep 16 10:37:00 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 16 Sep 2008 10:37:00 -0500 Subject: symmetric reordering and incomplete factorization with tolerance? In-Reply-To: <48CFCF83.8060405@ewi.tudelft.nl> References: <20080916122643.4A36E348009@mailgw.mcs.anl.gov> <48CFCF83.8060405@ewi.tudelft.nl> Message-ID: <70F74A8B-CD3F-4C3C-AFC5-58F10D933F98@mcs.anl.gov> You should use rcm+icc if you want to keep a symmetric preconditioner. Depending on your matrix you might want to use KSPCR or KSPMINRES or even KSPSYMMLQ instead of bicgstab? We don't have a drop tolerance ICC and I do not recommend our drop tolerance ILU. Barry On Sep 16, 2008, at 10:23 AM, zhifeng sheng wrote: > Dear all > > I used the reordering scheme in petsc, and I would like to know > whether they are symmetric or not. > > I have a symmetric matrix (for 3D system) and I tried to solve it > with petsc and with matlab. > > In petsc I used rcm + ILU + bicgstab. while in matlab I used symrcm > +ICC+bicgstab. > > it seems that with symmetric reordering, the factorization is > somewhat faster... So I am wondering if it is possible to apply a > symmetric reordering? > > PS: in Matlab, an incomplete factorization can be done with a > tolerance (e.g. ICC(1e-4) ), can I do something like that with Petsc? > > Thanks a lot > Best regards > Zhifeng Sheng > > From z.sheng at ewi.tudelft.nl Tue Sep 16 10:57:08 2008 From: z.sheng at ewi.tudelft.nl (zhifeng sheng) Date: Tue, 16 Sep 2008 17:57:08 +0200 Subject: symmetric reordering and incomplete factorization with tolerance? 
In-Reply-To: <70F74A8B-CD3F-4C3C-AFC5-58F10D933F98@mcs.anl.gov> References: <20080916122643.4A36E348009@mailgw.mcs.anl.gov> <48CFCF83.8060405@ewi.tudelft.nl> <70F74A8B-CD3F-4C3C-AFC5-58F10D933F98@mcs.anl.gov> Message-ID: <48CFD754.4030606@ewi.tudelft.nl> Dear Barry I tried the RCM+ICC(2) in Petsc.... it seems that the reordering does not work well... as I view the preconditioner, it does not say in which way the matrix is reordered. And when I solve it with KSP, the solver always complains about Detected zero pivot in Cholesky factorization How could it be? Thanks Best regards Zhifeng Barry Smith wrote: > > You should use rcm+icc if you want to keep a symmetric preconditioner. > > Depending on your matrix you might want to use KSPCR or KSPMINRES > or even KSPSYMMLQ > instead of bicgstab? > > We don't have a drop tolerance ICC and I do not recommend our drop > tolerance ILU. > > Barry > > > On Sep 16, 2008, at 10:23 AM, zhifeng sheng wrote: > >> Dear all >> >> I used the reordering scheme in petsc, and I would like to know >> whether they are symmetric or not. >> >> I have a symmetric matrix (for 3D system) and I tried to solve it >> with petsc and with matlab. >> >> In petsc I used rcm + ILU + bicgstab. while in matlab I used >> symrcm+ICC+bicgstab. >> >> it seems that with symmetric reordering, the factorization is >> somewhat faster... So I am wondering if it is possible to apply a >> symmetric reordering? >> >> PS: in Matlab, an incomplete factorization can be done with a >> tolerance (e.g. ICC(1e-4) ), can I do something like that with Petsc? >> >> Thanks a lot >> Best regards >> Zhifeng Sheng >> >> > From bsmith at mcs.anl.gov Tue Sep 16 11:06:07 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 16 Sep 2008 11:06:07 -0500 Subject: symmetric reordering and incomplete factorization with tolerance? 
In-Reply-To: <48CFD754.4030606@ewi.tudelft.nl> References: <20080916122643.4A36E348009@mailgw.mcs.anl.gov> <48CFCF83.8060405@ewi.tudelft.nl> <70F74A8B-CD3F-4C3C-AFC5-58F10D933F98@mcs.anl.gov> <48CFD754.4030606@ewi.tudelft.nl> Message-ID: <57046B5B-47CB-4A7B-B226-D495692D1262@mcs.anl.gov> On Sep 16, 2008, at 10:57 AM, zhifeng sheng wrote: > Dear Barry > > I tried the RCM+ICC(2) in Petsc.... it seems that the reordering > does not work well... as I view the preconditioner, it does not say > in which way the matrix is reordered. I've added this information to PCView_ICC. > > > And when I solve it with KSP, the solver always complains about > > Detected zero pivot in Cholesky factorization > > How could it be? This is very possible, there is nothing that says you take some symmetric matrix and do an ICC on it that you will get a positive definite matrix or no zero pivots. You can use -pc_factor_shift_positive_definite to force the ICC to generate a positive definite matrix. Barry > > > Thanks > Best regards > Zhifeng > > Barry Smith wrote: >> >> You should use rcm+icc if you want to keep a symmetric >> preconditioner. >> >> Depending on your matrix you might want to use KSPCR or KSPMINRES >> or even KSPSYMMLQ >> instead of bicgstab? >> >> We don't have a drop tolerance ICC and I do not recommend our >> drop tolerance ILU. >> >> Barry >> >> >> On Sep 16, 2008, at 10:23 AM, zhifeng sheng wrote: >> >>> Dear all >>> >>> I used the reordering scheme in petsc, and I would like to know >>> whether they are symmetric or not. >>> >>> I have a symmetric matrix (for 3D system) and I tried to solve it >>> with petsc and with matlab. >>> >>> In petsc I used rcm + ILU + bicgstab. while in matlab I used symrcm >>> +ICC+bicgstab. >>> >>> it seems that with symmetric reordering, the factorization is >>> somewhat faster... So I am wondering if it is possible to apply a >>> symmetric reordering? >>> >>> PS: in Matlab, an incomplete factorization can be done with a >>> tolerance (e.g. 
ICC(1e-4) ), can I do something like that with >>> Petsc? >>> >>> Thanks a lot >>> Best regards >>> Zhifeng Sheng >>> >>> >> > From hakan.jakobsson at math.umu.se Tue Sep 16 10:10:50 2008 From: hakan.jakobsson at math.umu.se (=?ISO-8859-1?Q?H=E5kan_Jakobsson?=) Date: Tue, 16 Sep 2008 17:10:50 +0200 Subject: inconsistent singular systems Message-ID: <087A8CBA-2F97-4880-8F30-32D66AF125AE@math.umu.se> Hi, How does PETSc handle the solution of a singular system Ax=b which also happens to be inconsistent? More specifically: is b orthogonalized against Null(A) internally by PETSc? Thanks H?kan Jakobsson From z.sheng at ewi.tudelft.nl Tue Sep 16 11:32:33 2008 From: z.sheng at ewi.tudelft.nl (zhifeng sheng) Date: Tue, 16 Sep 2008 18:32:33 +0200 Subject: symmetric reordering and incomplete factorization with tolerance? In-Reply-To: <57046B5B-47CB-4A7B-B226-D495692D1262@mcs.anl.gov> References: <20080916122643.4A36E348009@mailgw.mcs.anl.gov> <48CFCF83.8060405@ewi.tudelft.nl> <70F74A8B-CD3F-4C3C-AFC5-58F10D933F98@mcs.anl.gov> <48CFD754.4030606@ewi.tudelft.nl> <57046B5B-47CB-4A7B-B226-D495692D1262@mcs.anl.gov> Message-ID: <48CFDFA1.6090802@ewi.tudelft.nl> Hi Barry Where can I find and use PCView_ICC ? thanks Zhifeng Barry Smith wrote: > > On Sep 16, 2008, at 10:57 AM, zhifeng sheng wrote: > >> Dear Barry >> >> I tried the RCM+ICC(2) in Petsc.... it seems that the reordering does >> not work well... as I view the preconditioner, it does not say in >> which way the matrix is reordered. > > I've added this information to PCView_ICC. > >> >> >> And when I solve it with KSP, the solver always complains about >> >> Detected zero pivot in Cholesky factorization >> >> How could it be? > > This is very possible, there is nothing that says you take some > symmetric matrix and do an ICC on it that you will get a positive > definite matrix or > no zero pivots. > > You can use -pc_factor_shift_positive_definite to force the ICC to > generate a positive definite matrix. 
> > Barry > >> >> >> Thanks >> Best regards >> Zhifeng >> >> Barry Smith wrote: >>> >>> You should use rcm+icc if you want to keep a symmetric >>> preconditioner. >>> >>> Depending on your matrix you might want to use KSPCR or KSPMINRES >>> or even KSPSYMMLQ >>> instead of bicgstab? >>> >>> We don't have a drop tolerance ICC and I do not recommend our drop >>> tolerance ILU. >>> >>> Barry >>> >>> >>> On Sep 16, 2008, at 10:23 AM, zhifeng sheng wrote: >>> >>>> Dear all >>>> >>>> I used the reordering scheme in petsc, and I would like to know >>>> whether they are symmetric or not. >>>> >>>> I have a symmetric matrix (for 3D system) and I tried to solve it >>>> with petsc and with matlab. >>>> >>>> In petsc I used rcm + ILU + bicgstab. while in matlab I used >>>> symrcm+ICC+bicgstab. >>>> >>>> it seems that with symmetric reordering, the factorization is >>>> somewhat faster... So I am wondering if it is possible to apply a >>>> symmetric reordering? >>>> >>>> PS: in Matlab, an incomplete factorization can be done with a >>>> tolerance (e.g. ICC(1e-4) ), can I do something like that with Petsc? >>>> >>>> Thanks a lot >>>> Best regards >>>> Zhifeng Sheng >>>> >>>> >>> >> > From bsmith at mcs.anl.gov Tue Sep 16 11:35:39 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 16 Sep 2008 11:35:39 -0500 Subject: inconsistent singular systems In-Reply-To: <087A8CBA-2F97-4880-8F30-32D66AF125AE@math.umu.se> References: <087A8CBA-2F97-4880-8F30-32D66AF125AE@math.umu.se> Message-ID: On Sep 16, 2008, at 10:10 AM, H?kan Jakobsson wrote: > Hi, > > How does PETSc handle the solution of a singular system Ax=b which > also happens to be inconsistent? More specifically: is b > orthogonalized against Null(A) internally by PETSc? No, you are expected to do this before calling the solver on the right hand side. 
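When the null space is spanned by a single known vector z (normalized), the orthogonalization is just b <- b - (z'b) z. A minimal self-contained illustration of the idea — plain NumPy rather than PETSc, with a made-up 3x3 singular system:

```python
import numpy as np

# Toy singular system: every row of A sums to zero, so for this symmetric A
# the null space Null(A) = Null(A') is spanned by the constant vector.
A = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
z = np.ones(3) / np.sqrt(3.0)        # orthonormal basis of the null space

b = np.array([1.0, 2.0, -2.0])       # inconsistent right-hand side: z'b != 0

# Orthogonalize b against the null space before handing it to the solver:
b_consistent = b - np.dot(z, b) * z

# The projected right-hand side now lies in Range(A), so the singular
# system is consistent and a least-squares solve satisfies it exactly.
x = np.linalg.lstsq(A, b_consistent, rcond=None)[0]
print(np.allclose(A @ x, b_consistent))   # prints True
```

(When a null space object has been created with MatNullSpaceCreate(), MatNullSpaceRemove() can apply the same projection to a vector for you.)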
Note: if A is not symmetric then I think you want to apply the Null(A') MatNullSpaceCreate() and KSPSetNullSpace() are a way you can tell the iterative solver the null space, otherwise most iterative solvers will create a giant solution in the null space direction and not converge. Barry > > > Thanks > > H?kan Jakobsson > From bsmith at mcs.anl.gov Tue Sep 16 11:43:12 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 16 Sep 2008 11:43:12 -0500 Subject: symmetric reordering and incomplete factorization with tolerance? In-Reply-To: <48CFDFA1.6090802@ewi.tudelft.nl> References: <20080916122643.4A36E348009@mailgw.mcs.anl.gov> <48CFCF83.8060405@ewi.tudelft.nl> <70F74A8B-CD3F-4C3C-AFC5-58F10D933F98@mcs.anl.gov> <48CFD754.4030606@ewi.tudelft.nl> <57046B5B-47CB-4A7B-B226-D495692D1262@mcs.anl.gov> <48CFDFA1.6090802@ewi.tudelft.nl> Message-ID: If you run with -ksp_view it calls PCView_ICC internally, or you can call PCView(). You never call PCView_ICC() directly. Barry On Sep 16, 2008, at 11:32 AM, zhifeng sheng wrote: > Hi Barry > > Where can I find and use PCView_ICC ? > > thanks > Zhifeng > > Barry Smith wrote: >> >> On Sep 16, 2008, at 10:57 AM, zhifeng sheng wrote: >> >>> Dear Barry >>> >>> I tried the RCM+ICC(2) in Petsc.... it seems that the reordering >>> does not work well... as I view the preconditioner, it does not >>> say in which way the matrix is reordered. >> >> I've added this information to PCView_ICC. >> >>> >>> >>> And when I solve it with KSP, the solver always complains about >>> >>> Detected zero pivot in Cholesky factorization >>> >>> How could it be? >> >> This is very possible, there is nothing that says you take some >> symmetric matrix and do an ICC on it that you will get a positive >> definite matrix or >> no zero pivots. >> >> You can use -pc_factor_shift_positive_definite to force the ICC >> to generate a positive definite matrix. 
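For example, run-time options along these lines combine the pieces discussed here — ICC with levels, the RCM ordering, and the positive-definite shift; the exact option names can vary between PETSc versions, so check them against `-help` for your build (`./your_app` stands in for your own executable):

```shell
./your_app -ksp_type cg -pc_type icc -pc_factor_levels 2 \
           -pc_factor_mat_ordering_type rcm \
           -pc_factor_shift_positive_definite -ksp_view
```

-ksp_view will then also report how the factored matrix was ordered.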
>> >> Barry >> >>> >>> >>> Thanks >>> Best regards >>> Zhifeng >>> >>> Barry Smith wrote: >>>> >>>> You should use rcm+icc if you want to keep a symmetric >>>> preconditioner. >>>> >>>> Depending on your matrix you might want to use KSPCR or >>>> KSPMINRES or even KSPSYMMLQ >>>> instead of bicgstab? >>>> >>>> We don't have a drop tolerance ICC and I do not recommend our >>>> drop tolerance ILU. >>>> >>>> Barry >>>> >>>> >>>> On Sep 16, 2008, at 10:23 AM, zhifeng sheng wrote: >>>> >>>>> Dear all >>>>> >>>>> I used the reordering scheme in petsc, and I would like to know >>>>> whether they are symmetric or not. >>>>> >>>>> I have a symmetric matrix (for 3D system) and I tried to solve >>>>> it with petsc and with matlab. >>>>> >>>>> In petsc I used rcm + ILU + bicgstab. while in matlab I used >>>>> symrcm+ICC+bicgstab. >>>>> >>>>> it seems that with symmetric reordering, the factorization is >>>>> somewhat faster... So I am wondering if it is possible to apply >>>>> a symmetric reordering? >>>>> >>>>> PS: in Matlab, an incomplete factorization can be done with a >>>>> tolerance (e.g. ICC(1e-4) ), can I do something like that with >>>>> Petsc? >>>>> >>>>> Thanks a lot >>>>> Best regards >>>>> Zhifeng Sheng >>>>> >>>>> >>>> >>> >> > From z.sheng at ewi.tudelft.nl Tue Sep 16 11:56:17 2008 From: z.sheng at ewi.tudelft.nl (zhifeng sheng) Date: Tue, 16 Sep 2008 18:56:17 +0200 Subject: inconsistent singular systems In-Reply-To: References: <087A8CBA-2F97-4880-8F30-32D66AF125AE@math.umu.se> Message-ID: <48CFE531.9080405@ewi.tudelft.nl> Hi, how is the nullspace be implemented in the solver? by using Lagrange multiplier? best regards Zhifeng Barry Smith wrote: > > On Sep 16, 2008, at 10:10 AM, H?kan Jakobsson wrote: > >> Hi, >> >> How does PETSc handle the solution of a singular system Ax=b which >> also happens to be inconsistent? More specifically: is b >> orthogonalized against Null(A) internally by PETSc? 
> > No, you are expected to do this before calling the solver on the > right hand side. Note: if A is not symmetric then I think you want to > apply the Null(A') > > MatNullSpaceCreate() and KSPSetNullSpace() are a way you can tell > the iterative solver the null space, otherwise most iterative solvers > will create a giant > solution in the null space direction and not converge. > > Barry > >> >> >> Thanks >> >> H?kan Jakobsson >> > From knepley at gmail.com Tue Sep 16 12:38:51 2008 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 16 Sep 2008 12:38:51 -0500 Subject: inconsistent singular systems In-Reply-To: <48CFE531.9080405@ewi.tudelft.nl> References: <087A8CBA-2F97-4880-8F30-32D66AF125AE@math.umu.se> <48CFE531.9080405@ewi.tudelft.nl> Message-ID: On Tue, Sep 16, 2008 at 11:56 AM, zhifeng sheng wrote: > Hi, > > how is the nullspace be implemented in the solver? by using Lagrange > multiplier? You have to have an explicit basis for the null space in order for PETSc to use it. Matt > best regards > Zhifeng > > Barry Smith wrote: >> >> On Sep 16, 2008, at 10:10 AM, H?kan Jakobsson wrote: >> >>> Hi, >>> >>> How does PETSc handle the solution of a singular system Ax=b which also >>> happens to be inconsistent? More specifically: is b orthogonalized against >>> Null(A) internally by PETSc? >> >> No, you are expected to do this before calling the solver on the right >> hand side. Note: if A is not symmetric then I think you want to apply the >> Null(A') >> >> MatNullSpaceCreate() and KSPSetNullSpace() are a way you can tell the >> iterative solver the null space, otherwise most iterative solvers will >> create a giant >> solution in the null space direction and not converge. >> >> Barry >> >>> >>> >>> Thanks >>> >>> H?kan Jakobsson >>> >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener From hakan.jakobsson at math.umu.se Tue Sep 16 12:48:31 2008 From: hakan.jakobsson at math.umu.se (=?ISO-8859-1?Q?H=E5kan_Jakobsson?=) Date: Tue, 16 Sep 2008 19:48:31 +0200 Subject: inconsistent singular systems In-Reply-To: References: <087A8CBA-2F97-4880-8F30-32D66AF125AE@math.umu.se> Message-ID: Thanks for the quick response. I've tried both orthogonalizing and not, having convergence in both cases and very similar solutions. Likely since b has been almost orthogonal to Null(A) already. /Håkan On Sep 16, 2008, at 6:35 PM, Barry Smith wrote: > > On Sep 16, 2008, at 10:10 AM, Håkan Jakobsson wrote: > >> Hi, >> >> How does PETSc handle the solution of a singular system Ax=b which >> also happens to be inconsistent? More specifically: is b >> orthogonalized against Null(A) internally by PETSc? > > No, you are expected to do this before calling the solver on the > right hand side. Note: if A is not symmetric then I think you want > to apply the Null(A') > > MatNullSpaceCreate() and KSPSetNullSpace() are a way you can tell > the iterative solver the null space, otherwise most iterative > solvers will create a giant > solution in the null space direction and not converge. > > Barry > >> >> >> Thanks >> >> Håkan Jakobsson >> > From sdettrick at gmail.com Tue Sep 16 12:58:20 2008 From: sdettrick at gmail.com (Sean Dettrick) Date: Tue, 16 Sep 2008 10:58:20 -0700 Subject: antisymmetric jacobian for SNES? Message-ID: <6398E0D0-B66B-4A22-9AD3-FDA8D7C31480@gmail.com> Hello, I am looking at finite differencing a non-linear problem which has the square of a gradient in it. It appears to lead to an antisymmetric Jacobian matrix. Can anyone comment on whether SNES can handle an antisymmetric Jacobian? If not, then could I just use snes_fd ? Thanks, Sean From knepley at gmail.com Tue Sep 16 13:27:19 2008 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 16 Sep 2008 13:27:19 -0500 Subject: antisymmetric jacobian for SNES?
In-Reply-To: <6398E0D0-B66B-4A22-9AD3-FDA8D7C31480@gmail.com> References: <6398E0D0-B66B-4A22-9AD3-FDA8D7C31480@gmail.com> Message-ID: On Tue, Sep 16, 2008 at 12:58 PM, Sean Dettrick wrote: > Hello, > > I am looking at finite differencing a non-linear problem which has the > square of a gradient in it. It appears to lead to an antisymmetric Jacobian > matrix. Can anyone comment on whether SNES can handle an antisymmetric > Jacobian? If not, then could I just use snes_fd ? There are no inherent symmetry limitations. Matt > Thanks, > Sean -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From cjchen at math.msu.edu Tue Sep 16 14:34:29 2008 From: cjchen at math.msu.edu (Chen, Changjun) Date: Tue, 16 Sep 2008 15:34:29 -0400 Subject: Compile PETSc with AMBER Message-ID: Dear Sir, I am Changjun Chen in Michigan State University, as a postdoc in Prof. Guowei's group in Mathematics department. Today I try to compile PETSc with AMBER (Molecular Simulation Package), but failed. When I add one line into Makefile of AMBER, like the following include ${PETSC_DIR}/bmake/common/base I got the error message: ../config.h:82: warning: overriding commands for target `.f.o' /home/cjchen/soft/petsc/bmake/linux-gnu-c-debug/petscrules:18: warning: ignoring old commands for target `.f.o' ../config.h:86: warning: overriding commands for target `.c.o' /home/cjchen/soft/petsc/bmake/common/rules:269: warning: ignoring old commands for target `.c.o' Makefile:275: warning: overriding commands for target `clean' /home/cjchen/soft/petsc/bmake/common/rules:95: warning: ignoring old commands for target `clean' It seems some compile rules in PETSc are repeated in AMBER, now no code could be compiled, even for those belong to AMBER themselves. Could you kindly give me some suggestion? Thank you! 
Sincerely, Changjun From balay at mcs.anl.gov Tue Sep 16 14:37:46 2008 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 16 Sep 2008 14:37:46 -0500 (CDT) Subject: Compile PETSc with AMBER In-Reply-To: References: Message-ID: You can try using the following to avoid duplicate targets in PETSc and AMBER. The previous AMBER targets should continue to work after this change. include ${PETSC_DIR}/bmake/common/variables Satish On Tue, 16 Sep 2008, Chen, Changjun wrote: > Dear Sir, > I am Changjun Chen in Michigan State University, as a postdoc in Prof. Guowei's group in Mathematics department. > Today I try to compile PETSc with AMBER (Molecular Simulation Package), but failed. When I add one line into Makefile of AMBER, like the following > > include ${PETSC_DIR}/bmake/common/base > > I got the error message: > > ../config.h:82: warning: overriding commands for target `.f.o' > /home/cjchen/soft/petsc/bmake/linux-gnu-c-debug/petscrules:18: warning: ignoring old commands for target `.f.o' > ../config.h:86: warning: overriding commands for target `.c.o' > /home/cjchen/soft/petsc/bmake/common/rules:269: warning: ignoring old commands for target `.c.o' > Makefile:275: warning: overriding commands for target `clean' > /home/cjchen/soft/petsc/bmake/common/rules:95: warning: ignoring old commands for target `clean' > > It seems some compile rules in PETSc are repeated in AMBER, now no code could be compiled, even for those belong to AMBER themselves. > Could you kindly give me some suggestion? > Thank you! > > Sincerely, > Changjun > > From hzhang at mcs.anl.gov Tue Sep 16 22:06:27 2008 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 16 Sep 2008 22:06:27 -0500 (CDT) Subject: About SNES solver of TS objects In-Reply-To: References: Message-ID: TSGetSNES() must be called after TSSetType(), otherwise the nonlinear context 'snes' is created prematurely. Attached please find updated ex7.c, which illustrates how to set a user-defined SNESMonitor(). This example is pushed to petsc-dev.
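In outline, with the calls from the original report, the corrected order is as follows (a sketch only, not a compilable program; declarations and the surrounding setup are omitted, and the monitor arguments are taken as-is from the report):

```c
ierr = TSCreate(PETSC_COMM_WORLD,&ts);CHKERRQ(ierr);
ierr = TSSetProblemType(ts,TS_NONLINEAR);CHKERRQ(ierr);
ierr = TSSetRHSFunction(ts,FormFunction,&appctx);CHKERRQ(ierr);
ierr = TSSetInitialTimeStep(ts,0.0,dt);CHKERRQ(ierr);
ierr = TSSetType(ts,TS_BEULER);CHKERRQ(ierr);   /* set the TS type first...   */

ierr = TSGetSNES(ts,&ts_snes);CHKERRQ(ierr);    /* ...then fetch its SNES     */
ierr = SNESMonitorSet(ts_snes,SNESMonitorDefault,&rviewer,PETSC_NULL);
CHKERRQ(ierr);
```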
I also added an error flag in petsc-dev to prevent calling of TSGetSNES() before TSSetType(). Thanks for reporting the problem! Best, Hong On Fri, 12 Sep 2008, tsjb00 wrote: > > Many thanks for your reply! The code I tried was very similar to the code in: > ts/examples/tutorials/ex7.c > except that I changed the r.h.s. function. > > The code related to TS: > ierr = TSCreate(PETSC_COMM_WORLD,&ts);CHKERRQ(ierr); > ierr = TSSetProblemType(ts,TS_NONLINEAR);CHKERRQ(ierr); > ierr = TSSetRHSFunction(ts,FormFunction,&appctx);CHKERRQ(ierr); > > ierr = DAGetColoring(appctx.da,IS_COLORING_GLOBAL,&iscoloring);CHKERRQ(ierr); > ierr = MatFDColoringCreate(J,iscoloring,&matfdcoloring);CHKERRQ(ierr); > ierr = ISColoringDestroy(iscoloring);CHKERRQ(ierr); > ierr = MatFDColoringSetFunction(matfdcoloring,(PetscErrorCode (*)(void))FormFunction,&appctx);CHKERRQ(ierr); > ierr = MatFDColoringSetFromOptions(matfdcoloring);CHKERRQ(ierr); > ierr = TSSetRHSJacobian(ts,J,J,TSDefaultComputeJacobianColor,matfdcoloring);CHKERRQ(ierr); > > dt = appctx.dtmin; > ierr = TSSetInitialTimeStep(ts,0.0,dt);CHKERRQ(ierr); > ierr = TSSetType(ts,TS_BEULER);CHKERRQ(ierr); > > Then I tried to add the snes monitor: > PetscViewerASCIIOpen(PETSC_COMM_WORLD,"snes.log",&rviewer); > ierr = TSGetSNES(ts,&ts_snes); > ierr = SNESMonitorSet(ts_snes,SNESMonitorDefault,&rviewer,PETSC_NULL); > I got the snes.log with long lines which started with countless blank/space and ended at the very end with the SNES iteration info. As a result, the file took a lot of memory space even with one line of outputs. > > Please let me know if I did something wrong. > > Have a nice weekend! > > > _________________________________________________________________ > ????????MSN?????????????????????????????????????????????????? > http://mobile.msn.com.cn/ > > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ex7.c Type: text/x-csrc Size: 11109 bytes Desc: ex7.c URL: From tsjb00 at hotmail.com Wed Sep 17 10:31:08 2008 From: tsjb00 at hotmail.com (tsjb00) Date: Wed, 17 Sep 2008 15:31:08 +0000 Subject: About SNES solver of TS objects Message-ID: Many thanks for your help! Best wishes for you! Bei _________________________________________________________________ ?????????????????????????? http://im.live.cn/Share/18.htm From Amit.Itagi at seagate.com Wed Sep 17 09:59:00 2008 From: Amit.Itagi at seagate.com (Amit.Itagi at seagate.com) Date: Wed, 17 Sep 2008 10:59:00 -0400 Subject: DA output question Message-ID: Hi, I have a 3D finite difference grid modeled using a DA. I use VecView to write out the entire grid data to a file. What is the best way to just write out a 2D slice of the 3D data ? Thanks Rgds, Amit From bsmith at mcs.anl.gov Wed Sep 17 12:46:56 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 17 Sep 2008 12:46:56 -0500 Subject: DA output question In-Reply-To: References: Message-ID: <14B25B34-1698-4910-BE23-7CC9649D2B64@mcs.anl.gov> There are not much in the way of tools to do this. The function DAGetProcessorSubset() might be useful as a starting point. Barry On Sep 17, 2008, at 9:59 AM, Amit.Itagi at seagate.com wrote: > > Hi, > > I have a 3D finite difference grid modeled using a DA. I use VecView > to > write out the entire grid data to a file. What is the best way to just > write out a 2D slice of the 3D data ? > > Thanks > > Rgds, > Amit > From recrusader at gmail.com Wed Sep 17 17:59:55 2008 From: recrusader at gmail.com (Yujie) Date: Wed, 17 Sep 2008 15:59:55 -0700 Subject: Petsc and Slepc with multiple process groups Message-ID: <7ff0ee010809171559i4771680bq228ed04d7f827fd3@mail.gmail.com> Hi, Petsc Developer: Currently, I am using Slepc for my application. It is based on Petsc. Assuming I have a cluster with N nodes. 
My codes are like main() { step 1: Initialize Petsc and Slepc; step 2: Use Petsc; (use all N nodes in one process group) step 3: Use Slepc; (N nodes is divided into M process groups. these groups are indepedent. However, they need to communicate with each other) step 4: Use Petsc; (use all N nodes in one process group) } My method is: when using Slepc, MPI_Comm_split() is used to divide N nodes into M process groups which means to generate M communication domains. Then, MPI_Intercomm_create() creates inter-group communication domain to process the communication between different M process groups. I don't know whether this method is ok regarding Petsc and Slepc. Because Slepc is developed based on Petsc. In Step 1, Petsc and Slepc is initialized with all N nodes in a communication domain. Petsc in Step 2 uses this communication domain. However, in Step 3, I need to divide all N nodes and generate M communication domains. I don't know how Petsc and Slepc can process this change? If the method doesn't work, could you give me some advice? thanks a lot. Regards, Yujie -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlmackie862 at gmail.com Wed Sep 17 18:03:57 2008 From: rlmackie862 at gmail.com (Randall Mackie) Date: Wed, 17 Sep 2008 16:03:57 -0700 Subject: DA output question In-Reply-To: <14B25B34-1698-4910-BE23-7CC9649D2B64@mcs.anl.gov> References: <14B25B34-1698-4910-BE23-7CC9649D2B64@mcs.anl.gov> Message-ID: <48D18CDD.5040303@gmail.com> I simply scatter the DA vector to a natural vector on the 0th processor and then output the grid, slice by slice, for example. Randy Barry Smith wrote: > > There are not much in the way of tools to do this. The function > DAGetProcessorSubset() might > be useful as a starting point. > > Barry > > On Sep 17, 2008, at 9:59 AM, Amit.Itagi at seagate.com wrote: > >> >> Hi, >> >> I have a 3D finite difference grid modeled using a DA. I use VecView to >> write out the entire grid data to a file. 
What is the best way to just >> write out a 2D slice of the 3D data ? >> >> Thanks >> >> Rgds, >> Amit >> > From dalcinl at gmail.com Wed Sep 17 18:08:32 2008 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Wed, 17 Sep 2008 20:08:32 -0300 Subject: Petsc and Slepc with multiple process groups In-Reply-To: <7ff0ee010809171559i4771680bq228ed04d7f827fd3@mail.gmail.com> References: <7ff0ee010809171559i4771680bq228ed04d7f827fd3@mail.gmail.com> Message-ID: I bet you have not even tried to actually implent and run this :-). This should work. If not, I would consider that a bug. Let us know of any problem you have. On Wed, Sep 17, 2008 at 7:59 PM, Yujie wrote: > Hi, Petsc Developer: > > Currently, I am using Slepc for my application. It is based on Petsc. > > Assuming I have a cluster with N nodes. > > My codes are like > > main() > > { > > step 1: Initialize Petsc and Slepc; > > step 2: Use Petsc; (use all N nodes in one process group) > > step 3: Use Slepc; (N nodes is divided into M process groups. these groups > are indepedent. However, they need to communicate with each other) > > step 4: Use Petsc; (use all N nodes in one process group) > > } > > My method is: > > when using Slepc, MPI_Comm_split() is used to divide N nodes into M process > groups which means to generate M communication domains. Then, > MPI_Intercomm_create() creates inter-group communication domain to process > the communication between different M process groups. > > I don't know whether this method is ok regarding Petsc and Slepc. Because > Slepc is developed based on Petsc. In Step 1, Petsc and Slepc is initialized > with all N nodes in a communication domain. Petsc in Step 2 uses this > communication domain. However, in Step 3, I need to divide all N nodes and > generate M communication domains. I don't know how Petsc and Slepc can > process this change? If the method doesn't work, could you give me some > advice? thanks a lot. 
> > Regards, > > Yujie -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From recrusader at gmail.com Wed Sep 17 18:25:41 2008 From: recrusader at gmail.com (Yujie) Date: Wed, 17 Sep 2008 16:25:41 -0700 Subject: Petsc and Slepc with multiple process groups In-Reply-To: References: <7ff0ee010809171559i4771680bq228ed04d7f827fd3@mail.gmail.com> Message-ID: <7ff0ee010809171625o235da9d7ha6ca6013314a862e@mail.gmail.com> You are right :). I am thinking the whole framework for my codes. thank you, Lisandro. In Step 3, there are different M slepc-based process groups, which should mean M communication domains for Petsc and Slepc (I have created a communication domain for them). Is it ok? thanks again. Regards, Yujie On Wed, Sep 17, 2008 at 4:08 PM, Lisandro Dalcin wrote: > I bet you have not even tried to actually implent and run this :-). > > This should work. If not, I would consider that a bug. Let us know of > any problem you have. > > > On Wed, Sep 17, 2008 at 7:59 PM, Yujie wrote: > > Hi, Petsc Developer: > > > > Currently, I am using Slepc for my application. It is based on Petsc. > > > > Assuming I have a cluster with N nodes. > > > > My codes are like > > > > main() > > > > { > > > > step 1: Initialize Petsc and Slepc; > > > > step 2: Use Petsc; (use all N nodes in one process group) > > > > step 3: Use Slepc; (N nodes is divided into M process groups. these > groups > > are indepedent. However, they need to communicate with each other) > > > > step 4: Use Petsc; (use all N nodes in one process group) > > > > } > > > > My method is: > > > > when using Slepc, MPI_Comm_split() is used to divide N nodes into M > process > > groups which means to generate M communication domains.
Then, > > MPI_Intercomm_create() creates inter-group communication domain to > process > > the communication between different M process groups. > > > > I don't know whether this method is ok regarding Petsc and Slepc. Because > > Slepc is developed based on Petsc. In Step 1, Petsc and Slepc is > initialized > > with all N nodes in a communication domain. Petsc in Step 2 uses this > > communication domain. However, in Step 3, I need to divide all N nodes > and > > generate M communication domains. I don't know how Petsc and Slepc can > > process this change? If the method doesn't work, could you give me some > > advice? thanks a lot. > > > > Regards, > > > > Yujie > > > > -- > Lisandro Dalc?n > --------------- > Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) > Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) > Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) > PTLC - G?emes 3450, (3000) Santa Fe, Argentina > Tel/Fax: +54-(0)342-451.1594 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalcinl at gmail.com Wed Sep 17 19:05:31 2008 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Wed, 17 Sep 2008 21:05:31 -0300 Subject: Petsc and Slepc with multiple process groups In-Reply-To: <7ff0ee010809171625o235da9d7ha6ca6013314a862e@mail.gmail.com> References: <7ff0ee010809171559i4771680bq228ed04d7f827fd3@mail.gmail.com> <7ff0ee010809171625o235da9d7ha6ca6013314a862e@mail.gmail.com> Message-ID: A long as you create your SLEPc objects with the appropriate communicator (ie. the one obtained with MPI_Comm_split), then all should just work. Of course, you will have to make appropriate MPI calls to 'transfer' data from your N group to the many M groups, and the other way to collect results. On Wed, Sep 17, 2008 at 8:25 PM, Yujie wrote: > You are right :). I am thinking the whole framwork for my codes. thank you, > Lisandro. 
In Step 3, there are different M slepc-based process groups, which > should mean M communication domain for Petsc and Slepc (I have created a > communication domain for them) is it ok? thanks again. > > Regards, > > Yujie > > On Wed, Sep 17, 2008 at 4:08 PM, Lisandro Dalcin wrote: >> >> I bet you have not even tried to actually implent and run this :-). >> >> This should work. If not, I would consider that a bug. Let us know of >> any problem you have. >> >> >> On Wed, Sep 17, 2008 at 7:59 PM, Yujie wrote: >> > Hi, Petsc Developer: >> > >> > Currently, I am using Slepc for my application. It is based on Petsc. >> > >> > Assuming I have a cluster with N nodes. >> > >> > My codes are like >> > >> > main() >> > >> > { >> > >> > step 1: Initialize Petsc and Slepc; >> > >> > step 2: Use Petsc; (use all N nodes in one process group) >> > >> > step 3: Use Slepc; (N nodes is divided into M process groups. these >> > groups >> > are indepedent. However, they need to communicate with each other) >> > >> > step 4: Use Petsc; (use all N nodes in one process group) >> > >> > } >> > >> > My method is: >> > >> > when using Slepc, MPI_Comm_split() is used to divide N nodes into M >> > process >> > groups which means to generate M communication domains. Then, >> > MPI_Intercomm_create() creates inter-group communication domain to >> > process >> > the communication between different M process groups. >> > >> > I don't know whether this method is ok regarding Petsc and Slepc. >> > Because >> > Slepc is developed based on Petsc. In Step 1, Petsc and Slepc is >> > initialized >> > with all N nodes in a communication domain. Petsc in Step 2 uses this >> > communication domain. However, in Step 3, I need to divide all N nodes >> > and >> > generate M communication domains. I don't know how Petsc and Slepc can >> > process this change? If the method doesn't work, could you give me some >> > advice? thanks a lot. 
>> > >> > Regards, >> > >> > Yujie >> >> >> >> -- >> Lisandro Dalc?n >> --------------- >> Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) >> Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) >> Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) >> PTLC - G?emes 3450, (3000) Santa Fe, Argentina >> Tel/Fax: +54-(0)342-451.1594 >> > > -- Lisandro Dalc?n --------------- Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) PTLC - G?emes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From recrusader at gmail.com Wed Sep 17 19:22:17 2008 From: recrusader at gmail.com (Yujie) Date: Wed, 17 Sep 2008 17:22:17 -0700 Subject: Petsc and Slepc with multiple process groups In-Reply-To: References: <7ff0ee010809171559i4771680bq228ed04d7f827fd3@mail.gmail.com> <7ff0ee010809171625o235da9d7ha6ca6013314a862e@mail.gmail.com> Message-ID: <7ff0ee010809171722n74ba3830h7935014f7283eae5@mail.gmail.com> Thank you very much, Lisandro. You are right. It look like a little difficult to "transfer" data from one node to "N" nodes or from N nodes to M nodes. My method is to first send all the data in a node and to redistribute it in "N" or "M" nodes. do you have any idea about it? is it time-consuming? In Petsc, how to support such type of operations? thanks a lot. Regards, Yujie On Wed, Sep 17, 2008 at 5:05 PM, Lisandro Dalcin wrote: > A long as you create your SLEPc objects with the appropriate > communicator (ie. the one obtained with MPI_Comm_split), then all > should just work. Of course, you will have to make appropriate MPI > calls to 'transfer' data from your N group to the many M groups, and > the other way to collect results. > > > On Wed, Sep 17, 2008 at 8:25 PM, Yujie wrote: > > You are right :). I am thinking the whole framwork for my codes. 
thank > you, > > Lisandro. In Step 3, there are different M slepc-based process groups, > which > > should mean M communication domain for Petsc and Slepc (I have created a > > communication domain for them) is it ok? thanks again. > > > > Regards, > > > > Yujie > > > > On Wed, Sep 17, 2008 at 4:08 PM, Lisandro Dalcin > wrote: > >> > >> I bet you have not even tried to actually implent and run this :-). > >> > >> This should work. If not, I would consider that a bug. Let us know of > >> any problem you have. > >> > >> > >> On Wed, Sep 17, 2008 at 7:59 PM, Yujie wrote: > >> > Hi, Petsc Developer: > >> > > >> > Currently, I am using Slepc for my application. It is based on Petsc. > >> > > >> > Assuming I have a cluster with N nodes. > >> > > >> > My codes are like > >> > > >> > main() > >> > > >> > { > >> > > >> > step 1: Initialize Petsc and Slepc; > >> > > >> > step 2: Use Petsc; (use all N nodes in one process group) > >> > > >> > step 3: Use Slepc; (N nodes is divided into M process groups. these > >> > groups > >> > are indepedent. However, they need to communicate with each other) > >> > > >> > step 4: Use Petsc; (use all N nodes in one process group) > >> > > >> > } > >> > > >> > My method is: > >> > > >> > when using Slepc, MPI_Comm_split() is used to divide N nodes into M > >> > process > >> > groups which means to generate M communication domains. Then, > >> > MPI_Intercomm_create() creates inter-group communication domain to > >> > process > >> > the communication between different M process groups. > >> > > >> > I don't know whether this method is ok regarding Petsc and Slepc. > >> > Because > >> > Slepc is developed based on Petsc. In Step 1, Petsc and Slepc is > >> > initialized > >> > with all N nodes in a communication domain. Petsc in Step 2 uses this > >> > communication domain. However, in Step 3, I need to divide all N nodes > >> > and > >> > generate M communication domains. I don't know how Petsc and Slepc can > >> > process this change? 
If the method doesn't work, could you give me > some > >> > advice? thanks a lot. > >> > > >> > Regards, > >> > > >> > Yujie > >> > >> > >> > >> -- > >> Lisandro Dalc?n > >> --------------- > >> Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) > >> Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) > >> Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) > >> PTLC - G?emes 3450, (3000) Santa Fe, Argentina > >> Tel/Fax: +54-(0)342-451.1594 > >> > > > > > > > > -- > Lisandro Dalc?n > --------------- > Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) > Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) > Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) > PTLC - G?emes 3450, (3000) Santa Fe, Argentina > Tel/Fax: +54-(0)342-451.1594 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lua.byhh at gmail.com Thu Sep 18 00:07:50 2008 From: lua.byhh at gmail.com (Shengyong) Date: Thu, 18 Sep 2008 13:07:50 +0800 Subject: OpenFVM In-Reply-To: <20080916122643.4A36E348009@mailgw.mcs.anl.gov> References: <20080916122643.4A36E348009@mailgw.mcs.anl.gov> Message-ID: Hi Haudin, I have viewed the source and found no PETSc implementation. Pang On Tue, Sep 16, 2008 at 8:26 PM, Jean-Marc Haudin wrote: > > > > OpenFVM is a general CFD solver released under the GPL license. It was > developed to simulate the flow in complex 3D geometries. > Therefore, the mesh can be unstructured and contain control volumes with > arbritrary shape. The code uses the finite volume method to > evaluate the partial differential equations. As well as solving the > velocity and pressure fields, the code is capable of solving > non-isothermal multiphase flow. > > The code has two implementations: serial and parallel. The serial version > uses LASPACK as the linear matrix solver and the parallel one > uses the PETSc library. 
Both implementations use the open source tool Gmsh > for pre- and post-processing. > > http://openfvm.sourceforge.net/ > > > > > > > > > ------ > Sent with X-Mailer .... http://sourceforge.net/projects/x-mailer/ > > -- Pang Shengyong Solidification Simulation Lab, State Key Lab of Mould & Die Technology, Huazhong Univ. of Sci. & Tech. China -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalcinl at gmail.com Thu Sep 18 08:44:41 2008 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Thu, 18 Sep 2008 10:44:41 -0300 Subject: Petsc and Slepc with multiple process groups In-Reply-To: <7ff0ee010809171722n74ba3830h7935014f7283eae5@mail.gmail.com> References: <7ff0ee010809171559i4771680bq228ed04d7f827fd3@mail.gmail.com> <7ff0ee010809171625o235da9d7ha6ca6013314a862e@mail.gmail.com> <7ff0ee010809171722n74ba3830h7935014f7283eae5@mail.gmail.com> Message-ID: On Wed, Sep 17, 2008 at 9:22 PM, Yujie wrote: > Thank you very much, Lisandro. You are right. It looks a little > difficult to "transfer" data from one node to "N" nodes or from N nodes to M > nodes. My method is to first send all the data to one node and then redistribute > it to the "N" or "M" nodes. Do you have any idea about it? Is it time-consuming? > How does Petsc support this type of operation? Thanks a lot. Mmm.. I believe there is no way to do that with PETSc. You just have to make MPI calls. Perhaps if you can give me a bit more detail about your communication patterns, then I can give you a good suggestion. > > Regards, > > Yujie > > On Wed, Sep 17, 2008 at 5:05 PM, Lisandro Dalcin wrote: >> >> As long as you create your SLEPc objects with the appropriate >> communicator (i.e. the one obtained with MPI_Comm_split), then all >> should just work. Of course, you will have to make the appropriate MPI >> calls to 'transfer' data from your N group to the many M groups, and >> the other way around to collect results.
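[Editor's sketch of the MPI_Comm_split advice quoted above. The key decision each rank makes is its `color` argument, which selects the subgroup; the snippet below illustrates one common mapping (contiguous blocks of N/M ranks per group) in plain Python. The function name `split_colors` is made up for illustration; real code would pass `color` and `key` to MPI_Comm_split from C or Fortran.]

```python
def split_colors(n_ranks, n_groups):
    """Mimic the (color, key) pair each rank would pass to MPI_Comm_split
    when N ranks are divided into M contiguous groups of equal size."""
    block = n_ranks // n_groups  # assumes n_groups evenly divides n_ranks
    result = []
    for rank in range(n_ranks):
        color = rank // block    # which subgroup this rank joins
        key = rank % block       # this rank's order inside the subgroup
        result.append((rank, color, key))
    return result

# 8 ranks split into 4 groups of 2:
for rank, color, key in split_colors(8, 4):
    print(f"rank {rank} -> group {color}, local rank {key}")
```

Ranks with the same `color` end up in the same subcommunicator, and `key` fixes their rank ordering within it; the SLEPc objects for Step 3 would then be created on that subcommunicator.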
>> >> >> On Wed, Sep 17, 2008 at 8:25 PM, Yujie wrote: >> > You are right :). I am thinking the whole framwork for my codes. thank >> > you, >> > Lisandro. In Step 3, there are different M slepc-based process groups, >> > which >> > should mean M communication domain for Petsc and Slepc (I have created a >> > communication domain for them) is it ok? thanks again. >> > >> > Regards, >> > >> > Yujie >> > >> > On Wed, Sep 17, 2008 at 4:08 PM, Lisandro Dalcin >> > wrote: >> >> >> >> I bet you have not even tried to actually implent and run this :-). >> >> >> >> This should work. If not, I would consider that a bug. Let us know of >> >> any problem you have. >> >> >> >> >> >> On Wed, Sep 17, 2008 at 7:59 PM, Yujie wrote: >> >> > Hi, Petsc Developer: >> >> > >> >> > Currently, I am using Slepc for my application. It is based on Petsc. >> >> > >> >> > Assuming I have a cluster with N nodes. >> >> > >> >> > My codes are like >> >> > >> >> > main() >> >> > >> >> > { >> >> > >> >> > step 1: Initialize Petsc and Slepc; >> >> > >> >> > step 2: Use Petsc; (use all N nodes in one process group) >> >> > >> >> > step 3: Use Slepc; (N nodes is divided into M process groups. these >> >> > groups >> >> > are indepedent. However, they need to communicate with each other) >> >> > >> >> > step 4: Use Petsc; (use all N nodes in one process group) >> >> > >> >> > } >> >> > >> >> > My method is: >> >> > >> >> > when using Slepc, MPI_Comm_split() is used to divide N nodes into M >> >> > process >> >> > groups which means to generate M communication domains. Then, >> >> > MPI_Intercomm_create() creates inter-group communication domain to >> >> > process >> >> > the communication between different M process groups. >> >> > >> >> > I don't know whether this method is ok regarding Petsc and Slepc. >> >> > Because >> >> > Slepc is developed based on Petsc. In Step 1, Petsc and Slepc is >> >> > initialized >> >> > with all N nodes in a communication domain. 
Petsc in Step 2 uses this >> >> > communication domain. However, in Step 3, I need to divide all N >> >> > nodes >> >> > and >> >> > generate M communication domains. I don't know how Petsc and Slepc >> >> > can >> >> > handle this change. If the method doesn't work, could you give me >> >> > some >> >> > advice? Thanks a lot. >> >> > >> >> > Regards, >> >> > >> >> > Yujie >> >> >> >> >> >> -- >> >> Lisandro Dalcín >> >> --------------- >> >> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) >> >> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) >> >> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) >> >> PTLC - Güemes 3450, (3000) Santa Fe, Argentina >> >> Tel/Fax: +54-(0)342-451.1594 >> >> >> > >> > >> >> >> >> -- >> Lisandro Dalcín >> --------------- >> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) >> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) >> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) >> PTLC - Güemes 3450, (3000) Santa Fe, Argentina >> Tel/Fax: +54-(0)342-451.1594 >> > > -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From Amit.Itagi at seagate.com Thu Sep 18 08:29:49 2008 From: Amit.Itagi at seagate.com (Amit.Itagi at seagate.com) Date: Thu, 18 Sep 2008 09:29:49 -0400 Subject: DA output question In-Reply-To: <48D18CDD.5040303@gmail.com> Message-ID: Randy, In the general case, I may not have enough memory to scatter the entire grid to the 0th processor. I guess I will have to use the DA information to scatter just the relevant portions.
Thanks Rgds, Amit Randall Mackie wrote (09/17/2008 07:03 PM, Re: DA output question, via petsc-users at mcs.anl.gov): I simply scatter the DA vector to a natural vector on the 0th processor and then output the grid, slice by slice, for example. Randy Barry Smith wrote: > > There is not much in the way of tools to do this. The function > DAGetProcessorSubset() might > be useful as a starting point. > > Barry > > On Sep 17, 2008, at 9:59 AM, Amit.Itagi at seagate.com wrote: > >> >> Hi, >> >> I have a 3D finite difference grid modeled using a DA. I use VecView to >> write out the entire grid data to a file. What is the best way to just >> write out a 2D slice of the 3D data ? >> >> Thanks >> >> Rgds, >> Amit >> > From hzhang at mcs.anl.gov Thu Sep 18 09:19:04 2008 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Thu, 18 Sep 2008 09:19:04 -0500 (CDT) Subject: Petsc and Slepc with multiple process groups In-Reply-To: References: <7ff0ee010809171559i4771680bq228ed04d7f827fd3@mail.gmail.com> <7ff0ee010809171625o235da9d7ha6ca6013314a862e@mail.gmail.com> <7ff0ee010809171722n74ba3830h7935014f7283eae5@mail.gmail.com> Message-ID: Yujie, See the data structure "PetscSubcomm" in ~petsc/include/petscsys.h. An example of its use is PCREDUNDANT (see src/ksp/pc/impls/redundant/redundant.c), for which we first split the parent communicator with N processors into n subcommunicators for the parallel LU preconditioner, then scatter the solution from the subcommunicators back to the parent communicator. Note, the scatter used there is specific to our particular application; you will likely need to implement your own scattering according to your needs. Hong On Thu, 18 Sep 2008, Lisandro Dalcin wrote: > On Wed, Sep 17, 2008 at 9:22 PM, Yujie wrote: >> Thank you very much, Lisandro. You are right.
It look like a little >> difficult to "transfer" data from one node to "N" nodes or from N nodes to M >> nodes. My method is to first send all the data in a node and to redistribute >> it in "N" or "M" nodes. do you have any idea about it? is it time-consuming? >> In Petsc, how to support such type of operations? thanks a lot. > > Mmm.. I believe there is not way to do that with PETSc. You just have > to make MPI calls. Perhaps if you can give me a bit more of details > about your communication patters, then I can give you a good > suggestion. > >> >> Regards, >> >> Yujie >> >> On Wed, Sep 17, 2008 at 5:05 PM, Lisandro Dalcin wrote: >>> >>> A long as you create your SLEPc objects with the appropriate >>> communicator (ie. the one obtained with MPI_Comm_split), then all >>> should just work. Of course, you will have to make appropriate MPI >>> calls to 'transfer' data from your N group to the many M groups, and >>> the other way to collect results. >>> >>> >>> On Wed, Sep 17, 2008 at 8:25 PM, Yujie wrote: >>>> You are right :). I am thinking the whole framwork for my codes. thank >>>> you, >>>> Lisandro. In Step 3, there are different M slepc-based process groups, >>>> which >>>> should mean M communication domain for Petsc and Slepc (I have created a >>>> communication domain for them) is it ok? thanks again. >>>> >>>> Regards, >>>> >>>> Yujie >>>> >>>> On Wed, Sep 17, 2008 at 4:08 PM, Lisandro Dalcin >>>> wrote: >>>>> >>>>> I bet you have not even tried to actually implent and run this :-). >>>>> >>>>> This should work. If not, I would consider that a bug. Let us know of >>>>> any problem you have. >>>>> >>>>> >>>>> On Wed, Sep 17, 2008 at 7:59 PM, Yujie wrote: >>>>>> Hi, Petsc Developer: >>>>>> >>>>>> Currently, I am using Slepc for my application. It is based on Petsc. >>>>>> >>>>>> Assuming I have a cluster with N nodes. 
>>>>>> >>>>>> My codes are like >>>>>> >>>>>> main() >>>>>> >>>>>> { >>>>>> >>>>>> step 1: Initialize Petsc and Slepc; >>>>>> >>>>>> step 2: Use Petsc; (use all N nodes in one process group) >>>>>> >>>>>> step 3: Use Slepc; (N nodes is divided into M process groups. these >>>>>> groups >>>>>> are indepedent. However, they need to communicate with each other) >>>>>> >>>>>> step 4: Use Petsc; (use all N nodes in one process group) >>>>>> >>>>>> } >>>>>> >>>>>> My method is: >>>>>> >>>>>> when using Slepc, MPI_Comm_split() is used to divide N nodes into M >>>>>> process >>>>>> groups which means to generate M communication domains. Then, >>>>>> MPI_Intercomm_create() creates inter-group communication domain to >>>>>> process >>>>>> the communication between different M process groups. >>>>>> >>>>>> I don't know whether this method is ok regarding Petsc and Slepc. >>>>>> Because >>>>>> Slepc is developed based on Petsc. In Step 1, Petsc and Slepc is >>>>>> initialized >>>>>> with all N nodes in a communication domain. Petsc in Step 2 uses this >>>>>> communication domain. However, in Step 3, I need to divide all N >>>>>> nodes >>>>>> and >>>>>> generate M communication domains. I don't know how Petsc and Slepc >>>>>> can >>>>>> process this change? If the method doesn't work, could you give me >>>>>> some >>>>>> advice? thanks a lot. 
>>>>>> >>>>>> Regards, >>>>>> >>>>>> Yujie >>>>> >>>>> >>>>> >>>>> -- >>>>> Lisandro Dalc?n >>>>> --------------- >>>>> Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) >>>>> Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) >>>>> Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) >>>>> PTLC - G?emes 3450, (3000) Santa Fe, Argentina >>>>> Tel/Fax: +54-(0)342-451.1594 >>>>> >>>> >>>> >>> >>> >>> >>> -- >>> Lisandro Dalc?n >>> --------------- >>> Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) >>> Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) >>> Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) >>> PTLC - G?emes 3450, (3000) Santa Fe, Argentina >>> Tel/Fax: +54-(0)342-451.1594 >>> >> >> > > > > -- > Lisandro Dalc?n > --------------- > Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) > Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) > Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) > PTLC - G?emes 3450, (3000) Santa Fe, Argentina > Tel/Fax: +54-(0)342-451.1594 > > From Hung.V.Nguyen at usace.army.mil Thu Sep 18 13:31:05 2008 From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS) Date: Thu, 18 Sep 2008 13:31:05 -0500 Subject: Petsc solver question Message-ID: All, I have a test code that reads matrix in CSR format, rhs and provided solution. I run this code on Cray XT3 system with 1, 2, 4, 8 and 16 cores. The timing spent on KSP are 53.50, 213.69, 110.43, 66.90, and 35.60 secs for 1, 2, 4, 8, and 16 cores; respectively. Why is a running time on 1 core smaller than others (2, 4, 8 cores)? Does re-ordering of matrix help? Did I do something wrong in the code? Thanks for your help. Note: the matrix, rhs, provided and computed solution are written to matlab files via petsc functions. It seems to me that we got the right solution (using matlab to verify). 
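[Editor's note: the KSP timings quoted in this message reduce to speedup and parallel-efficiency figures, which makes the anomaly explicit. A quick check in Python, using only the numbers reported above:]

```python
# KSP solve times (seconds) reported for 1, 2, 4, 8, and 16 cores
times = {1: 53.50, 2: 213.69, 4: 110.43, 8: 66.90, 16: 35.60}
t1 = times[1]  # single-core baseline

for p, tp in sorted(times.items()):
    speedup = t1 / tp          # ideal would be p
    efficiency = speedup / p   # ideal would be 1.0
    print(f"{p:2d} cores: speedup {speedup:5.2f}, efficiency {efficiency:5.2f}")
```

The 2-core run is roughly a 4x slowdown relative to 1 core (speedup ~0.25), and even 16 cores only reach a speedup of ~1.5; as the reply below notes, the iteration count also grows sharply with the process count, which points at the preconditioner rather than the parallel overhead alone.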
Then, Here is some info about matrix A: hvnguyen:sapphire01% head matrix.m % Size = 59409 59409 % Nonzeros = 1113875 zzz = zeros(1113875,3); >> cond (Mat_0) Warning: Using CONDEST instead of COND for sparse matrix. > In cond at 28 ans = 3.9310e+09 -- 1 pes: hvnguyen:sapphire09% yod -np 1 ./test_matrix -ksp_type cg -pc_type bjacobi -ksp_rtol 1.0e-15 -ksp_max_it 50000 yod: -sz is 1, so -proc is reset from VN to 0 reading nlrn ... reading lrn ... reading cmatrix and rld ... Number of iterations and Time in PETSc solver = 2603 53.50315904617310 secs 2 norm of the error from the provided & computed solution = 3.2424850631419981E-008 Maximum error from the provided & computed solution (infinity norm) = 6.1342397827957029E-010 1 norm of the error from the provided & computed solution 4.8108500117322262E-006 -- 2 pes: hvnguyen:sapphire09% yod -np 2 ./test_matrix -ksp_type cg -pc_type bjacobi -ksp_rtol 1.0e-15 -ksp_max_it 50000 reading nlrn ... reading lrn ... reading cmatrix and rld ... Number of iterations and Time in PETSc solver = 17934 213.6974561214447 secs 2 norm of the error from the provided & computed solution = 2.4050101709163445E-008 Maximum error from the provided & computed solution (infinity norm) = 4.5970760531588439E-010 1 norm of the error from the provided & computed solution 3.0008169642927509E-006 FORTRAN STOP -- 4 peshvnguyen:sapphire09% yod -np 4 ./test_matrix -ksp_type cg -pc_type bjacobi -ksp_rtol 1.0e-15 -ksp_max_it 50000 reading nlrn ... reading lrn ... reading cmatrix and rld ... Number of iterations and Time in PETSc solver = 19363 110.4340059757233 secs 2 norm of the error from the provided & computed solution = 1.5643187342954139E-008 Maximum error from the provided & computed solution (infinity norm) = 3.4082958677572606E-010 1 norm of the error from the provided & computed solution 2.0655232735563973E-006 --- 8 pes hvnguyen:sapphire09% yod -np 8 ./test_matrix -ksp_type cg -pc_type bjacobi -ksp_rtol 1.0e-15 -ksp_max_it 50000 reading nlrn ... 
reading lrn ... reading cmatrix and rld ... Number of iterations and Time in PETSc solver = 25953 66.90357208251953 secs 2 norm of the error from the provided & computed solution = 2.0977543976205624E-008 Maximum error from the provided & computed solution (infinity norm) = 4.0076741925076931E-010 1 norm of the error from the provided & computed solution 2.6098351291261861E-006 --- 16 pes hvnguyen:sapphire09% yod -np 16 ./test_matrix -ksp_type cg -pc_type bjacobi -ksp_rtol 1.0e-15 -ksp_max_it 50000 reading nlrn ... reading lrn ... reading cmatrix and rld ... Number of iterations and Time in PETSc solver = 25842 35.60693192481995 secs 2 norm of the error from the provided & computed solution = 1.5121379940662423E-008 Maximum error from the provided & computed solution (infinity norm) = 3.1954527912603226E-010 1 norm of the error from the provided & computed solution 2.0324863426877444E-006 -------------- next part -------------- A non-text attachment was scrubbed... Name: test_matrix.F Type: application/octet-stream Size: 5185 bytes Desc: test_matrix.F URL: From knepley at gmail.com Thu Sep 18 13:57:31 2008 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 18 Sep 2008 13:57:31 -0500 Subject: Petsc solver question In-Reply-To: References: Message-ID: On Thu, Sep 18, 2008 at 1:31 PM, Nguyen, Hung V ERDC-ITL-MS wrote: > All, > > I have a test code that reads matrix in CSR format, rhs and provided > solution. I run this code on Cray XT3 system with 1, 2, 4, 8 and 16 cores. > The timing spent on KSP are 53.50, 213.69, 110.43, 66.90, and 35.60 secs for > 1, 2, 4, 8, and 16 cores; respectively. Why is a running time on 1 core > smaller than others (2, 4, 8 cores)? Does re-ordering of matrix help? Did I > do something wrong in the code? It appears that you are using Block-Jacobi ILU(0). This is a completely different preconditioner for each different number of processes. Matt > Thanks for your help. 
> > Note: the matrix, rhs, provided and computed solution are written to matlab > files via petsc functions. It seems to me that we got the right solution > (using matlab to verify). Then, Here is some info about matrix A: > > hvnguyen:sapphire01% head matrix.m > % Size = 59409 59409 > % Nonzeros = 1113875 > zzz = zeros(1113875,3); > >>> cond (Mat_0) > Warning: Using CONDEST instead of COND for sparse matrix. >> In cond at 28 > > ans = > > 3.9310e+09 > > -- 1 pes: > hvnguyen:sapphire09% yod -np 1 ./test_matrix -ksp_type cg -pc_type bjacobi > -ksp_rtol 1.0e-15 -ksp_max_it 50000 > yod: -sz is 1, so -proc is reset from VN to 0 > reading nlrn ... > reading lrn ... > reading cmatrix and rld ... > Number of iterations and Time in PETSc solver = 2603 > 53.50315904617310 secs > 2 norm of the error from the provided & computed solution = > 3.2424850631419981E-008 > Maximum error from the provided & computed solution (infinity norm) = > 6.1342397827957029E-010 > 1 norm of the error from the provided & computed solution > 4.8108500117322262E-006 > > -- 2 pes: > hvnguyen:sapphire09% yod -np 2 ./test_matrix -ksp_type cg -pc_type bjacobi > -ksp_rtol 1.0e-15 -ksp_max_it 50000 reading nlrn ... > reading lrn ... > reading cmatrix and rld ... > Number of iterations and Time in PETSc solver = 17934 > 213.6974561214447 secs > 2 norm of the error from the provided & computed solution = > 2.4050101709163445E-008 > Maximum error from the provided & computed solution (infinity norm) = > 4.5970760531588439E-010 > 1 norm of the error from the provided & computed solution > 3.0008169642927509E-006 > FORTRAN STOP > > -- 4 peshvnguyen:sapphire09% yod -np 4 ./test_matrix -ksp_type cg -pc_type > bjacobi -ksp_rtol 1.0e-15 -ksp_max_it 50000 reading nlrn ... > reading lrn ... > reading cmatrix and rld ... 
> Number of iterations and Time in PETSc solver = 19363 > 110.4340059757233 secs > 2 norm of the error from the provided & computed solution = > 1.5643187342954139E-008 > Maximum error from the provided & computed solution (infinity norm) = > 3.4082958677572606E-010 > 1 norm of the error from the provided & computed solution > 2.0655232735563973E-006 > > --- 8 pes > hvnguyen:sapphire09% yod -np 8 ./test_matrix -ksp_type cg -pc_type bjacobi > -ksp_rtol 1.0e-15 -ksp_max_it 50000 reading nlrn ... > reading lrn ... > reading cmatrix and rld ... > Number of iterations and Time in PETSc solver = 25953 > 66.90357208251953 secs > 2 norm of the error from the provided & computed solution = > 2.0977543976205624E-008 > Maximum error from the provided & computed solution (infinity norm) = > 4.0076741925076931E-010 > 1 norm of the error from the provided & computed solution > 2.6098351291261861E-006 > > --- 16 pes > hvnguyen:sapphire09% yod -np 16 ./test_matrix -ksp_type cg -pc_type bjacobi > -ksp_rtol 1.0e-15 -ksp_max_it 50000 reading nlrn ... > reading lrn ... > reading cmatrix and rld ... > Number of iterations and Time in PETSc solver = 25842 > 35.60693192481995 secs > 2 norm of the error from the provided & computed solution = > 1.5121379940662423E-008 > Maximum error from the provided & computed solution (infinity norm) = > 3.1954527912603226E-010 > 1 norm of the error from the provided & computed solution > 2.0324863426877444E-006 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From Hung.V.Nguyen at usace.army.mil Thu Sep 18 16:51:08 2008 From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS) Date: Thu, 18 Sep 2008 16:51:08 -0500 Subject: Petsc solver question In-Reply-To: References: Message-ID: Hello Matt, Thank you for your answer. I rerun with using jacobi and got the result. 
-Hung -----Original Message----- From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Matthew Knepley Sent: Thursday, September 18, 2008 1:58 PM To: petsc-users at mcs.anl.gov Subject: Re: Petsc solver question On Thu, Sep 18, 2008 at 1:31 PM, Nguyen, Hung V ERDC-ITL-MS wrote: > All, > > I have a test code that reads matrix in CSR format, rhs and provided > solution. I run this code on Cray XT3 system with 1, 2, 4, 8 and 16 cores. > The timing spent on KSP are 53.50, 213.69, 110.43, 66.90, and 35.60 > secs for 1, 2, 4, 8, and 16 cores; respectively. Why is a running time > on 1 core smaller than others (2, 4, 8 cores)? Does re-ordering of > matrix help? Did I do something wrong in the code? It appears that you are using Block-Jacobi ILU(0). This is a completely different preconditioner for each different number of processes. Matt > Thanks for your help. > > Note: the matrix, rhs, provided and computed solution are written to > matlab files via petsc functions. It seems to me that we got the right > solution (using matlab to verify). Then, Here is some info about matrix A: > > hvnguyen:sapphire01% head matrix.m > % Size = 59409 59409 > % Nonzeros = 1113875 > zzz = zeros(1113875,3); > >>> cond (Mat_0) > Warning: Using CONDEST instead of COND for sparse matrix. >> In cond at 28 > > ans = > > 3.9310e+09 > > -- 1 pes: > hvnguyen:sapphire09% yod -np 1 ./test_matrix -ksp_type cg -pc_type > bjacobi -ksp_rtol 1.0e-15 -ksp_max_it 50000 > yod: -sz is 1, so -proc is reset from VN to 0 reading nlrn ... > reading lrn ... > reading cmatrix and rld ... 
> Number of iterations and Time in PETSc solver = 2603 > 53.50315904617310 secs > 2 norm of the error from the provided & computed solution = > 3.2424850631419981E-008 > Maximum error from the provided & computed solution (infinity norm) = > 6.1342397827957029E-010 > 1 norm of the error from the provided & computed solution > 4.8108500117322262E-006 > > -- 2 pes: > hvnguyen:sapphire09% yod -np 2 ./test_matrix -ksp_type cg -pc_type > bjacobi -ksp_rtol 1.0e-15 -ksp_max_it 50000 reading nlrn ... > reading lrn ... > reading cmatrix and rld ... > Number of iterations and Time in PETSc solver = 17934 > 213.6974561214447 secs > 2 norm of the error from the provided & computed solution = > 2.4050101709163445E-008 > Maximum error from the provided & computed solution (infinity norm) = > 4.5970760531588439E-010 > 1 norm of the error from the provided & computed solution > 3.0008169642927509E-006 > FORTRAN STOP > > -- 4 peshvnguyen:sapphire09% yod -np 4 ./test_matrix -ksp_type cg > -pc_type bjacobi -ksp_rtol 1.0e-15 -ksp_max_it 50000 reading nlrn ... > reading lrn ... > reading cmatrix and rld ... > Number of iterations and Time in PETSc solver = 19363 > 110.4340059757233 secs > 2 norm of the error from the provided & computed solution = > 1.5643187342954139E-008 > Maximum error from the provided & computed solution (infinity norm) = > 3.4082958677572606E-010 > 1 norm of the error from the provided & computed solution > 2.0655232735563973E-006 > > --- 8 pes > hvnguyen:sapphire09% yod -np 8 ./test_matrix -ksp_type cg -pc_type > bjacobi -ksp_rtol 1.0e-15 -ksp_max_it 50000 reading nlrn ... > reading lrn ... > reading cmatrix and rld ... 
> Number of iterations and Time in PETSc solver = 25953 > 66.90357208251953 secs > 2 norm of the error from the provided & computed solution = > 2.0977543976205624E-008 > Maximum error from the provided & computed solution (infinity norm) = > 4.0076741925076931E-010 > 1 norm of the error from the provided & computed solution > 2.6098351291261861E-006 > > --- 16 pes > hvnguyen:sapphire09% yod -np 16 ./test_matrix -ksp_type cg -pc_type > bjacobi -ksp_rtol 1.0e-15 -ksp_max_it 50000 reading nlrn ... > reading lrn ... > reading cmatrix and rld ... > Number of iterations and Time in PETSc solver = 25842 > 35.60693192481995 secs > 2 norm of the error from the provided & computed solution = > 1.5121379940662423E-008 > Maximum error from the provided & computed solution (infinity norm) = > 3.1954527912603226E-010 > 1 norm of the error from the provided & computed solution > 2.0324863426877444E-006 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From recrusader at gmail.com Sat Sep 20 00:33:53 2008 From: recrusader at gmail.com (Yujie) Date: Fri, 19 Sep 2008 21:33:53 -0800 Subject: Petsc and Slepc with multiple process groups In-Reply-To: References: <7ff0ee010809171559i4771680bq228ed04d7f827fd3@mail.gmail.com> <7ff0ee010809171625o235da9d7ha6ca6013314a862e@mail.gmail.com> <7ff0ee010809171722n74ba3830h7935014f7283eae5@mail.gmail.com> Message-ID: <7ff0ee010809192233h63753353na670a783393ac2d0@mail.gmail.com> Dear Lisandro: Thank you very much for your help. Our basic idea is main() { step 1: Initialize Petsc and Slepc; step 2: Use Petsc; (use all N nodes in one process group) step 3: Use Slepc; (N nodes is divided into M process groups. these groups are indepedent. 
However, they need to communicate with each other) step 4: Use Petsc; (use all N nodes in one process group) } Assuming, the dimension of the whole matrix is N*N when using all Nodes in one process group. At the end of step 2, I need to get M different matrices and vectors (I should be able to make them be stored in M single different nodes which belong to M different process group.). Before step3, I need to scatter M matrices and vectors in M different process groups. Then, I can compute based on M matrices and vectors in M subcommunication domains. After calculating, I need to collect M solution vectors back to their parent communication domain. In Step4, I use this solution to further compute. Could you give me any further advice? thanks again. Regards, Yujie On Thu, Sep 18, 2008 at 5:44 AM, Lisandro Dalcin wrote: > On Wed, Sep 17, 2008 at 9:22 PM, Yujie wrote: > > Thank you very much, Lisandro. You are right. It look like a little > > difficult to "transfer" data from one node to "N" nodes or from N nodes > to M > > nodes. My method is to first send all the data in a node and to > redistribute > > it in "N" or "M" nodes. do you have any idea about it? is it > time-consuming? > > In Petsc, how to support such type of operations? thanks a lot. > > Mmm.. I believe there is not way to do that with PETSc. You just have > to make MPI calls. Perhaps if you can give me a bit more of details > about your communication patters, then I can give you a good > suggestion. > > > > > Regards, > > > > Yujie > > > > On Wed, Sep 17, 2008 at 5:05 PM, Lisandro Dalcin > wrote: > >> > >> A long as you create your SLEPc objects with the appropriate > >> communicator (ie. the one obtained with MPI_Comm_split), then all > >> should just work. Of course, you will have to make appropriate MPI > >> calls to 'transfer' data from your N group to the many M groups, and > >> the other way to collect results. > >> > >> > >> On Wed, Sep 17, 2008 at 8:25 PM, Yujie wrote: > >> > You are right :). 
I am thinking the whole framwork for my codes. thank > >> > you, > >> > Lisandro. In Step 3, there are different M slepc-based process groups, > >> > which > >> > should mean M communication domain for Petsc and Slepc (I have created > a > >> > communication domain for them) is it ok? thanks again. > >> > > >> > Regards, > >> > > >> > Yujie > >> > > >> > On Wed, Sep 17, 2008 at 4:08 PM, Lisandro Dalcin > >> > wrote: > >> >> > >> >> I bet you have not even tried to actually implent and run this :-). > >> >> > >> >> This should work. If not, I would consider that a bug. Let us know > of > >> >> any problem you have. > >> >> > >> >> > >> >> On Wed, Sep 17, 2008 at 7:59 PM, Yujie wrote: > >> >> > Hi, Petsc Developer: > >> >> > > >> >> > Currently, I am using Slepc for my application. It is based on > Petsc. > >> >> > > >> >> > Assuming I have a cluster with N nodes. > >> >> > > >> >> > My codes are like > >> >> > > >> >> > main() > >> >> > > >> >> > { > >> >> > > >> >> > step 1: Initialize Petsc and Slepc; > >> >> > > >> >> > step 2: Use Petsc; (use all N nodes in one process group) > >> >> > > >> >> > step 3: Use Slepc; (N nodes is divided into M process groups. these > >> >> > groups > >> >> > are indepedent. However, they need to communicate with each other) > >> >> > > >> >> > step 4: Use Petsc; (use all N nodes in one process group) > >> >> > > >> >> > } > >> >> > > >> >> > My method is: > >> >> > > >> >> > when using Slepc, MPI_Comm_split() is used to divide N nodes into M > >> >> > process > >> >> > groups which means to generate M communication domains. Then, > >> >> > MPI_Intercomm_create() creates inter-group communication domain to > >> >> > process > >> >> > the communication between different M process groups. > >> >> > > >> >> > I don't know whether this method is ok regarding Petsc and Slepc. > >> >> > Because > >> >> > Slepc is developed based on Petsc. 
In Step 1, Petsc and Slepc is > >> >> > initialized > >> >> > with all N nodes in a communication domain. Petsc in Step 2 uses > this > >> >> > communication domain. However, in Step 3, I need to divide all N > >> >> > nodes > >> >> > and > >> >> > generate M communication domains. I don't know how Petsc and Slepc > >> >> > can > >> >> > process this change? If the method doesn't work, could you give me > >> >> > some > >> >> > advice? thanks a lot. > >> >> > > >> >> > Regards, > >> >> > > >> >> > Yujie > >> >> > >> >> > >> >> > >> >> -- > >> >> Lisandro Dalcín > >> >> --------------- > >> >> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) > >> >> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) > >> >> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) > >> >> PTLC - Güemes 3450, (3000) Santa Fe, Argentina > >> >> Tel/Fax: +54-(0)342-451.1594 > >> >> > >> > > >> > > >> > >> > >> > >> -- > >> Lisandro Dalcín > >> --------------- > >> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) > >> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) > >> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) > >> PTLC - Güemes 3450, (3000) Santa Fe, Argentina > >> Tel/Fax: +54-(0)342-451.1594 > >> > > > > > > > > -- > Lisandro Dalcín > --------------- > Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) > Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) > Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) > PTLC - Güemes 3450, (3000) Santa Fe, Argentina > Tel/Fax: +54-(0)342-451.1594 > > -------------- next part -------------- An HTML attachment was scrubbed...
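[Editor's note] The contiguous rank-to-group assignment discussed in this thread (N ranks split into M groups via MPI_Comm_split) comes down to computing a per-rank color. A minimal sketch in plain C; `group_color` is an illustrative helper, not an MPI or PETSc routine, and it assumes M divides N evenly:

```c
#include <assert.h>

/* Sketch of the rank-to-group assignment behind MPI_Comm_split().
 * Ranks with the same color end up in the same subcommunicator.
 * Assumes ngroups divides nranks evenly. */
int group_color(int rank, int nranks, int ngroups)
{
    int group_size = nranks / ngroups; /* ranks per group */
    return rank / group_size;          /* group index, 0..ngroups-1 */
}
```

Each rank would then call `MPI_Comm_split(MPI_COMM_WORLD, group_color(rank, N, M), rank, &subcomm)` and create the step-3 SLEPc objects on `subcomm`.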
URL: From recrusader at gmail.com Sat Sep 20 00:36:55 2008 From: recrusader at gmail.com (Yujie) Date: Fri, 19 Sep 2008 21:36:55 -0800 Subject: Petsc and Slepc with multiple process groups In-Reply-To: References: <7ff0ee010809171559i4771680bq228ed04d7f827fd3@mail.gmail.com> <7ff0ee010809171625o235da9d7ha6ca6013314a862e@mail.gmail.com> <7ff0ee010809171722n74ba3830h7935014f7283eae5@mail.gmail.com> Message-ID: <7ff0ee010809192236q4f014471j4aa5e57a9944a022@mail.gmail.com> Dear Hong: Thank you very much for your information. The basic framework for generating subcommunicator is useful for me. However, I need to scatter matrixs and vectors. Just like what you said, I need to consider how to scatter data. thanks a lot. Regards, Yujie On Thu, Sep 18, 2008 at 6:19 AM, Hong Zhang wrote: > > Yujie, > > See the data structure "PetscSubcomm" > in ~petsc/include/petscsys.h > > An example of its implementation is PCREDUNDANT > (see src/ksp/pc/impls/redundant/redundant.c), > for which, we first split the parent communicator with N processors > into n subcommunicator for parallel LU preconditioner, > then scatter the solution from the subcommunicator back to > the parent communicator. > > Note, the scatter used there is unique for our particular > application. You likely need to implement your own scattering > according to your need. > > Hong > > > On Thu, 18 Sep 2008, Lisandro Dalcin wrote: > > On Wed, Sep 17, 2008 at 9:22 PM, Yujie wrote: >> >>> Thank you very much, Lisandro. You are right. It look like a little >>> difficult to "transfer" data from one node to "N" nodes or from N nodes >>> to M >>> nodes. My method is to first send all the data in a node and to >>> redistribute >>> it in "N" or "M" nodes. do you have any idea about it? is it >>> time-consuming? >>> In Petsc, how to support such type of operations? thanks a lot. >>> >> >> Mmm.. I believe there is not way to do that with PETSc. You just have >> to make MPI calls. 
Perhaps if you can give me a bit more of details >> about your communication patters, then I can give you a good >> suggestion. >> >> >>> Regards, >>> >>> Yujie >>> >>> On Wed, Sep 17, 2008 at 5:05 PM, Lisandro Dalcin >>> wrote: >>> >>>> >>>> A long as you create your SLEPc objects with the appropriate >>>> communicator (ie. the one obtained with MPI_Comm_split), then all >>>> should just work. Of course, you will have to make appropriate MPI >>>> calls to 'transfer' data from your N group to the many M groups, and >>>> the other way to collect results. >>>> >>>> >>>> On Wed, Sep 17, 2008 at 8:25 PM, Yujie wrote: >>>> >>>>> You are right :). I am thinking the whole framwork for my codes. thank >>>>> you, >>>>> Lisandro. In Step 3, there are different M slepc-based process groups, >>>>> which >>>>> should mean M communication domain for Petsc and Slepc (I have created >>>>> a >>>>> communication domain for them) is it ok? thanks again. >>>>> >>>>> Regards, >>>>> >>>>> Yujie >>>>> >>>>> On Wed, Sep 17, 2008 at 4:08 PM, Lisandro Dalcin >>>>> wrote: >>>>> >>>>>> >>>>>> I bet you have not even tried to actually implent and run this :-). >>>>>> >>>>>> This should work. If not, I would consider that a bug. Let us know of >>>>>> any problem you have. >>>>>> >>>>>> >>>>>> On Wed, Sep 17, 2008 at 7:59 PM, Yujie wrote: >>>>>> >>>>>>> Hi, Petsc Developer: >>>>>>> >>>>>>> Currently, I am using Slepc for my application. It is based on Petsc. >>>>>>> >>>>>>> Assuming I have a cluster with N nodes. >>>>>>> >>>>>>> My codes are like >>>>>>> >>>>>>> main() >>>>>>> >>>>>>> { >>>>>>> >>>>>>> step 1: Initialize Petsc and Slepc; >>>>>>> >>>>>>> step 2: Use Petsc; (use all N nodes in one process group) >>>>>>> >>>>>>> step 3: Use Slepc; (N nodes is divided into M process groups. these >>>>>>> groups >>>>>>> are indepedent. 
However, they need to communicate with each other) >>>>>>> >>>>>>> step 4: Use Petsc; (use all N nodes in one process group) >>>>>>> >>>>>>> } >>>>>>> >>>>>>> My method is: >>>>>>> >>>>>>> when using Slepc, MPI_Comm_split() is used to divide N nodes into M >>>>>>> process >>>>>>> groups which means to generate M communication domains. Then, >>>>>>> MPI_Intercomm_create() creates inter-group communication domain to >>>>>>> process >>>>>>> the communication between different M process groups. >>>>>>> >>>>>>> I don't know whether this method is ok regarding Petsc and Slepc. >>>>>>> Because >>>>>>> Slepc is developed based on Petsc. In Step 1, Petsc and Slepc is >>>>>>> initialized >>>>>>> with all N nodes in a communication domain. Petsc in Step 2 uses this >>>>>>> communication domain. However, in Step 3, I need to divide all N >>>>>>> nodes >>>>>>> and >>>>>>> generate M communication domains. I don't know how Petsc and Slepc >>>>>>> can >>>>>>> process this change? If the method doesn't work, could you give me >>>>>>> some >>>>>>> advice? thanks a lot. 
>>>>>>> >>>>>>> Regards, >>>>>>> >>>>>>> Yujie >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Lisandro Dalcín >>>>>> --------------- >>>>>> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) >>>>>> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) >>>>>> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) >>>>>> PTLC - Güemes 3450, (3000) Santa Fe, Argentina >>>>>> Tel/Fax: +54-(0)342-451.1594 >>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> Lisandro Dalcín >>>> --------------- >>>> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) >>>> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) >>>> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) >>>> PTLC - Güemes 3450, (3000) Santa Fe, Argentina >>>> Tel/Fax: +54-(0)342-451.1594 >>>> >>>> >>> >>> >> >> >> -- >> Lisandro Dalcín >> --------------- >> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) >> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) >> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) >> PTLC - Güemes 3450, (3000) Santa Fe, Argentina >> Tel/Fax: +54-(0)342-451.1594 >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bui at calcreek.com Thu Sep 18 12:38:46 2008 From: bui at calcreek.com (Thuc Bui) Date: Thu, 18 Sep 2008 10:38:46 -0700 Subject: Unable to build ex2 with Visual Studio due to linking errors Message-ID: <8AA9F1C7EAAE4EBE963D50EC3E82D0F7@aphrodite> Dear all, Hope you can help me out since I am quite lost. I successfully built petsc-2.3.3-p13 with Visual Studio 03 using C++ with the following configurations, which I relist from the configure.log.
Configure Options: --with-cc="win32fe cl" --with-fc=0 --with-cxx="win32fe cl" --download-c-blas-lapack --with-debugging=0 --useThreads=0 --with-shared=0 --configModules=PETSc.Configure --optionsModule=PETSc.compilerOptions Working directory: /home/bbwannabe/Petsc/latest Python version: 2.5.1 (r251:54863, May 18 2007, 16:56:43) Unfortunately, I was unable to build the ex2 example in the tutorials directory even from the console with three unresolved external linking errors as shown below. What other libraries do I need to include in the Makefile to get ex2 built? I am using the Makefile given in the Tutorials directory without any modification. Many thanks in advance for your help, Thuc ~/Petsc/latest/src/ksp/ksp/examples/tutorials $ make ex2 /home/bbwannabe/Petsc/latest/bin/win32fe/win32fe cl -o ex2.o -c -wd4996 -MT -I/home/bbwannabe/Petsc/latest/src/dm/mesh/sieve -I/home/bbwannabe/Petsc/latest -I/home/bbwannabe/Petsc/latest/bmake/cygwin-c-opt -I/home/bbwannabe/Petsc/latest/include -I/cygdrive/c/Program Files/MPICH2/include -D__SDIR__="src/ksp/ksp/examples/tutorials/" ex2.c ex2.c /home/bbwannabe/Petsc/latest/bin/win32fe/win32fe cl -wd4996 -MT -o ex2 ex2.o -L/home/bbwannabe/Petsc/latest/lib/cygwin-c-opt -L/home/bbwannabe/Petsc/latest/lib/cygwin-c-opt -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc /cygdrive/c/Program\ Files/MPICH2/lib/fmpich2.lib /cygdrive/c/Program\ Files/MPICH2/lib/mpi.lib -L/home/bbwannabe/Petsc/latest/externalpackages/f2cblaslapack/cygwin-c-opt -L/home/bbwannabe/Petsc/latest/externalpackages/f2cblaslapack/cygwin-c-opt -lf2clapack -L/home/bbwannabe/Petsc/latest/externalpackages/f2cblaslapack/cygwin-c-opt -L/home/bbwannabe/Petsc/latest/externalpackages/f2cblaslapack/cygwin-c-opt -lf2cblas Gdi32.lib User32.lib Advapi32.lib Kernel32.lib Ws2_32.lib libpetscmat.lib(mpiaij.o) : error LNK2019: unresolved external symbol _MatConvert_MPIAIJ_MPICSRPERM referenced in function _MatCreate_MPIAIJ libpetscmat.lib(matregis.o) : error LNK2019: unresolved 
external symbol _MatCreate_MPICSRPERM referenced in function _MatRegisterAll libpetscmat.lib(matregis.o) : error LNK2019: unresolved external symbol _MatCreate_CSRPERM referenced in function _MatRegisterAll C:\cygwin\home\BBWANN~1\Petsc\latest\src\ksp\ksp\examples\TUTORI~1\ex2.exe : fatal error LNK1120: 3 unresolved externals /usr/bin/rm -f ex2.o -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Sep 20 13:25:05 2008 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 20 Sep 2008 13:25:05 -0500 Subject: Unable to build ex2 with Visual Studio due to linking errors In-Reply-To: <8AA9F1C7EAAE4EBE963D50EC3E82D0F7@aphrodite> References: <8AA9F1C7EAAE4EBE963D50EC3E82D0F7@aphrodite> Message-ID: On Thu, Sep 18, 2008 at 12:38 PM, Thuc Bui wrote: > Dear all, > > Unfortunately, I was unable to build the ex2 example in the tutorials > directory even from the console with three unresolved external linking > errors as shown below. What other libraries do I need to include in the > Makefile to get ex2 built? I am using the Makefile given in the Tutorials > directory without any modification. This is the same error which you mailed in before. A file has not been compiled on your system. 
You need to build in that directory cd src/mat/impls/aij/mpi/csrperm make Matt > Many thanks in advance for your help, > > Thuc > > > > > > ~/Petsc/latest/src/ksp/ksp/examples/tutorials > > $ make ex2 > > /home/bbwannabe/Petsc/latest/bin/win32fe/win32fe cl -o ex2.o -c -wd4996 -MT > -I/home/bbwannabe/Petsc/latest/src/dm/mesh/sieve > -I/home/bbwannabe/Petsc/latest > -I/home/bbwannabe/Petsc/latest/bmake/cygwin-c-opt > -I/home/bbwannabe/Petsc/latest/include -I/cygdrive/c/Program > Files/MPICH2/include -D__SDIR__="src/ksp/ksp/examples/tutorials/" ex2.c > > ex2.c > > /home/bbwannabe/Petsc/latest/bin/win32fe/win32fe cl -wd4996 -MT -o ex2 > ex2.o -L/home/bbwannabe/Petsc/latest/lib/cygwin-c-opt > -L/home/bbwannabe/Petsc/latest/lib/cygwin-c-opt -lpetscksp -lpetscdm > -lpetscmat -lpetscvec -lpetsc /cygdrive/c/Program\ > Files/MPICH2/lib/fmpich2.lib /cygdrive/c/Program\ Files/MPICH2/lib/mpi.lib > -L/home/bbwannabe/Petsc/latest/externalpackages/f2cblaslapack/cygwin-c-opt > -L/home/bbwannabe/Petsc/latest/externalpackages/f2cblaslapack/cygwin-c-opt > -lf2clapack > -L/home/bbwannabe/Petsc/latest/externalpackages/f2cblaslapack/cygwin-c-opt > -L/home/bbwannabe/Petsc/latest/externalpackages/f2cblaslapack/cygwin-c-opt > -lf2cblas Gdi32.lib User32.lib Advapi32.lib Kernel32.lib Ws2_32.lib > > libpetscmat.lib(mpiaij.o) : error LNK2019: unresolved external symbol > _MatConvert_MPIAIJ_MPICSRPERM referenced in function _MatCreate_MPIAIJ > > libpetscmat.lib(matregis.o) : error LNK2019: unresolved external symbol > _MatCreate_MPICSRPERM referenced in function _MatRegisterAll > > libpetscmat.lib(matregis.o) : error LNK2019: unresolved external symbol > _MatCreate_CSRPERM referenced in function _MatRegisterAll > > C:\cygwin\home\BBWANN~1\Petsc\latest\src\ksp\ksp\examples\TUTORI~1\ex2.exe : > fatal error LNK1120: 3 unresolved externals > > /usr/bin/rm -f ex2.o > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to 
which their experiments lead. -- Norbert Wiener From bhatiamanav at gmail.com Sun Sep 21 11:11:29 2008 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Sun, 21 Sep 2008 12:11:29 -0400 Subject: block matrices Message-ID: <3CC02FBD-8B47-4A94-B383-F258E02C9521@gmail.com> Hi, I have an application in which I have multiple sparse matrices, which together form one bigger matrix. For example: if S1, S2, S3, S4 and S5 are my sparse matrices, I need to create a matrix B of the following form B = 3 x 3 blocks row 1 of B = 0, S1, 0 row 2 of B = S2, 0 , S3 row 3 of B = S4, S5, 0 If I build the S1... S5 independently, is there a way for me to directly embed these matrices into B without having to explicitly copy the values from each matrix? I would appreciate any help. Please also, let me know if there is an example code somewhere about this. Regards, Manav From hzhang at mcs.anl.gov Sun Sep 21 11:50:10 2008 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Sun, 21 Sep 2008 11:50:10 -0500 (CDT) Subject: block matrices In-Reply-To: <3CC02FBD-8B47-4A94-B383-F258E02C9521@gmail.com> References: <3CC02FBD-8B47-4A94-B383-F258E02C9521@gmail.com> Message-ID: Manav, On Sun, 21 Sep 2008, Manav Bhatia wrote: > Hi, > > I have an application in which I have multiple sparse matrices, > which together form one bigger matrix. For example: if S1, S2, S3, S4 > and S5 are my sparse matrices, I need to create a matrix B of the > following form > > B = 3 x 3 blocks > > row 1 of B = 0, S1, 0 > row 2 of B = S2, 0 , S3 > row 3 of B = S4, S5, 0 > > If I build the S1... S5 independently, is there a way for me to > directly embed these matrices into B without having to explicitly copy > the values from each matrix? No, we do not have this function. You can use MatSetValues() http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatSetValues.html to insert a block of values. Hong > I would appreciate any help. 
Please also, let me know if there is an > example code somewhere about this. > > Regards, > Manav > > > > From cjchen at math.msu.edu Sun Sep 21 13:22:51 2008 From: cjchen at math.msu.edu (Chen, Changjun) Date: Sun, 21 Sep 2008 14:22:51 -0400 Subject: Some subroutine in PETSc is a little slow Message-ID: Dear Sir, I have be using PETSc for many days. Recently I find one subroutine in PETSc is a little slow, It is an initialization subroutine to fill in the matrix elements: call MatSetValues(A,ione,II,ione,JJ,v,INSERT_VALUES,ierr) I find that the total time to fill in the matrix data is even much longer than the iteration time to solve the system call KSPSolve(ksp,b,x,ierr) How could this happen? Could I accelerate the time for filling in the data? Sincerely, Changjun Chen From knepley at gmail.com Sun Sep 21 13:41:18 2008 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 21 Sep 2008 13:41:18 -0500 Subject: Some subroutine in PETSc is a little slow In-Reply-To: References: Message-ID: It is very likely that you have not preallocated the matrix. There is a section on this in the manual. Matt On Sun, Sep 21, 2008 at 1:22 PM, Chen, Changjun wrote: > Dear Sir, > I have be using PETSc for many days. Recently I find one subroutine in PETSc is a little slow, It is an initialization subroutine to fill in the matrix elements: > > call MatSetValues(A,ione,II,ione,JJ,v,INSERT_VALUES,ierr) > > I find that the total time to fill in the matrix data is even much longer than the iteration time to solve the system > call KSPSolve(ksp,b,x,ierr) > How could this happen? Could I accelerate the time for filling in the data? > > Sincerely, > Changjun Chen -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener From bsmith at mcs.anl.gov Sun Sep 21 19:08:59 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 21 Sep 2008 20:08:59 -0400 Subject: Some subroutine in PETSc is a little slow In-Reply-To: References: Message-ID: http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#slow On Sep 21, 2008, at 2:22 PM, Chen, Changjun wrote: > Dear Sir, > I have been using PETSc for many days. Recently I find one > subroutine in PETSc is a little slow, It is an initialization > subroutine to fill in the matrix elements: > > call MatSetValues(A,ione,II,ione,JJ,v,INSERT_VALUES,ierr) > > I find that the total time to fill in the matrix data is even > much longer than the iteration time to solve the system > call KSPSolve(ksp,b,x,ierr) > How could this happen? Could I accelerate the time for filling > in the data? > > Sincerely, > Changjun Chen > From hakan.jakobsson at math.umu.se Mon Sep 22 03:08:32 2008 From: hakan.jakobsson at math.umu.se (Håkan Jakobsson) Date: Mon, 22 Sep 2008 10:08:32 +0200 Subject: ordinary vector to ghost vector Message-ID: <180D0389-33CB-4E21-82CC-97F1D4BF649E@math.umu.se> Hi, I wonder, is it possible to copy the elements of a vector, created with VecCreateMPI, to a vector with ghost values simply using VecCopy and then do VecGhostUpdate to initiate the ghost values? Best Regards, Håkan From jed at 59A2.org Mon Sep 22 03:43:57 2008 From: jed at 59A2.org (Jed Brown) Date: Mon, 22 Sep 2008 10:43:57 +0200 Subject: block matrices In-Reply-To: <3CC02FBD-8B47-4A94-B383-F258E02C9521@gmail.com> References: <3CC02FBD-8B47-4A94-B383-F258E02C9521@gmail.com> Message-ID: <20080922084357.GF6975@brakk.ethz.ch> On Sun 2008-09-21 12:11, Manav Bhatia wrote: > B = 3 x 3 blocks > > row 1 of B = 0, S1, 0 > row 2 of B = S2, 0 , S3 > row 3 of B = S4, S5, 0 What sort of preconditioner do you intend to use? If you are using a direct solver, then you will need to explicitly assemble B.
This can be done in a black-box manner from the sub-matrices, but it might be better to assemble B and extract the submatrices using MatGetSubMatrix() (assuming you need them elsewhere). If you will be using an iterative solver, normal preconditioners will fail because the matrix is indefinite. In this case, you can create a MATSHELL (which implements MatMult, the action of B on a vector) and a PCSHELL which approximately inverts B using a block factorization. Jed -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From fernandez858 at gmail.com Mon Sep 22 05:31:04 2008 From: fernandez858 at gmail.com (Michel Cancelliere) Date: Mon, 22 Sep 2008 12:31:04 +0200 Subject: User-difined PC Message-ID: <7f18de3b0809220331nf45826w9abdfce5fa926454@mail.gmail.com> Hello, I have problems with the implementation of a user-defined pc, basically my program is in a cycle, for with which he is called by matlab each time it seeks to solve a linear system, the problem is that in the first iteration of the for-cycle the preconditioner works very well, then fails to convergence. It may be some parameters that I setting wrong? I'am attaching my program code. Thank you, Michel -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: solv.c Type: text/x-csrc Size: 28643 bytes Desc: not available URL: From knepley at gmail.com Mon Sep 22 07:51:29 2008 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 22 Sep 2008 07:51:29 -0500 Subject: ordinary vector to ghost vector In-Reply-To: <180D0389-33CB-4E21-82CC-97F1D4BF649E@math.umu.se> References: <180D0389-33CB-4E21-82CC-97F1D4BF649E@math.umu.se> Message-ID: On Mon, Sep 22, 2008 at 3:08 AM, Håkan Jakobsson wrote: > Hi, > I wonder, is it possible to copy the elements of a vector, created with > VecCreateMPI, to a vector with ghost values simply using VecCopy and then do > VecGhostUpdate to initiate the ghost values? The VecCopy() will copy only the local portion of the vector, and then the ghost update will fill in the rest, so I think it will do what you want. Matt > Best Regards, > Håkan -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From knepley at gmail.com Mon Sep 22 07:49:09 2008 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 22 Sep 2008 07:49:09 -0500 Subject: User-difined PC In-Reply-To: <7f18de3b0809220331nf45826w9abdfce5fa926454@mail.gmail.com> References: <7f18de3b0809220331nf45826w9abdfce5fa926454@mail.gmail.com> Message-ID: On Mon, Sep 22, 2008 at 5:31 AM, Michel Cancelliere wrote: > Hello, > > I have problems with the implementation of a user-defined pc, basically my > program is in a cycle, for with which he is called by matlab each time it > seeks to solve a linear system, the problem is that in the first iteration > of the for-cycle the preconditioner works very well, then fails to > convergence. It may be some parameters that I setting wrong? > I'm attaching my program code. I cannot see anything wrong just by looking at the code. To test it, I would send in the same matrix multiple times and make sure that the convergence was the same.
If not, it should be easy to track back in the debugger to the first different residual. Matt > Thank you, > > Michel -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From dave.mayhem23 at gmail.com Mon Sep 22 07:59:50 2008 From: dave.mayhem23 at gmail.com (Dave May) Date: Mon, 22 Sep 2008 22:59:50 +1000 Subject: User-difined PC In-Reply-To: <7f18de3b0809220331nf45826w9abdfce5fa926454@mail.gmail.com> References: <7f18de3b0809220331nf45826w9abdfce5fa926454@mail.gmail.com> Message-ID: <956373f0809220559j24d6092au1ac5d169b467976d@mail.gmail.com> Michel, I noticed a few things. Not sure any would cause an error, but 1) You should hand control of destroying the user data type to the PCSHELL via PCShellSetDestroy(). 2) I would also let the PCSHELL decide when the object should be set up by calling PCShellSetSetUp(). I suppose you will need to add a reference to A or the PC on each of your contexts so you are able to extract the blocks. Why is the first pointer in PCSHELL operations the user context whereas in MATSHELL the first pointer is of type Mat ?? Seems slightly inconsistent. 3) I would call MatDestroyMatrices() rather than just destroying the individual matrices. The array of matrices is not being released in your code. Cheers, Dave On Mon, Sep 22, 2008 at 8:31 PM, Michel Cancelliere wrote: > Hello, > > I have problems with the implementation of a user-defined pc, basically my > program is in a cycle, for with which he is called by matlab each time it > seeks to solve a linear system, the problem is that in the first iteration > of the for-cycle the preconditioner works very well, then fails to > convergence. It may be some parameters that I setting wrong? > I'm attaching my program code.
> > Thank you, > > Michel > From bsmith at mcs.anl.gov Mon Sep 22 11:08:09 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 22 Sep 2008 11:08:09 -0500 Subject: User-difined PC In-Reply-To: <956373f0809220559j24d6092au1ac5d169b467976d@mail.gmail.com> References: <7f18de3b0809220331nf45826w9abdfce5fa926454@mail.gmail.com> <956373f0809220559j24d6092au1ac5d169b467976d@mail.gmail.com> Message-ID: <113FD893-F8AF-459C-BF83-C2AEF565E5DD@mcs.anl.gov> On Sep 22, 2008, at 7:59 AM, Dave May wrote: > > > Why is the first pointer in PCSHELL operations the user context > whereas in MATSHELL the first pointer is of type Mat ?? Seems slightly > inconsistent. > Good question, these were done at different times and only later did we realize the inconsistency. This is something that should be fixed in petsc-dev. Barry I guess the first argument for the PCSHELL should become the pc. > > > On Mon, Sep 22, 2008 at 8:31 PM, Michel Cancelliere > wrote: >> Hello, >> >> I have problems with the implementation of a user-defined pc, >> basically my >> program is in a cycle, for with which he is called by matlab each >> time it >> seeks to solve a linear system, the problem is that in the first >> iteration >> of the for-cycle the preconditioner works very well, then fails to >> convergence. It may be some parameters that I setting wrong? >> I'am attaching my program code. >> >> Thank you, >> >> Michel >> > From dave.mayhem23 at gmail.com Mon Sep 22 11:14:28 2008 From: dave.mayhem23 at gmail.com (Dave May) Date: Tue, 23 Sep 2008 02:14:28 +1000 Subject: User-difined PC In-Reply-To: <113FD893-F8AF-459C-BF83-C2AEF565E5DD@mcs.anl.gov> References: <7f18de3b0809220331nf45826w9abdfce5fa926454@mail.gmail.com> <956373f0809220559j24d6092au1ac5d169b467976d@mail.gmail.com> <113FD893-F8AF-459C-BF83-C2AEF565E5DD@mcs.anl.gov> Message-ID: <956373f0809220914y4ff0b18awc79a18cf637e32ff@mail.gmail.com> Having the first argument for PCSHELL operations being PC sounds great. 
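[Editor's note] The inconsistency Dave points out can be seen by mocking up both calling conventions in plain C. Everything here (`PC_t`, `pc_get_context`, the `apply_*` names) is a stand-in for illustration, not the PETSc API:

```c
#include <assert.h>

/* A mock "PC" object carrying an opaque user context, standing in for
 * PETSc's PC and PCShellGetContext(). */
typedef struct { void *user_ctx; } PC_t;

/* Convention 1 (PCSHELL in petsc-2.3.x): the callback receives the user
 * context directly as its first argument. */
double apply_ctx_first(void *ctx, double x)
{
    double scale = *(double *)ctx;
    return scale * x;
}

void *pc_get_context(PC_t *pc) { return pc->user_ctx; }

/* Convention 2 (MATSHELL style, proposed above for PCSHELL): the
 * callback receives the object itself and fetches its context, as a
 * user would with PCShellGetContext(). */
double apply_obj_first(PC_t *pc, double x)
{
    double scale = *(double *)pc_get_context(pc);
    return scale * x;
}
```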
That's in keeping with the petsc pattern of the first argument being the object being manipulated. Then users just call PCShellGetContext() within their user defined operations. On Tue, Sep 23, 2008 at 2:08 AM, Barry Smith wrote: > > On Sep 22, 2008, at 7:59 AM, Dave May wrote: > >> >> >> Why is the first pointer in PCSHELL operations the user context >> whereas in MATSHELL the first pointer is of type Mat ?? Seems slightly >> inconsistent. >> > Good question, these were done at different times and only later > did we realize the inconsistency. This is something that should be fixed in > petsc-dev. > > Barry > > I guess the first argument for the PCSHELL should become the pc. > >> >> >> On Mon, Sep 22, 2008 at 8:31 PM, Michel Cancelliere >> wrote: >>> >>> Hello, >>> >>> I have problems with the implementation of a user-defined pc, basically >>> my >>> program is in a cycle, for with which he is called by matlab each time it >>> seeks to solve a linear system, the problem is that in the first >>> iteration >>> of the for-cycle the preconditioner works very well, then fails to >>> convergence. It may be some parameters that I setting wrong? >>> I'am attaching my program code. >>> >>> Thank you, >>> >>> Michel >>> >> > > From bsmith at mcs.anl.gov Mon Sep 22 11:18:38 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 22 Sep 2008 11:18:38 -0500 Subject: User-difined PC In-Reply-To: <956373f0809220914y4ff0b18awc79a18cf637e32ff@mail.gmail.com> References: <7f18de3b0809220331nf45826w9abdfce5fa926454@mail.gmail.com> <956373f0809220559j24d6092au1ac5d169b467976d@mail.gmail.com> <113FD893-F8AF-459C-BF83-C2AEF565E5DD@mcs.anl.gov> <956373f0809220914y4ff0b18awc79a18cf637e32ff@mail.gmail.com> Message-ID: <67FE2AED-34D0-48FC-B8C0-9075B56E8E15@mcs.anl.gov> On Sep 22, 2008, at 11:14 AM, Dave May wrote: > Having the first argument for PCSHELL operations being PC sounds > great. 
> That's in keeping with the petsc pattern of the first argument being > the object being manipulated. > Then users just call PCShellGetContext() within their user defined > operations. > I think one of the motivators for the current approach is usage from Fortran 77. PCShellGetContext() cannot return much of anything in Fortran 77, not even an array :-( while an array can be passed into the PCApply etc Fortran implementations directly as the first argument. This is unlikely to be a good enough reason for keeping the current form, especially since Fortran 90 has some (not perfect) alternatives. Barry > > On Tue, Sep 23, 2008 at 2:08 AM, Barry Smith > wrote: >> >> On Sep 22, 2008, at 7:59 AM, Dave May wrote: >> >>> >>> >>> Why is the first pointer in PCSHELL operations the user context >>> whereas in MATSHELL the first pointer is of type Mat ?? Seems >>> slightly >>> inconsistent. >>> >> Good question, these were done at different times and only later >> did we realize the inconsistency. This is something that should be >> fixed in >> petsc-dev. >> >> Barry >> >> I guess the first argument for the PCSHELL should become the pc. >> >>> >>> >>> On Mon, Sep 22, 2008 at 8:31 PM, Michel Cancelliere >>> wrote: >>>> >>>> Hello, >>>> >>>> I have problems with the implementation of a user-defined pc, >>>> basically >>>> my >>>> program is in a cycle, for with which he is called by matlab each >>>> time it >>>> seeks to solve a linear system, the problem is that in the first >>>> iteration >>>> of the for-cycle the preconditioner works very well, then fails to >>>> convergence. It may be some parameters that I setting wrong? >>>> I'am attaching my program code. >>>> >>>> Thank you, >>>> >>>> Michel >>>> >>> >> >> > From recrusader at gmail.com Mon Sep 22 12:57:43 2008 From: recrusader at gmail.com (Yujie) Date: Mon, 22 Sep 2008 10:57:43 -0700 Subject: How to generate a parallel matrix with a sequential dense matrix? 
Message-ID: <7ff0ee010809221057p2756fb57we0a49610eb0cd0a1@mail.gmail.com>

Hi, PETSc developers,

Now I have a sequential dense matrix. How can I get a parallel matrix based
on it? thanks a lot.

Regards,

Yujie
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From knepley at gmail.com  Mon Sep 22 13:09:19 2008
From: knepley at gmail.com (Matthew Knepley)
Date: Mon, 22 Sep 2008 13:09:19 -0500
Subject: How to generate a parallel matrix with a sequential dense matrix?
In-Reply-To: <7ff0ee010809221057p2756fb57we0a49610eb0cd0a1@mail.gmail.com>
References: <7ff0ee010809221057p2756fb57we0a49610eb0cd0a1@mail.gmail.com>
Message-ID: 

The right way to do this is to input the matrix using MatSetValues()
in a distributed fashion. You can consult any of the tutorials, for instance
KSP ex2 for an example of this.

  Matt

On Mon, Sep 22, 2008 at 12:57 PM, Yujie wrote:
> Hi, PETSc developers,
>
> Now I have a sequential dense matrix. How can I get a parallel matrix based on
> it? thanks a lot.
>
> Regards,
>
> Yujie

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener

From recrusader at gmail.com  Mon Sep 22 13:34:21 2008
From: recrusader at gmail.com (Yujie)
Date: Mon, 22 Sep 2008 11:34:21 -0700
Subject: How to generate a parallel matrix with a sequential dense matrix?
In-Reply-To: 
References: <7ff0ee010809221057p2756fb57we0a49610eb0cd0a1@mail.gmail.com>
Message-ID: <7ff0ee010809221134l25987aaaufa31614cef01bdcb@mail.gmail.com>

Thank you for your reply, Matt. I have checked the tutorials. They just use
specified values and MatSetValues() to make a parallel matrix. Now, the
matrix I use is on a single node of the cluster. I have 'M' nodes in this
cluster. Do I need to copy the sequential matrix to the other 'M-1' nodes
and then use MatSetValues(), or can I just call MatSetValues() on the node
where the matrix is? The latter should work, right? thanks.
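The distributed input Matt recommends means each process calls MatSetValues() only for the rows it owns. As a small sketch in plain Python (`ownership_range` is an illustrative helper, not a PETSc routine, though it mirrors, as an assumption, the even contiguous block-row split PETSc computes by default):

```python
def ownership_range(n_rows, size, rank):
    """Half-open row range [start, end) owned by `rank` when n_rows are
    split into contiguous blocks over `size` processes.

    The first (n_rows % size) ranks each receive one extra row, so the
    split is as even as possible. Illustrative only, not a PETSc call.
    """
    base, extra = divmod(n_rows, size)
    n_local = base + (1 if rank < extra else 0)
    start = rank * base + min(rank, extra)
    return start, start + n_local
```

With 10 rows on 4 processes this yields (0, 3), (3, 6), (6, 8), (8, 10); each process would then insert values only for its own range.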
Yujie On Mon, Sep 22, 2008 at 11:09 AM, Matthew Knepley wrote: > The right way to do this is to input the matrix using MatSetValues() > in a distribute fashion. You can consult any of the tutorials, for instance > KSP ex2 for an example of this. > > Matt > > On Mon, Sep 22, 2008 at 12:57 PM, Yujie wrote: > > Hi, Petsc developer > > > > Now, I have a sequential dense matrix. How to get a parallel matrix based > on > > it? thanks a lot. > > > > Regards, > > > > Yujie > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Sep 22 14:04:24 2008 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 22 Sep 2008 14:04:24 -0500 Subject: How to generate a parallel matrix with a sequential dense matrix? In-Reply-To: <7ff0ee010809221134l25987aaaufa31614cef01bdcb@mail.gmail.com> References: <7ff0ee010809221057p2756fb57we0a49610eb0cd0a1@mail.gmail.com> <7ff0ee010809221134l25987aaaufa31614cef01bdcb@mail.gmail.com> Message-ID: On Mon, Sep 22, 2008 at 1:34 PM, Yujie wrote: > Thank you for your reply, Matt. I have checked the tutorials. They just use > specified values and MatSetValues() to make a parallel matrix. Now, the > matrix I use is in a single node of the cluster. I have 'M' nodes in this > cluster. I need to copy the sequential matrix to other 'M-1' nodes and then > use MatSetValues() or I just use MatSetvalues() in the node where the matrix > is? The latter should work, right? thanks. The idea is to set the values owned by a given process, on that process. You can set all the values from one place, however that would mean a lot of communication. Matt > Yujie > > On Mon, Sep 22, 2008 at 11:09 AM, Matthew Knepley wrote: >> >> The right way to do this is to input the matrix using MatSetValues() >> in a distribute fashion. 
You can consult any of the tutorials, for >> instance >> KSP ex2 for an example of this. >> >> Matt >> >> On Mon, Sep 22, 2008 at 12:57 PM, Yujie wrote: >> > Hi, Petsc developer >> > >> > Now, I have a sequential dense matrix. How to get a parallel matrix >> > based on >> > it? thanks a lot. >> > >> > Regards, >> > >> > Yujie >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From bsmith at mcs.anl.gov Mon Sep 22 15:03:09 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 22 Sep 2008 15:03:09 -0500 Subject: How to generate a parallel matrix with a sequential dense matrix? In-Reply-To: <7ff0ee010809221134l25987aaaufa31614cef01bdcb@mail.gmail.com> References: <7ff0ee010809221057p2756fb57we0a49610eb0cd0a1@mail.gmail.com> <7ff0ee010809221134l25987aaaufa31614cef01bdcb@mail.gmail.com> Message-ID: <66B0B7B2-77F5-417E-8A0B-D887C24F589C@mcs.anl.gov> I would only expect good performance if you used MPI calls to send the blocks of rows of the matrix to the process they belong to and use MatGetArray() to pass into the MPI_Recv to receive the data into. Barry On Sep 22, 2008, at 1:34 PM, Yujie wrote: > Thank you for your reply, Matt. I have checked the tutorials. They > just use specified values and MatSetValues() to make a parallel > matrix. Now, the matrix I use is in a single node of the cluster. I > have 'M' nodes in this cluster. I need to copy the sequential matrix > to other 'M-1' nodes and then use MatSetValues() or I just use > MatSetvalues() in the node where the matrix is? The latter should > work, right? thanks. 
> > Yujie > > > On Mon, Sep 22, 2008 at 11:09 AM, Matthew Knepley > wrote: > The right way to do this is to input the matrix using MatSetValues() > in a distribute fashion. You can consult any of the tutorials, for > instance > KSP ex2 for an example of this. > > Matt > > On Mon, Sep 22, 2008 at 12:57 PM, Yujie wrote: > > Hi, Petsc developer > > > > Now, I have a sequential dense matrix. How to get a parallel > matrix based on > > it? thanks a lot. > > > > Regards, > > > > Yujie > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > From aja2111 at columbia.edu Mon Sep 22 16:44:41 2008 From: aja2111 at columbia.edu (Aron Ahmadia) Date: Mon, 22 Sep 2008 17:44:41 -0400 Subject: How to generate a parallel matrix with a sequential dense matrix? In-Reply-To: <66B0B7B2-77F5-417E-8A0B-D887C24F589C@mcs.anl.gov> References: <7ff0ee010809221057p2756fb57we0a49610eb0cd0a1@mail.gmail.com> <7ff0ee010809221134l25987aaaufa31614cef01bdcb@mail.gmail.com> <66B0B7B2-77F5-417E-8A0B-D887C24F589C@mcs.anl.gov> Message-ID: <37604ab40809221444v35005007jd7f83c3bdc60739d@mail.gmail.com> Wouldn't it be better in this case to use an MPIScatterV? ~A On Mon, Sep 22, 2008 at 4:03 PM, Barry Smith wrote: > > I would only expect good performance if you used MPI calls to send the > blocks of rows of the matrix to the process > they belong to and use MatGetArray() to pass into the MPI_Recv to receive > the data into. > > Barry > > > On Sep 22, 2008, at 1:34 PM, Yujie wrote: > > Thank you for your reply, Matt. I have checked the tutorials. They just >> use specified values and MatSetValues() to make a parallel matrix. Now, the >> matrix I use is in a single node of the cluster. I have 'M' nodes in this >> cluster. I need to copy the sequential matrix to other 'M-1' nodes and then >> use MatSetValues() or I just use MatSetvalues() in the node where the matrix >> is? 
The latter should work, right? thanks. >> >> Yujie >> >> >> On Mon, Sep 22, 2008 at 11:09 AM, Matthew Knepley >> wrote: >> The right way to do this is to input the matrix using MatSetValues() >> in a distribute fashion. You can consult any of the tutorials, for >> instance >> KSP ex2 for an example of this. >> >> Matt >> >> On Mon, Sep 22, 2008 at 12:57 PM, Yujie wrote: >> > Hi, Petsc developer >> > >> > Now, I have a sequential dense matrix. How to get a parallel matrix >> based on >> > it? thanks a lot. >> > >> > Regards, >> > >> > Yujie >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. >> -- Norbert Wiener >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Sep 22 18:00:02 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 22 Sep 2008 18:00:02 -0500 Subject: How to generate a parallel matrix with a sequential dense matrix? In-Reply-To: <37604ab40809221444v35005007jd7f83c3bdc60739d@mail.gmail.com> References: <7ff0ee010809221057p2756fb57we0a49610eb0cd0a1@mail.gmail.com> <37604ab40809221444v35005007jd7f83c3bdc60739d@mail.gmail.com> Message-ID: <06F6D559-02D9-4882-8D1A-3D10B3E1CA78@mcs.anl.gov> Likely it would be better; it should not be worse. Barry On Sep 22, 2008, at 4:44 PM, Aron Ahmadia wrote: > Wouldn't it be better in this case to use an MPIScatterV? > > ~A > > On Mon, Sep 22, 2008 at 4:03 PM, Barry Smith > wrote: > > I would only expect good performance if you used MPI calls to send > the blocks of rows of the matrix to the process > they belong to and use MatGetArray() to pass into the MPI_Recv to > receive the data into. > > Barry > > > On Sep 22, 2008, at 1:34 PM, Yujie wrote: > > Thank you for your reply, Matt. I have checked the tutorials. They > just use specified values and MatSetValues() to make a parallel > matrix. 
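Aron's MPIScatterV suggestion above (the standard spelling is MPI_Scatterv) amounts to precomputing per-rank send counts and displacements from the same contiguous block-row split. A hedged sketch of that bookkeeping, in plain Python with no MPI; `scatterv_layout` is an illustrative name, not an MPI or PETSc call:

```python
def scatterv_layout(n_rows, n_cols, size):
    """Send counts and displacements (in scalars) for scattering a
    row-major dense n_rows x n_cols array by contiguous block rows,
    as one would feed to MPI_Scatterv on the root process."""
    base, extra = divmod(n_rows, size)
    counts, displs = [], []
    offset = 0
    for rank in range(size):
        n_local = base + (1 if rank < extra else 0)  # rows owned by rank
        counts.append(n_local * n_cols)              # scalars sent to rank
        displs.append(offset)                        # start offset in root buffer
        offset += counts[-1]
    return counts, displs
```

For a 10 x 5 matrix on 4 processes: counts = [15, 15, 10, 10] and displs = [0, 15, 30, 40].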
Now, the matrix I use is in a single node of the cluster. I > have 'M' nodes in this cluster. I need to copy the sequential matrix > to other 'M-1' nodes and then use MatSetValues() or I just use > MatSetvalues() in the node where the matrix is? The latter should > work, right? thanks. > > Yujie > > > On Mon, Sep 22, 2008 at 11:09 AM, Matthew Knepley > wrote: > The right way to do this is to input the matrix using MatSetValues() > in a distribute fashion. You can consult any of the tutorials, for > instance > KSP ex2 for an example of this. > > Matt > > On Mon, Sep 22, 2008 at 12:57 PM, Yujie wrote: > > Hi, Petsc developer > > > > Now, I have a sequential dense matrix. How to get a parallel > matrix based on > > it? thanks a lot. > > > > Regards, > > > > Yujie > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > > > From recrusader at gmail.com Mon Sep 22 20:35:57 2008 From: recrusader at gmail.com (Yujie) Date: Mon, 22 Sep 2008 18:35:57 -0700 Subject: about KSP based on parallel dense matrix Message-ID: <7ff0ee010809221835v602275c5id45234e86e1bd743@mail.gmail.com> To my knowledge, PETsc doesn't provide parallel dense matrix-based solvers, such as for CG, GMRES and so on. If it is, how to deal with this problem? Thanks. Regards, Yujie -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalcinl at gmail.com Mon Sep 22 20:51:29 2008 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Mon, 22 Sep 2008 22:51:29 -0300 Subject: about KSP based on parallel dense matrix In-Reply-To: <7ff0ee010809221835v602275c5id45234e86e1bd743@mail.gmail.com> References: <7ff0ee010809221835v602275c5id45234e86e1bd743@mail.gmail.com> Message-ID: Well, any iterative solver will actually work, but expect a really poor scalability :-). 
I believe (never used dense matrices) that you could use a direct method (PLAPACK?), but again, be prepared for long running times if your problem is (even moderately) large. On Mon, Sep 22, 2008 at 10:35 PM, Yujie wrote: > To my knowledge, PETsc doesn't provide parallel dense matrix-based solvers, > such as for CG, GMRES and so on. If it is, how to deal with this problem? > Thanks. > > Regards, > > Yujie > -- Lisandro Dalc?n --------------- Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) PTLC - G?emes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From bsmith at mcs.anl.gov Mon Sep 22 21:02:11 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 22 Sep 2008 21:02:11 -0500 Subject: about KSP based on parallel dense matrix In-Reply-To: <7ff0ee010809221835v602275c5id45234e86e1bd743@mail.gmail.com> References: <7ff0ee010809221835v602275c5id45234e86e1bd743@mail.gmail.com> Message-ID: The KSP methods are INDEPENDENT of the matrix format, they can all be used just as well with dense matrices as sparse matrices. As with all linear operators the question is what are suitable preconditioners (if any) for a particular problem. If the matrix is well conditioned then -pc_type jacobi would be fine; preconditioners like ILU are probably silly for dense matrices since they are expensive. On Sep 22, 2008, at 8:35 PM, Yujie wrote: > To my knowledge, PETsc doesn't provide parallel dense matrix-based > solvers, such as for CG, GMRES and so on. If it is, how to deal with > this problem? Thanks. 
> > Regards, > > Yujie > From recrusader at gmail.com Mon Sep 22 21:03:03 2008 From: recrusader at gmail.com (Yujie) Date: Mon, 22 Sep 2008 19:03:03 -0700 Subject: about KSP based on parallel dense matrix In-Reply-To: References: <7ff0ee010809221835v602275c5id45234e86e1bd743@mail.gmail.com> Message-ID: <7ff0ee010809221903l52e83360jd0497e1307258c5d@mail.gmail.com> Dear Lisandro: Barry has tried to establish an interface for Plapack. However, there are some bugs in Plapack. Therefore, it doesn't work. I am wondering if CG in Petsc can work with parallel dense matrix. When using the same matrix, which one is faster, sequential or parallel? thanks. Regards, Yujie On Mon, Sep 22, 2008 at 6:51 PM, Lisandro Dalcin wrote: > Well, any iterative solver will actually work, but expect a really > poor scalability :-). I believe (never used dense matrices) that you > could use a direct method (PLAPACK?), but again, be prepared for long > running times if your problem is (even moderately) large. > > On Mon, Sep 22, 2008 at 10:35 PM, Yujie wrote: > > To my knowledge, PETsc doesn't provide parallel dense matrix-based > solvers, > > such as for CG, GMRES and so on. If it is, how to deal with this problem? > > Thanks. > > > > Regards, > > > > Yujie > > > > > > -- > Lisandro Dalc?n > --------------- > Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) > Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) > Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) > PTLC - G?emes 3450, (3000) Santa Fe, Argentina > Tel/Fax: +54-(0)342-451.1594 > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From dalcinl at gmail.com  Mon Sep 22 21:43:03 2008
From: dalcinl at gmail.com (Lisandro Dalcin)
Date: Mon, 22 Sep 2008 23:43:03 -0300
Subject: about KSP based on parallel dense matrix
In-Reply-To: <7ff0ee010809221903l52e83360jd0497e1307258c5d@mail.gmail.com>
References: <7ff0ee010809221835v602275c5id45234e86e1bd743@mail.gmail.com>
	<7ff0ee010809221903l52e83360jd0497e1307258c5d@mail.gmail.com>
Message-ID: 

On Mon, Sep 22, 2008 at 11:03 PM, Yujie wrote:
> Dear Lisandro:
>
> Barry has tried to establish an interface for Plapack. However, there are
> some bugs in Plapack. Therefore, it doesn't work.

Sorry, I didn't know about those Plapack issues.

> I am wondering if CG in
> Petsc can work with parallel dense matrix.

Of course it works. In fact, any other KSP should work. As Barry said,
the KSP methods are INDEPENDENT of the matrix format; try -pc_type
jacobi as preconditioner.

> When using the same matrix, which
> one is faster, sequential or parallel? thanks.

For a fixed-size matrix, you should get really good speedups iterating
in parallel. Of course, that would be even better if you can generate
the local rows of the matrix on each processor. If not, communicating
the matrix rows from the 'master' to the 'slaves' could be a real
bottleneck (large data to compute at the master while the slaves wait,
large data to scatter from master to slaves). If you cannot avoid
dense matrices, then you should try hard to compute the local rows at
the owning processor.

>
> On Mon, Sep 22, 2008 at 6:51 PM, Lisandro Dalcin wrote:
>>
>> Well, any iterative solver will actually work, but expect a really
>> poor scalability :-). I believe (never used dense matrices) that you
>> could use a direct method (PLAPACK?), but again, be prepared for long
>> running times if your problem is (even moderately) large.
>>
>> On Mon, Sep 22, 2008 at 10:35 PM, Yujie wrote:
>>> To my knowledge, PETsc doesn't provide parallel dense matrix-based
>> solvers,
>>> such as for CG, GMRES and so on.
If it is, how to deal with this >> > problem? >> > Thanks. >> > >> > Regards, >> > >> > Yujie >> > >> >> >> >> -- >> Lisandro Dalc?n >> --------------- >> Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) >> Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) >> Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) >> PTLC - G?emes 3450, (3000) Santa Fe, Argentina >> Tel/Fax: +54-(0)342-451.1594 >> > > -- Lisandro Dalc?n --------------- Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) PTLC - G?emes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From knepley at gmail.com Mon Sep 22 22:29:55 2008 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 22 Sep 2008 22:29:55 -0500 Subject: about KSP based on parallel dense matrix In-Reply-To: References: <7ff0ee010809221835v602275c5id45234e86e1bd743@mail.gmail.com> <7ff0ee010809221903l52e83360jd0497e1307258c5d@mail.gmail.com> Message-ID: On Mon, Sep 22, 2008 at 9:43 PM, Lisandro Dalcin wrote: > On Mon, Sep 22, 2008 at 11:03 PM, Yujie wrote: >> Dear Lisandro: >> >> Barry has tried to establish an interface for Plapack. However, there are >> some bugs in Plapack. Therefore, it doesn't work. > > Sorry, I didn't know about those Plapack issues. Neither did I, and seeing as how I use it, this is interesting. Please please please report any bugs you find, because I have been using it without problems. Matt >> I am wondering if CG in >> Petsc can work with parallel dense matrix. > > Of course it works. In fact, any other KSP should work. As Barry said, > The KSP methods are INDEPENDENT of the matrix format, try -pc_type > jacobi as preconditioner. > >> When using the same matrix, which >> one is faster, sequential or parallel? thanks. 
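Barry's point in this thread, that the KSP methods are independent of the matrix storage format, can be illustrated outside PETSc. Below is the textbook Jacobi-preconditioned conjugate-gradient iteration applied to a small dense SPD system, written as a plain-Python sketch. This is not PETSc code; with PETSc one would run the analogous solve via -ksp_type cg -pc_type jacobi:

```python
def pcg_jacobi(A, b, tol=1e-10, max_iter=100):
    """Jacobi-preconditioned CG on a dense SPD matrix stored as a list
    of rows. Textbook algorithm for illustration, not PETSc's KSPCG."""
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                                 # residual r = b - A*x with x = 0
    z = [r[i] / A[i][i] for i in range(n)]   # Jacobi preconditioner: z = D^{-1} r
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = dot(r, z)
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

The iteration only touches the matrix through the matrix-vector product and the diagonal, which is why the storage format (dense or sparse, sequential or parallel) does not matter to the method itself.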
> > For a fixed-size matrix, you should get really good speedups iterating > in parallel. Of course, that would be even better if you can generate > the local rows of the matrix in each processor. If not, communicating > the matrix row from the 'master' to the 'slaves' could be a real > bootleneck (large data to compute at the master while slaves waiting, > large data to scatter from master to slaves), If you cannot avoid > dense matrices, then you should try hard to compute the local rows at > the owning processor. > > >> >> On Mon, Sep 22, 2008 at 6:51 PM, Lisandro Dalcin wrote: >>> >>> Well, any iterative solver will actually work, but expect a really >>> poor scalability :-). I believe (never used dense matrices) that you >>> could use a direct method (PLAPACK?), but again, be prepared for long >>> running times if your problem is (even moderately) large. >>> >>> On Mon, Sep 22, 2008 at 10:35 PM, Yujie wrote: >>> > To my knowledge, PETsc doesn't provide parallel dense matrix-based >>> > solvers, >>> > such as for CG, GMRES and so on. If it is, how to deal with this >>> > problem? >>> > Thanks. 
>>> >
>>> > Regards,
>>> >
>>> > Yujie
>>> >
>>>
>>>
>>>
>>> --
>>> Lisandro Dalcín
>>> ---------------
>>> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
>>> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
>>> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
>>> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
>>> Tel/Fax: +54-(0)342-451.1594
>>>
>>
>>
>
>
>
> --
> Lisandro Dalcín
> ---------------
> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
> Tel/Fax: +54-(0)342-451.1594
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener

From recrusader at gmail.com  Mon Sep 22 23:17:25 2008
From: recrusader at gmail.com (Yujie)
Date: Mon, 22 Sep 2008 20:17:25 -0800
Subject: about KSP based on parallel dense matrix
In-Reply-To: 
References: <7ff0ee010809221835v602275c5id45234e86e1bd743@mail.gmail.com>
	<7ff0ee010809221903l52e83360jd0497e1307258c5d@mail.gmail.com>
Message-ID: <7ff0ee010809222117m46c836b9p4b564fedb9cf9231@mail.gmail.com>

Barry wanted to do something about Plapack. He found some bugs; I don't
know the details. Of course, if I find some bugs in Petsc, I will report
them. I benefit a lot from it. Thank you for your work on Petsc :).

Regards,

Yujie

On Mon, Sep 22, 2008 at 7:29 PM, Matthew Knepley wrote:
> On Mon, Sep 22, 2008 at 9:43 PM, Lisandro Dalcin
> wrote:
> > On Mon, Sep 22, 2008 at 11:03 PM, Yujie wrote:
> >> Dear Lisandro:
> >>
> >> Barry has tried to establish an interface for Plapack. However, there
> are
> >> some bugs in Plapack. Therefore, it doesn't work.
> >
> > Sorry, I didn't know about those Plapack issues.
> > Neither did I, and seeing as how I use it, this is interesting. Please > please please > report any bugs you find, because I have been using it without problems. > > Matt > > >> I am wondering if CG in > >> Petsc can work with parallel dense matrix. > > > > Of course it works. In fact, any other KSP should work. As Barry said, > > The KSP methods are INDEPENDENT of the matrix format, try -pc_type > > jacobi as preconditioner. > > > >> When using the same matrix, which > >> one is faster, sequential or parallel? thanks. > > > > For a fixed-size matrix, you should get really good speedups iterating > > in parallel. Of course, that would be even better if you can generate > > the local rows of the matrix in each processor. If not, communicating > > the matrix row from the 'master' to the 'slaves' could be a real > > bootleneck (large data to compute at the master while slaves waiting, > > large data to scatter from master to slaves), If you cannot avoid > > dense matrices, then you should try hard to compute the local rows at > > the owning processor. > > > > > >> > >> On Mon, Sep 22, 2008 at 6:51 PM, Lisandro Dalcin > wrote: > >>> > >>> Well, any iterative solver will actually work, but expect a really > >>> poor scalability :-). I believe (never used dense matrices) that you > >>> could use a direct method (PLAPACK?), but again, be prepared for long > >>> running times if your problem is (even moderately) large. > >>> > >>> On Mon, Sep 22, 2008 at 10:35 PM, Yujie wrote: > >>> > To my knowledge, PETsc doesn't provide parallel dense matrix-based > >>> > solvers, > >>> > such as for CG, GMRES and so on. If it is, how to deal with this > >>> > problem? > >>> > Thanks. 
> >>> > > >>> > Regards, > >>> > > >>> > Yujie > >>> > > >>> > >>> > >>> > >>> -- > >>> Lisandro Dalc?n > >>> --------------- > >>> Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) > >>> Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) > >>> Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) > >>> PTLC - G?emes 3450, (3000) Santa Fe, Argentina > >>> Tel/Fax: +54-(0)342-451.1594 > >>> > >> > >> > > > > > > > > -- > > Lisandro Dalc?n > > --------------- > > Centro Internacional de M?todos Computacionales en Ingenier?a (CIMEC) > > Instituto de Desarrollo Tecnol?gico para la Industria Qu?mica (INTEC) > > Consejo Nacional de Investigaciones Cient?ficas y T?cnicas (CONICET) > > PTLC - G?emes 3450, (3000) Santa Fe, Argentina > > Tel/Fax: +54-(0)342-451.1594 > > > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rmills at climate.ornl.gov Mon Sep 22 23:20:35 2008 From: rmills at climate.ornl.gov (Richard Tran Mills) Date: Tue, 23 Sep 2008 00:20:35 -0400 Subject: about KSP based on parallel dense matrix In-Reply-To: References: <7ff0ee010809221835v602275c5id45234e86e1bd743@mail.gmail.com> <7ff0ee010809221903l52e83360jd0497e1307258c5d@mail.gmail.com> Message-ID: <48D86E93.9040604@climate.ornl.gov> Matt, I use it too. Last time I checked, though, it seemed to be broken in petsc-dev: I could no longer build it using --download-plapack with configure.py. (I think the problem is that no one finished migrating it to the new direct solver interface -- I haven't had time to investigate, though.) It was working for me in 2.3.3, though. I assume Yujie is using the release version of PETSc? 
--Richard Matthew Knepley wrote: > On Mon, Sep 22, 2008 at 9:43 PM, Lisandro Dalcin wrote: >> On Mon, Sep 22, 2008 at 11:03 PM, Yujie wrote: >>> Dear Lisandro: >>> >>> Barry has tried to establish an interface for Plapack. However, there are >>> some bugs in Plapack. Therefore, it doesn't work. >> Sorry, I didn't know about those Plapack issues. > > Neither did I, and seeing as how I use it, this is interesting. Please > please please > report any bugs you find, because I have been using it without problems. > > Matt > >>> I am wondering if CG in >>> Petsc can work with parallel dense matrix. >> Of course it works. In fact, any other KSP should work. As Barry said, >> The KSP methods are INDEPENDENT of the matrix format, try -pc_type >> jacobi as preconditioner. >> >>> When using the same matrix, which >>> one is faster, sequential or parallel? thanks. >> For a fixed-size matrix, you should get really good speedups iterating >> in parallel. Of course, that would be even better if you can generate >> the local rows of the matrix in each processor. If not, communicating >> the matrix row from the 'master' to the 'slaves' could be a real >> bootleneck (large data to compute at the master while slaves waiting, >> large data to scatter from master to slaves), If you cannot avoid >> dense matrices, then you should try hard to compute the local rows at >> the owning processor. >> >> >>> On Mon, Sep 22, 2008 at 6:51 PM, Lisandro Dalcin wrote: >>>> Well, any iterative solver will actually work, but expect a really >>>> poor scalability :-). I believe (never used dense matrices) that you >>>> could use a direct method (PLAPACK?), but again, be prepared for long >>>> running times if your problem is (even moderately) large. >>>> >>>> On Mon, Sep 22, 2008 at 10:35 PM, Yujie wrote: >>>>> To my knowledge, PETsc doesn't provide parallel dense matrix-based >>>>> solvers, >>>>> such as for CG, GMRES and so on. If it is, how to deal with this >>>>> problem? >>>>> Thanks. 
>>>>>
>>>>> Regards,
>>>>>
>>>>> Yujie
>>>>>
>>>>
>>>>
>>>> --
>>>> Lisandro Dalcín
>>>> ---------------
>>>> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
>>>> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
>>>> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
>>>> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
>>>> Tel/Fax: +54-(0)342-451.1594
>>>>
>>>
>>
>>
>> --
>> Lisandro Dalcín
>> ---------------
>> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
>> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
>> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
>> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
>> Tel/Fax: +54-(0)342-451.1594
>>
>>
>
>
>

From hzhang at mcs.anl.gov  Tue Sep 23 10:26:08 2008
From: hzhang at mcs.anl.gov (Hong Zhang)
Date: Tue, 23 Sep 2008 10:26:08 -0500 (CDT)
Subject: about KSP based on parallel dense matrix
In-Reply-To: <7ff0ee010809221903l52e83360jd0497e1307258c5d@mail.gmail.com>
References: <7ff0ee010809221835v602275c5id45234e86e1bd743@mail.gmail.com>
	<7ff0ee010809221903l52e83360jd0497e1307258c5d@mail.gmail.com>
Message-ID: 

Yujie,

> > Barry has tried to establish an interface for Plapack. However, there are
> some bugs in Plapack. Therefore, it doesn't work. I am wondering if CG in

The Plapack interface in the released petsc-2.3.3 should work fine.
Additional work is needed to make it work under the reorganization
in petsc-dev, which I'm working on.

Note: Plapack implements an LU preconditioner.

Hong

> Petsc can work with parallel dense matrix. When using the same matrix, which
> one is faster, sequential or parallel? thanks.
>
> Regards,
>
> Yujie
>
> On Mon, Sep 22, 2008 at 6:51 PM, Lisandro Dalcin wrote:
>
>> Well, any iterative solver will actually work, but expect a really
>> poor scalability :-).
I believe (never used dense matrices) that you
>> could use a direct method (PLAPACK?), but again, be prepared for long
>> running times if your problem is (even moderately) large.
>>
>> On Mon, Sep 22, 2008 at 10:35 PM, Yujie wrote:
>>> To my knowledge, PETsc doesn't provide parallel dense matrix-based
>> solvers,
>>> such as for CG, GMRES and so on. If it is, how to deal with this problem?
>>> Thanks.
>>>
>>> Regards,
>>>
>>> Yujie
>>>
>>
>>
>>
>> --
>> Lisandro Dalcín
>> ---------------
>> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
>> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
>> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
>> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
>> Tel/Fax: +54-(0)342-451.1594
>>
>>
>

From mafunk at nmsu.edu  Tue Sep 23 10:45:44 2008
From: mafunk at nmsu.edu (Matt Funk)
Date: Tue, 23 Sep 2008 09:45:44 -0600
Subject: ARPACK interface?
Message-ID: <200809230945.44377.mafunk@nmsu.edu>

Hi,

I was wondering if there are any plans to have a PETSc interface to ARPACK?
Has this been addressed before?

thanks
matt

From Andrew.Barker at Colorado.EDU  Tue Sep 23 11:05:57 2008
From: Andrew.Barker at Colorado.EDU (Andrew T Barker)
Date: Tue, 23 Sep 2008 10:05:57 -0600 (MDT)
Subject: negative indices in VecSetValuesLocal
Message-ID: <20080923100557.AEM11040@batman.int.colorado.edu>

Upgrading to PETSc 2.3.3 from 2.3.2, it seems that whereas before
VecSetValuesLocal would ignore negative numbers in the set of vector
indices, it now complains "Argument out of range; Out of range index
value -1 cannot be negative". I've checked the LocalToGlobalMapping and
everything is the same; it seems the PETSc version is the difference.
Is there a suggested workaround? I'm afraid going through the array one
by one will be slow.
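For the negative-index question just above, one possible workaround (an assumption on my part, not an official PETSc recommendation) is to compact the index and value arrays in a single pass before the VecSetValuesLocal-style call, rather than issuing one call per entry. The helper below is a plain-Python illustration, not part of PETSc:

```python
def drop_negative_indices(indices, values):
    """Filter out entries whose (local) index is negative before a
    single VecSetValuesLocal-style call, instead of looping with one
    call per entry. Illustrative helper, not a PETSc routine."""
    kept = [(ix, v) for ix, v in zip(indices, values) if ix >= 0]
    if not kept:
        return [], []
    idx, vals = zip(*kept)
    return list(idx), list(vals)
```

The one-pass filter keeps the cost linear in the number of entries, so the per-call overhead is paid once rather than once per value.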
Thanks, Andrew From recrusader at gmail.com Tue Sep 23 11:09:08 2008 From: recrusader at gmail.com (Yujie) Date: Tue, 23 Sep 2008 09:09:08 -0700 Subject: about KSP based on parallel dense matrix In-Reply-To: <48D86E93.9040604@climate.ornl.gov> References: <7ff0ee010809221835v602275c5id45234e86e1bd743@mail.gmail.com> <7ff0ee010809221903l52e83360jd0497e1307258c5d@mail.gmail.com> <48D86E93.9040604@climate.ornl.gov> Message-ID: <7ff0ee010809230909k409aaeb4t2f7981a2092accdb@mail.gmail.com> Yes, I use PETSc 2.3.3. In fact, I didn't use Plapack because Barry told me it didn't work :). I will try it. thanks. On Mon, Sep 22, 2008 at 9:20 PM, Richard Tran Mills wrote: > Matt, > > I use it too. Last time I checked, though, it seemed to be broken in > petsc-dev: I could no longer build it using --download-plapack with > configure.py. (I think the problem is that no one finished migrating it to > the new direct solver interface -- I haven't had time to investigate, > though.) > > It was working for me in 2.3.3, though. I assume Yujie is using the > release version of PETSc? > > --Richard > > Matthew Knepley wrote: > >> On Mon, Sep 22, 2008 at 9:43 PM, Lisandro Dalcin >> wrote: >> >>> On Mon, Sep 22, 2008 at 11:03 PM, Yujie wrote: >>> >>>> Dear Lisandro: >>>> >>>> Barry has tried to establish an interface for Plapack. However, there >>>> are >>>> some bugs in Plapack. Therefore, it doesn't work. >>>> >>> Sorry, I didn't know about those Plapack issues. >>> >> >> Neither did I, and seeing as how I use it, this is interesting. Please >> please please >> report any bugs you find, because I have been using it without problems. >> >> Matt >> >> I am wondering if CG in >>>> Petsc can work with parallel dense matrix. >>>> >>> Of course it works. In fact, any other KSP should work. As Barry said, >>> The KSP methods are INDEPENDENT of the matrix format, try -pc_type >>> jacobi as preconditioner. >>> >>> When using the same matrix, which >>>> one is faster, sequential or parallel? 
thanks. >>>> >>> For a fixed-size matrix, you should get really good speedups iterating >>> in parallel. Of course, that would be even better if you can generate >>> the local rows of the matrix in each processor. If not, communicating >>> the matrix rows from the 'master' to the 'slaves' could be a real >>> bottleneck (large data to compute at the master while the slaves wait, >>> large data to scatter from master to slaves). If you cannot avoid >>> dense matrices, then you should try hard to compute the local rows at >>> the owning processor. >>> >>> >>> On Mon, Sep 22, 2008 at 6:51 PM, Lisandro Dalcin >>>> wrote: >>>> >>>>> Well, any iterative solver will actually work, but expect really >>>>> poor scalability :-). I believe (never used dense matrices) that you >>>>> could use a direct method (PLAPACK?), but again, be prepared for long >>>>> running times if your problem is (even moderately) large. >>>>> >>>>> On Mon, Sep 22, 2008 at 10:35 PM, Yujie wrote: >>>>> >>>>>> To my knowledge, PETSc doesn't provide parallel dense matrix-based >>>>>> solvers, >>>>>> such as for CG, GMRES and so on. If so, how to deal with this >>>>>> problem? >>>>>> Thanks.
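Lisandro's closing advice, computing the local rows of a dense matrix on the owning process, can be sketched as follows. This is a sketch against the PETSc 2.3.x-era C API (MatCreateMPIDense is called MatCreateDense in later releases), and fill_row() is a hypothetical application callback, not a PETSc routine:

```c
/* Sketch: assemble only the locally owned rows of a parallel dense
 * matrix so that no row data has to be scattered from a master
 * process.  fill_row() is a placeholder for the application's row
 * generator. */
#include "petscmat.h"

extern void fill_row(PetscInt row, PetscInt N, PetscScalar *vals);

PetscErrorCode AssembleLocalRows(MPI_Comm comm, PetscInt N, Mat *A)
{
  PetscInt    i, j, rstart, rend;
  PetscInt    *cols;
  PetscScalar *vals;

  MatCreateMPIDense(comm, PETSC_DECIDE, PETSC_DECIDE, N, N, PETSC_NULL, A);
  MatGetOwnershipRange(*A, &rstart, &rend);
  PetscMalloc(N*sizeof(PetscInt), &cols);
  PetscMalloc(N*sizeof(PetscScalar), &vals);
  for (j = 0; j < N; j++) cols[j] = j;
  for (i = rstart; i < rend; i++) {     /* only rows this process owns */
    fill_row(i, N, vals);
    MatSetValues(*A, 1, &i, N, cols, vals, INSERT_VALUES);
  }
  PetscFree(cols);
  PetscFree(vals);
  MatAssemblyBegin(*A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(*A, MAT_FINAL_ASSEMBLY);
  return 0;
}
```

Since each process never touches rows it does not own, the assembly phase involves no stashed off-process values and no master-to-slave scatter.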
>>>>>> >>>>>> Regards, >>>>>> >>>>>> Yujie >>>>>> >>>>>> >>>>> >>>>> -- >>>>> Lisandro Dalcín >>>>> --------------- >>>>> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) >>>>> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) >>>>> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) >>>>> PTLC - Güemes 3450, (3000) Santa Fe, Argentina >>>>> Tel/Fax: +54-(0)342-451.1594 >>>>> >>>>> >>>> >>> >>> -- >>> Lisandro Dalcín >>> --------------- >>> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) >>> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) >>> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) >>> PTLC - Güemes 3450, (3000) Santa Fe, Argentina >>> Tel/Fax: +54-(0)342-451.1594 >>> >>> >>> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From recrusader at gmail.com Tue Sep 23 11:09:38 2008 From: recrusader at gmail.com (Yujie) Date: Tue, 23 Sep 2008 09:09:38 -0700 Subject: about KSP based on parallel dense matrix In-Reply-To: References: <7ff0ee010809221835v602275c5id45234e86e1bd743@mail.gmail.com> <7ff0ee010809221903l52e83360jd0497e1307258c5d@mail.gmail.com> Message-ID: <7ff0ee010809230909s5cbaa9cfp40752e48db8d2242@mail.gmail.com> thank you very much, Hong. On Tue, Sep 23, 2008 at 8:26 AM, Hong Zhang wrote: > > Yujie, > > >> Barry has tried to establish an interface for Plapack. However, there are >> some bugs in Plapack. Therefore, it doesn't work. I am wondering if CG in >> > > The Plapack interface in the released petsc-2.3.3 should work fine. > Additional work is needed to make it work under the reorganization > in petsc-dev, which I'm working on. > > Note, Plapack implements an LU preconditioner. > > Hong > > Petsc can work with parallel dense matrix. When using the same matrix, >> which >> one is faster, sequential or parallel? thanks.
>> >> Regards, >> >> Yujie >> >> On Mon, Sep 22, 2008 at 6:51 PM, Lisandro Dalcin >> wrote: >> >> Well, any iterative solver will actually work, but expect really >>> poor scalability :-). I believe (never used dense matrices) that you >>> could use a direct method (PLAPACK?), but again, be prepared for long >>> running times if your problem is (even moderately) large. >>> >>> On Mon, Sep 22, 2008 at 10:35 PM, Yujie wrote: >>> >>>> To my knowledge, PETSc doesn't provide parallel dense matrix-based >>>> >>> solvers, >>> >>>> such as for CG, GMRES and so on. If so, how to deal with this >>>> problem? >>>> Thanks. >>>> >>>> Regards, >>>> >>>> Yujie >>>> >>>> >>> >>> >>> -- >>> Lisandro Dalcín >>> --------------- >>> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) >>> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) >>> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) >>> PTLC - Güemes 3450, (3000) Santa Fe, Argentina >>> Tel/Fax: +54-(0)342-451.1594 >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Sep 23 11:10:23 2008 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 23 Sep 2008 11:10:23 -0500 (CDT) Subject: negative indices in VecSetValuesLocal In-Reply-To: <20080923100557.AEM11040@batman.int.colorado.edu> References: <20080923100557.AEM11040@batman.int.colorado.edu> Message-ID: Use: VecSetOption(vec,VEC_IGNORE_NEGATIVE_INDICES) Satish On Tue, 23 Sep 2008, Andrew T Barker wrote: > > Upgrading to Petsc 2.3.3 from 2.3.2, it seems that where before VecSetValuesLocal would ignore negative numbers in the set of vector indices, now it complains "Argument out of range; Out of range index value -1 cannot be negative". I've checked the LocalToGlobalMapping and everything is the same, it seems the Petsc version is the difference. Is there a suggested workaround? I'm afraid going through the array one by one will be slow.
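Satish's one-liner in context (a sketch; the two-argument VecSetOption() shown here is the 2.3.3-era signature — later PETSc releases add a PetscBool flag as a third argument):

```c
/* Sketch: restore the 2.3.2 behaviour of silently skipping negative
 * indices, by flagging the vector once after creation.
 * VecSetValuesLocal() honours the same option. */
#include "petscvec.h"

PetscErrorCode SetWithHoles(Vec v)
{
  PetscInt    ix[3] = {0, -1, 2};      /* -1 marks an entry to skip */
  PetscScalar y[3]  = {1.0, 0.0, 3.0};

  VecSetOption(v, VEC_IGNORE_NEGATIVE_INDICES);
  VecSetValues(v, 3, ix, y, INSERT_VALUES);  /* the -1 entry is ignored */
  VecAssemblyBegin(v);
  VecAssemblyEnd(v);
  return 0;
}
```

The option is set once per vector, so there is no need to scan the index array and filter out negative entries by hand.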
> > Thanks, > > Andrew > > From knepley at gmail.com Tue Sep 23 11:10:55 2008 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 23 Sep 2008 11:10:55 -0500 Subject: ARPACK interface? In-Reply-To: <200809230945.44377.mafunk@nmsu.edu> References: <200809230945.44377.mafunk@nmsu.edu> Message-ID: On Tue, Sep 23, 2008 at 10:45 AM, Matt Funk wrote: > Hi, > > i was wondering if there are any plans to have a petsc interface to arpack? > Has this been addressed before? For eigenvalue problems, we recommend SLEPc (and BLOPEX for specialized problems). Thanks, Matt > thanks > matt -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From balay at mcs.anl.gov Tue Sep 23 11:13:32 2008 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 23 Sep 2008 11:13:32 -0500 (CDT) Subject: ARPACK interface? In-Reply-To: References: <200809230945.44377.mafunk@nmsu.edu> Message-ID: On Tue, 23 Sep 2008, Matthew Knepley wrote: > On Tue, Sep 23, 2008 at 10:45 AM, Matt Funk wrote: > > Hi, > > > > i was wondering if there are any plans to have a petsc interface to arpack? > > Has this been addressed before? > > For eigenvalue problems, we recommend SLEPc (and BLOPEX for specialized > problems). And I believe SLEPc provides interface to ARPACK. Satish From hzhang at mcs.anl.gov Tue Sep 23 11:21:54 2008 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 23 Sep 2008 11:21:54 -0500 (CDT) Subject: ARPACK interface? In-Reply-To: References: <200809230945.44377.mafunk@nmsu.edu> Message-ID: > > And I believe SLEPc provides interface to ARPACK. Yes, SLEPc-ARPACK is very easy to use. See http://acts.nersc.gov/slepc/index.html. Hong From mafunk at nmsu.edu Tue Sep 23 11:50:02 2008 From: mafunk at nmsu.edu (Matt Funk) Date: Tue, 23 Sep 2008 10:50:02 -0600 Subject: ARPACK interface? 
In-Reply-To: References: <200809230945.44377.mafunk@nmsu.edu> Message-ID: <200809231050.02584.mafunk@nmsu.edu> Thanks, I'll try that out. matt On Tuesday 23 September 2008, Hong Zhang wrote: > > And I believe SLEPc provides an interface to ARPACK. > > Yes, SLEPc-ARPACK is very easy to use. > See http://acts.nersc.gov/slepc/index.html. > > Hong From ntardieu at giref.ulaval.ca Tue Sep 23 21:38:31 2008 From: ntardieu at giref.ulaval.ca (ntardieu at giref.ulaval.ca) Date: Tue, 23 Sep 2008 22:38:31 -0400 (EDT) Subject: Extend preallocation Message-ID: <61733.24.203.189.32.1222223911.squirrel@interne.giref.ulaval.ca> Dear Petsc users, I would like to compute the sparse projection matrix Q=Id - Pt*A*P. Pt*A*P is very well computed by MatMatMult ; then I would like to use MatShift in order to complete the computation. Unfortunately, Pt*A*P has lots of zeros on its diagonal, thus MatShift is very slow due to inappropriate preallocation. Since the initial preallocation of Pt*A*P is very good, I would like to know if there is a method allowing to get the allocation data structure and to extend it in order to preallocate the diagonal terms. Best regards, Nicolas From adrian at cray.com Tue Sep 23 23:00:41 2008 From: adrian at cray.com (Adrian Tate) Date: Tue, 23 Sep 2008 23:00:41 -0500 Subject: Extend preallocation Message-ID: <925346A443D4E340BEB20248BAFCDBDF6CB635@CFEVS1-IP.americas.cray.com> ----- Original Message ----- From: owner-petsc-users at mcs.anl.gov To: petsc-users at mcs.anl.gov Sent: Tue Sep 23 21:38:31 2008 Subject: Extend preallocation Dear Petsc users, I would like to compute the sparse projection matrix Q=Id - Pt*A*P. Pt*A*P is very well computed by MatMatMult ; then I would like to use MatShift in order to complete the computation. Unfortunately, Pt*A*P has lots of zeros on its diagonal, thus MatShift is very slow due to inappropriate preallocation.
Since the initial preallocation of Pt*A*P is very good, I would like to know if there is a method allowing to get the allocation data structure and to extend it in order to preallocate the diagonal terms. Best regards, Nicolas -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Sep 24 09:06:28 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 24 Sep 2008 09:06:28 -0500 Subject: Extend preallocation In-Reply-To: <61733.24.203.189.32.1222223911.squirrel@interne.giref.ulaval.ca> References: <61733.24.203.189.32.1222223911.squirrel@interne.giref.ulaval.ca> Message-ID: Rather than using MatMatMult(), you may want to use MatPtAPSymbolic()/MatPtAPNumeric(). Rather than hacking the MatMatMult() codes I think it would be better to have an efficient MatShift() (also MatDiagonalSet()). Note that the _MatOps table has a place holder for shift() but it is not used by the AIJ/BAIJ/SBAIJ implementations; instead the default is used, which calls MatSetValues() for each diagonal entry. An efficient MatShift() could be written for SeqAIJ (also SeqBAIJ/SeqSBAIJ) that handled any data movement for ALL diagonal insertions needed at the same time. Barry On Sep 23, 2008, at 9:38 PM, ntardieu at giref.ulaval.ca wrote: > Dear Petsc users, > > I would like to compute the sparse projection matrix Q=Id - Pt*A*P. > > Pt*A*P is very well computed by MatMatMult ; then I would like to use > MatShift in order to complete the computation. > Unfortunately, Pt*A*P has lots of zeros on its diagonal, thus > MatShift is > very slow due to inappropriate preallocation. > > Since the initial preallocation of Pt*A*P is very good, I would like to > know if > there is a method allowing to get the allocation data structure and to > extend it in order to preallocate the diagonal terms.
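Until an optimized MatShift() exists, one workaround for the immediate problem — an assumption of my own, not something from the thread, shown for the sequential AIJ case only and written against a recent MatGetRow() calling convention — is to copy Pt*A*P into a fresh matrix whose preallocation is the old nonzero pattern plus the full diagonal, forming Q = I - C along the way:

```c
/* Sketch: given C = Pt*A*P, build Q = I - C in a matrix preallocated
 * with C's nonzero pattern extended by the diagonal, so that the
 * diagonal insertions never trigger per-entry mallocs.  SeqAIJ only. */
#include "petscmat.h"

PetscErrorCode FormProjection(Mat C, Mat *Q)
{
  PetscInt          m, n, i, j, ncols, hasdiag;
  const PetscInt    *cols;
  const PetscScalar *vals;
  PetscInt          *nnz;
  PetscScalar       v, one = 1.0;

  MatGetSize(C, &m, &n);
  PetscMalloc(m*sizeof(PetscInt), &nnz);
  for (i = 0; i < m; i++) {            /* pass 1: count, force a diagonal */
    MatGetRow(C, i, &ncols, &cols, PETSC_NULL);
    hasdiag = 0;
    for (j = 0; j < ncols; j++) if (cols[j] == i) hasdiag = 1;
    nnz[i] = ncols + (hasdiag ? 0 : 1);
    MatRestoreRow(C, i, &ncols, &cols, PETSC_NULL);
  }
  MatCreateSeqAIJ(PETSC_COMM_SELF, m, n, 0, nnz, Q);
  PetscFree(nnz);
  for (i = 0; i < m; i++) {            /* pass 2: fill Q = I - C */
    MatGetRow(C, i, &ncols, &cols, &vals);
    hasdiag = 0;
    for (j = 0; j < ncols; j++) {
      v = -vals[j];
      if (cols[j] == i) { v += 1.0; hasdiag = 1; }
      MatSetValues(*Q, 1, &i, 1, &cols[j], &v, INSERT_VALUES);
    }
    MatRestoreRow(C, i, &ncols, &cols, &vals);
    if (!hasdiag) MatSetValues(*Q, 1, &i, 1, &i, &one, INSERT_VALUES);
  }
  MatAssemblyBegin(*Q, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(*Q, MAT_FINAL_ASSEMBLY);
  return 0;
}
```

The extra pass costs one sweep over the matrix, but every insertion in pass 2 hits preallocated storage, which is exactly the property the default MatShift() path lacks.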
> > Best regards, > > Nicolas > > > From etienne.perchat at transvalor.com Wed Sep 24 11:21:23 2008 From: etienne.perchat at transvalor.com (Etienne PERCHAT) Date: Wed, 24 Sep 2008 18:21:23 +0200 Subject: Non repetability issue and difference between 2.3.0 and 2.3.3 Message-ID: <9113A52E1096EB41B1F88DD94C4369D5353CA7@EXCHSRV.transvalor.com> Dear Petsc users, I come again with my comparisons between v2.3.0 and v2.3.3p8. I face a non repeatability issue with v2.3.3 that I didn't have with v2.3.0. I have read the exchanges made in March on a related subject, but in my case it is at the first linear system solution that two successive runs differ. It happens when the number of processors used is greater than 2, even on a standard PC. I am solving MPIBAIJ symmetric systems with the Conjugate Residual method preconditioned with ILU(1) and Block Jacobi between subdomains. This system is the result of an FE assembly on an unstructured mesh. I made all the runs using -log_summary and -ksp_truemonitor. Starting with the same initial matrix and RHS, each run using 2.3.3p8 provides slightly different results, while we obtain exactly the same solution with v2.3.0. With Petsc 2.3.3p8: Run1: Iteration= 68 residual= 3.19515221e+000 tolerance= 5.13305158e+000 0 Run2: Iteration= 68 residual= 3.19588481e+000 tolerance= 5.13305158e+000 0 Run3: Iteration= 68 residual= 3.19384417e+000 tolerance= 5.13305158e+000 0 With Petsc 2.3.0: Run1: Iteration= 68 residual= 3.19369843e+000 tolerance= 5.13305158e+000 0 Run2: Iteration= 68 residual= 3.19369843e+000 tolerance= 5.13305158e+000 0 When I made a 4-proc run with a mesh partitioning such that no node was located on more than 2 procs, I did not face the problem. I first thought about an MPI problem related to the order in which messages are received and then summed. But then it would have been exactly the same with 2.3.0? Any tips/ideas? Thanks in advance.
Best regards, Etienne Perchat -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Sep 24 12:14:32 2008 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 24 Sep 2008 12:14:32 -0500 Subject: Non repetability issue and difference between 2.3.0 and 2.3.3 In-Reply-To: <9113A52E1096EB41B1F88DD94C4369D5353CA7@EXCHSRV.transvalor.com> References: <9113A52E1096EB41B1F88DD94C4369D5353CA7@EXCHSRV.transvalor.com> Message-ID: On Wed, Sep 24, 2008 at 11:21 AM, Etienne PERCHAT wrote: > Dear Petsc users, > > > > I come again with my comparisons between v2.3.0 and v2.3.3p8. > > > > I face a non repeatability issue with v2.3.3 that I didn't have with v2.3.0. > > I have read the exchanges made in March on a related subject but in my case > it is at the first linear system solution that two successive runs differ. > > > > > > It happens when the number of processors used is greater than 2, even on a > standard PC. > > I am solving MPIBAIJ symmetric systems with the Conjugate Residual method > preconditioned ILU(1) and Block Jacobi between subdomains. > > This system is the results of a FE assembly on an unstructured mesh. > > > > I made all the runs using -log_summary and -ksp_truemonitor. > > > > Starting with the same initial matrix and RHS, each run using 2.3.3p8 > provides slightly different results while we obtain exactly the same > solution with v2.3.0. 
> > > > With Petsc 2.3.3p8: > > > > Run1: Iteration= 68 residual= 3.19515221e+000 tolerance= > 5.13305158e+000 0 > > Run2: Iteration= 68 residual= 3.19588481e+000 tolerance= > 5.13305158e+000 0 > > Run3: Iteration= 68 residual= 3.19384417e+000 tolerance= > 5.13305158e+000 0 > > > > With Petsc 2.3.0: > > > > Run1: Iteration= 68 residual= 3.19369843e+000 tolerance= > 5.13305158e+000 0 > > Run2: Iteration= 68 residual= 3.19369843e+000 tolerance= > 5.13305158e+000 0 > > > > If I made a 4proc run with a mesh partitioning such that any node could be > located on more than 2 proc. I did not face the problem. It is not clear whether you have verified that on different runs, the partitioning is exactly the same. Matt > I first thought about a MPI problem related to the order in which messages > are received and then summed. > > But it would have been exactly the same with 2.3.0 ? > > > > Any tips/ideas ? > > > > Thanks by advance. > > Best regards, > > > > Etienne Perchat -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From etienne.perchat at transvalor.com Thu Sep 25 05:09:42 2008 From: etienne.perchat at transvalor.com (Etienne PERCHAT) Date: Thu, 25 Sep 2008 12:09:42 +0200 Subject: Non repetability issue and difference between 2.3.0 and 2.3.3 Message-ID: <9113A52E1096EB41B1F88DD94C4369D5353CE0@EXCHSRV.transvalor.com> Hi Matt, I am sure that the partitioning is exactly the same: I have an external tool that partitions the mesh before launching the FE code. So for all the runs the mesh partitions has been created only once and then reused. For the case where I wanted every ghost node to be shared by two and only two processors, I used simple geometries like rings or bars with structured meshes. Once again the partitions have been created once and then reused. The initial residuals and the initial matrix are exactly the same. 
I have added some lines in my code: After calling MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY); I made a matrix-vector product between A and a unit vector. Then I've computed the norm of the resulting vector. You will see below the results for 4 linear system solves (two with 2.3.0 and two with 2.3.3p8) Mainly: With all runs: 1/ the results of the matrix * unit vector product are the same: 6838.31173987650 2/ the Initial Residual also: 1.50972105381228e+006 3/ At iteration 40 all the runs provide exactly the same residual: Iteration= 40 residual= 2.64670054e+003 tolerance= 3.01944211e+000 4/ with 2.3.0 the final residual is always the same: 3.19392726797939e+000 5/ with 2.3.3p8 the final residuals vary after iteration 40. Some statistics made with 12 successive runs: We obtained 5 times 3.19515221050523, two times 3.19369843187027, three times 3.19373947848208, and two other values for the last two runs. RUN1: 3.19515221050523e+000 RUN2: 3.19515221050523e+000 RUN3: 3.19369843187027e+000 RUN4: 3.19588480582213e+000 RUN5: 3.19515221050523e+000 RUN6: 3.19373947848208e+000 RUN7: 3.19515221050523e+000 RUN8: 3.19384417350916e+000 RUN9: 3.19515221050523e+000 RUN10: 3.19373947848208e+000 RUN11: 3.19369843187027e+000 RUN12: 3.19373947848208e+000 So same initial residual, same results for the matrix * unit vector product, same residual at iteration 40. I always used the options: OptionTable: -ksp_truemonitor OptionTable: -log_summary Any ideas will be very welcome; don't hesitate if you need additional tests. It sounds, perhaps, like reuse of a buffer that has not been properly released?
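The consistency probe Etienne describes — multiply the assembled matrix by a vector of ones and compare the norm across runs — would look roughly like this (a sketch; VecSet/VecDestroy argument conventions changed across PETSc 2.x releases, so check your version's man pages):

```c
/* Sketch: after MatAssemblyEnd(), compute || A * ones || so that two
 * runs can be compared before the solve starts. */
#include "petscmat.h"

PetscErrorCode CheckMatNorm(Mat A)
{
  Vec       ones, r;
  PetscReal nrm;

  MatGetVecs(A, &ones, &r);      /* later releases: MatCreateVecs() */
  VecSet(ones, 1.0);
  MatMult(A, ones, r);
  VecNorm(r, NORM_2, &nrm);
  PetscPrintf(PETSC_COMM_WORLD, "Norm A*One = %.14G\n", nrm);
  VecDestroy(ones);
  VecDestroy(r);
  return 0;
}
```

Because MatMult sums each row locally in a fixed order, identical norms here confirm identical assembled matrices, which is why the divergence later in the solve points at the solver rather than the assembly.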
Best regards, Etienne ------------------------------------------------------------------------ With 2.3.0: Using Petsc Release Version 2.3.0, Patch 44, April, 26, 2005 RUN1: Norm A*One = 6838.31173987650 * Resolution method : Preconditionned Conjugate Residual * Preconditionner : BJACOBI with ILU, Blocks of 1 * * Initial Residual : 1.50972105381228e+006 Iteration= 1 residual= 9.59236416e+004 tolerance= 7.54860527e-002 Iteration= 2 residual= 8.46044988e+004 tolerance= 1.50972105e-001 Iteration= 66 residual= 3.73014307e+001 tolerance= 4.98207948e+000 Iteration= 67 residual= 3.75579067e+001 tolerance= 5.05756553e+000 Iteration= 68 residual= 3.19392727e+000 tolerance= 5.13305158e+000 * * Number of iterations : 68 * Convergency code : 3 * Final Residual Norm : 3.19392726797939e+000 * PETSC : Resolution time : 1.000389 seconds RUN2: Norme A*Un = 6838.31173987650 * Resolution method : Preconditionned Conjugate Residual * Preconditionner : BJACOBI with ILU, Blocks of 1 * * Initial Residual : 1.50972105381228e+006 Iteration= 1 residual= 9.59236416e+004 tolerance= 7.54860527e-002 Iteration= 2 residual= 8.46044988e+004 tolerance= 1.50972105e-001 Iteration= 10 residual= 2.73382943e+004 tolerance= 7.54860527e-001 Iteration= 20 residual= 7.27122933e+003 tolerance= 1.50972105e+000 Iteration= 30 residual= 8.42209039e+003 tolerance= 2.26458158e+000 Iteration= 40 residual= 2.64670054e+003 tolerance= 3.01944211e+000 Iteration= 50 residual= 3.17446784e+002 tolerance= 3.77430263e+000 Iteration= 60 residual= 3.53234217e+001 tolerance= 4.52916316e+000 Iteration= 66 residual= 3.73014307e+001 tolerance= 4.98207948e+000 Iteration= 67 residual= 3.75579067e+001 tolerance= 5.05756553e+000 Iteration= 68 residual= 3.19392727e+000 tolerance= 5.13305158e+000 * * Number of iterations : 68 * Convergency code : 3 * Final Residual Norm : 3.19392726797939e+000 * PETSC : Resolution time : 0.888913 seconds 
******************************************************************************************************************************************************** WITH 2.3.3p8: Using Petsc Release Version 2.3.3, Patch 8, Fri Nov 16 17:03:40 CST 2007 HG revision: 414581156e67e55c761739b0deb119f7590d0f4b RUN1: Norme A*Un = 6838.31173987650 * Resolution method : Preconditionned Conjugate Residual * Preconditionner : BJACOBI with ILU, Blocks of 1 * * Initial Residual : 1.50972105381228e+006 Iteration= 1 residual= 9.59236416e+004 tolerance= 7.54860527e-002 Iteration= 2 residual= 8.46044988e+004 tolerance= 1.50972105e-001 Iteration= 10 residual= 2.73382943e+004 tolerance= 7.54860527e-001 Iteration= 20 residual= 7.27122933e+003 tolerance= 1.50972105e+000 Iteration= 30 residual= 8.42209039e+003 tolerance= 2.26458158e+000 Iteration= 40 residual= 2.64670054e+003 tolerance= 3.01944211e+000 Iteration= 50 residual= 3.17446756e+002 tolerance= 3.77430263e+000 Iteration= 60 residual= 3.53234489e+001 tolerance= 4.52916316e+000 Iteration= 65 residual= 7.12874932e+000 tolerance= 4.90659342e+000 Iteration= 66 residual= 3.72396571e+001 tolerance= 4.98207948e+000 Iteration= 67 residual= 3.75096723e+001 tolerance= 5.05756553e+000 Iteration= 68 residual= 3.19515221e+000 tolerance= 5.13305158e+000 * * Number of iterations : 68 * Convergency code : 3 * Final Residual Norm : 3.19515221050523e+000 * PETSC : Resolution time : 0.928915 seconds RUN2: Norme A*Un = 6838.31173987650 * Resolution method : Preconditionned Conjugate Residual * Preconditionner : BJACOBI with ILU, Blocks of 1 * * Initial Residual : 1.50972105381228e+006 Iteration= 1 residual= 9.59236416e+004 tolerance= 7.54860527e-002 Iteration= 2 residual= 8.46044988e+004 tolerance= 1.50972105e-001 Iteration= 10 residual= 2.73382943e+004 tolerance= 7.54860527e-001 Iteration= 20 residual= 7.27122933e+003 tolerance= 1.50972105e+000 Iteration= 30 residual= 8.42209039e+003 tolerance= 2.26458158e+000 Iteration= 40 residual= 2.64670054e+003 tolerance= 
3.01944211e+000 Iteration= 50 residual= 3.17446774e+002 tolerance= 3.77430263e+000 Iteration= 60 residual= 3.53233608e+001 tolerance= 4.52916316e+000 Iteration= 65 residual= 7.12937602e+000 tolerance= 4.90659342e+000 Iteration= 66 residual= 3.72832632e+001 tolerance= 4.98207948e+000 Iteration= 67 residual= 3.75447170e+001 tolerance= 5.05756553e+000 Iteration= 68 residual= 3.19369843e+000 tolerance= 5.13305158e+000 * * Number of iterations : 68 * Convergency code : 3 * Final Residual Norm : 3.19369843187027e+000 * PETSC : Resolution time : 0.872702 seconds Etienne -----Original Message----- From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On behalf of Matthew Knepley Sent: Wednesday, September 24, 2008 19:15 To: petsc-users at mcs.anl.gov Subject: Re: Non repetability issue and difference between 2.3.0 and 2.3.3 On Wed, Sep 24, 2008 at 11:21 AM, Etienne PERCHAT wrote: > Dear Petsc users, > > > > I come again with my comparisons between v2.3.0 and v2.3.3p8. > > > > I face a non repeatability issue with v2.3.3 that I didn't have with v2.3.0. > > I have read the exchanges made in March on a related subject but in my case > it is at the first linear system solution that two successive runs differ. > > > > > > It happens when the number of processors used is greater than 2, even on a > standard PC. > > I am solving MPIBAIJ symmetric systems with the Conjugate Residual method > preconditioned ILU(1) and Block Jacobi between subdomains. > > This system is the results of a FE assembly on an unstructured mesh. > > > > I made all the runs using -log_summary and -ksp_truemonitor. > > > > Starting with the same initial matrix and RHS, each run using 2.3.3p8 > provides slightly different results while we obtain exactly the same solution with v2.3.0.
> > > > With Petsc 2.3.3p8: > > > > Run1: Iteration= 68 residual= 3.19515221e+000 tolerance= > 5.13305158e+000 0 > > Run2: Iteration= 68 residual= 3.19588481e+000 tolerance= > 5.13305158e+000 0 > > Run3: Iteration= 68 residual= 3.19384417e+000 tolerance= > 5.13305158e+000 0 > > > > With Petsc 2.3.0: > > > > Run1: Iteration= 68 residual= 3.19369843e+000 tolerance= > 5.13305158e+000 0 > > Run2: Iteration= 68 residual= 3.19369843e+000 tolerance= > 5.13305158e+000 0 > > > > If I made a 4proc run with a mesh partitioning such that any node could be > located on more than 2 proc. I did not face the problem. It is not clear whether you have verified that on different runs, the partitioning is exactly the same. Matt > I first thought about a MPI problem related to the order in which messages > are received and then summed. > > But it would have been exactly the same with 2.3.0 ? > > > > Any tips/ideas ? > > > > Thanks by advance. > > Best regards, > > > > Etienne Perchat -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From knepley at gmail.com Thu Sep 25 06:58:48 2008 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 25 Sep 2008 06:58:48 -0500 Subject: Non repetability issue and difference between 2.3.0 and 2.3.3 In-Reply-To: <9113A52E1096EB41B1F88DD94C4369D5353CE0@EXCHSRV.transvalor.com> References: <9113A52E1096EB41B1F88DD94C4369D5353CE0@EXCHSRV.transvalor.com> Message-ID: On Thu, Sep 25, 2008 at 5:09 AM, Etienne PERCHAT wrote: > Hi Matt, > > I am sure that the partitioning is exactly the same: > I have an external tool that partitions the mesh before launching the FE code. So for all the runs the mesh partitions has been created only once and then reused. > > For the case where I wanted every ghost node to be shared by two and only two processors, I used simple geometries like rings or bars with structured meshes. 
Once again the partitions have been created once and then reused. > > The initial residuals and the initial matrix are exactly the same. > > I have added some lines in my code: > After calling MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY); > I made a Matrix vector product between A and an unity vector. Then I've computed the norm of the resulting vector. You will see below the results for 4 linear system solves (two with 2.3.0 and two with 2.3.3p8) > > Mainly: > With all runs : > 1/ the results of the matrix * unity vector product are the same: 6838.31173987650 > > 2/ the Initial Residual also : 1.50972105381228e+006 > > 3/ At iteration 40 all the runs provides exactly the same residual: > Iteration= 40 residual= 2.64670054e+003 tolerance= 3.01944211e+000 > > 3/ with 2.3.0 the final residual is always the same : 3.19392726797939e+000 > > 4/ with 2.3.3p8 the final residual vary after iteration 40. The problem here is that we run every week a collection of regression tests, in parallel, covering 40+ configurations of OS, compilers, and algorithms, checked each night, and we have never seen this behavior. So, in order to investigate further, can you 1) send us the matrix and rhs in PETSc binary format, and 2) run this problem with GMRES instead? Thanks, Matt > Some statistics made with 12 successive runs : > > We obtained 5 times 3.19515221050523, two times 3.19369843187027, three times 3.19373947848208e and two others for the two lasts. > > RUN1: 3.19515221050523e+000 > RUN2: 3.19515221050523e+000 > RUN3: 3.19369843187027e+000 > RUN4: 3.19588480582213e+000 > RUN5: 3.19515221050523e+000 > RUN6: 3.19373947848208e+000 > RUN7: 3.19515221050523e+000 > RUN8: 3.19384417350916e+000 > RUN9: 3.19515221050523e+000 > RUN10: 3.19373947848208e+000 > RUN11: 3.19369843187027e+000 > RUN12: 3.19373947848208e+000 > > > So same initial residual, same results for the matrix * unity vector product, same residual at iteration 40.
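Producing the binary files Matt asks for takes only a few calls (a sketch; the viewer calls follow the 2.3.x-era signatures, where PetscViewerDestroy() takes the viewer itself rather than its address, and the output filename is of course arbitrary):

```c
/* Sketch: dump the assembled matrix and right-hand side in PETSc
 * binary format; they can be reloaded later with MatLoad()/VecLoad(). */
#include "petscmat.h"

PetscErrorCode DumpSystem(Mat A, Vec b)
{
  PetscViewer viewer;

  PetscViewerBinaryOpen(PETSC_COMM_WORLD, "system.petsc",
                        FILE_MODE_WRITE, &viewer);
  MatView(A, viewer);            /* matrix first ...                 */
  VecView(b, viewer);            /* ... then the RHS, same file      */
  PetscViewerDestroy(viewer);
  return 0;
}
```

The binary format is lossless, unlike an ASCII dump, so the recipient can reproduce the run bit-for-bit from the same data.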
> I always used the options: > > OptionTable: -ksp_truemonitor > OptionTable: -log_summary > > Any ideas will be very welcome, don't hesitate if you need additional tests. > > It sound, perhaps, reuse of a buffer that has not been properly released ? > > Best regards, > Etienne > > ------------------------------------------------------------------------ > With 2.3.0: Using Petsc Release Version 2.3.0, Patch 44, April, 26, 2005 > > RUN1: > > Norm A*One = 6838.31173987650 > > * Resolution method : Preconditionned Conjugate Residual > * Preconditionner : BJACOBI with ILU, Blocks of 1 > * > * Initial Residual : 1.50972105381228e+006 > Iteration= 1 residual= 9.59236416e+004 tolerance= 7.54860527e-002 > Iteration= 2 residual= 8.46044988e+004 tolerance= 1.50972105e-001 > > Iteration= 66 residual= 3.73014307e+001 tolerance= 4.98207948e+000 > Iteration= 67 residual= 3.75579067e+001 tolerance= 5.05756553e+000 > Iteration= 68 residual= 3.19392727e+000 tolerance= 5.13305158e+000 * > * Number of iterations : 68 > * Convergency code : 3 > * Final Residual Norm : 3.19392726797939e+000 > * PETSC : Resolution time : 1.000389 seconds > > > RUN2: > > Norme A*Un = 6838.31173987650 > * Resolution method : Preconditionned Conjugate Residual > * Preconditionner : BJACOBI with ILU, Blocks of 1 > * > * Initial Residual : 1.50972105381228e+006 > Iteration= 1 residual= 9.59236416e+004 tolerance= 7.54860527e-002 > Iteration= 2 residual= 8.46044988e+004 tolerance= 1.50972105e-001 > Iteration= 10 residual= 2.73382943e+004 tolerance= 7.54860527e-001 > Iteration= 20 residual= 7.27122933e+003 tolerance= 1.50972105e+000 > Iteration= 30 residual= 8.42209039e+003 tolerance= 2.26458158e+000 > Iteration= 40 residual= 2.64670054e+003 tolerance= 3.01944211e+000 > Iteration= 50 residual= 3.17446784e+002 tolerance= 3.77430263e+000 > Iteration= 60 residual= 3.53234217e+001 tolerance= 4.52916316e+000 > Iteration= 66 residual= 3.73014307e+001 tolerance= 4.98207948e+000 > Iteration= 67 residual= 
3.75579067e+001 tolerance= 5.05756553e+000 > Iteration= 68 residual= 3.19392727e+000 tolerance= 5.13305158e+000 > * > * Number of iterations : 68 > * Convergency code : 3 > * Final Residual Norm : 3.19392726797939e+000 > * PETSC : Resolution time : 0.888913 seconds > > > ******************************************************************************************************************************************************** > > WITH 2.3.3p8: > > > Using Petsc Release Version 2.3.3, Patch 8, Fri Nov 16 17:03:40 CST 2007 HG revision: 414581156e67e55c761739b0deb119f7590d0f4b > > RUN1: > Norme A*Un = 6838.31173987650 > * Resolution method : Preconditionned Conjugate Residual > * Preconditionner : BJACOBI with ILU, Blocks of 1 > * > * Initial Residual : 1.50972105381228e+006 > Iteration= 1 residual= 9.59236416e+004 tolerance= 7.54860527e-002 > Iteration= 2 residual= 8.46044988e+004 tolerance= 1.50972105e-001 > Iteration= 10 residual= 2.73382943e+004 tolerance= 7.54860527e-001 > Iteration= 20 residual= 7.27122933e+003 tolerance= 1.50972105e+000 > Iteration= 30 residual= 8.42209039e+003 tolerance= 2.26458158e+000 > Iteration= 40 residual= 2.64670054e+003 tolerance= 3.01944211e+000 > Iteration= 50 residual= 3.17446756e+002 tolerance= 3.77430263e+000 > Iteration= 60 residual= 3.53234489e+001 tolerance= 4.52916316e+000 > Iteration= 65 residual= 7.12874932e+000 tolerance= 4.90659342e+000 > Iteration= 66 residual= 3.72396571e+001 tolerance= 4.98207948e+000 > Iteration= 67 residual= 3.75096723e+001 tolerance= 5.05756553e+000 > Iteration= 68 residual= 3.19515221e+000 tolerance= 5.13305158e+000 > * > * Number of iterations : 68 > * Convergency code : 3 > * Final Residual Norm : 3.19515221050523e+000 > * PETSC : Resolution time : 0.928915 seconds > > RUN2: > > Norme A*Un = 6838.31173987650 > * Resolution method : Preconditionned Conjugate Residual > * Preconditionner : BJACOBI with ILU, Blocks of 1 > * > * Initial Residual : 1.50972105381228e+006 > Iteration= 1 residual= 
9.59236416e+004 tolerance= 7.54860527e-002 > Iteration= 2 residual= 8.46044988e+004 tolerance= 1.50972105e-001 > Iteration= 10 residual= 2.73382943e+004 tolerance= 7.54860527e-001 > Iteration= 20 residual= 7.27122933e+003 tolerance= 1.50972105e+000 > Iteration= 30 residual= 8.42209039e+003 tolerance= 2.26458158e+000 > Iteration= 40 residual= 2.64670054e+003 tolerance= 3.01944211e+000 > Iteration= 50 residual= 3.17446774e+002 tolerance= 3.77430263e+000 > Iteration= 60 residual= 3.53233608e+001 tolerance= 4.52916316e+000 > Iteration= 65 residual= 7.12937602e+000 tolerance= 4.90659342e+000 > Iteration= 66 residual= 3.72832632e+001 tolerance= 4.98207948e+000 > Iteration= 67 residual= 3.75447170e+001 tolerance= 5.05756553e+000 > Iteration= 68 residual= 3.19369843e+000 tolerance= 5.13305158e+000 > * > * Number of iterations : 68 > * Convergency code : 3 > * Final Residual Norm : 3.19369843187027e+000 > * PETSC : Resolution time : 0.872702 seconds > Etienne > > > -----Message d'origine----- > De : owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] De la part de Matthew Knepley > Envoy? : mercredi 24 septembre 2008 19:15 > ? : petsc-users at mcs.anl.gov > Objet : Re: Non repetability issue and difference between 2.3.0 and 2.3.3 > > On Wed, Sep 24, 2008 at 11:21 AM, Etienne PERCHAT > wrote: >> Dear Petsc users, >> >> >> >> I come again with my comparisons between v2.3.0 and v2.3.3p8. >> >> >> >> I face a non repeatability issue with v2.3.3 that I didn't have with v2.3.0. >> >> I have read the exchanges made in March on a related subject but in my case >> it is at the first linear system solution that two successive runs differ. >> >> >> >> >> >> It happens when the number of processors used is greater than 2, even on a >> standard PC. >> >> I am solving MPIBAIJ symmetric systems with the Conjugate Residual method >> preconditioned ILU(1) and Block Jacobi between subdomains. >> >> This system is the results of a FE assembly on an unstructured mesh. 
>> >> >> >> I made all the runs using -log_summary and -ksp_truemonitor. >> >> >> >> Starting with the same initial matrix and RHS, each run using 2.3.3p8 >> provides slightly different results while we obtain exactly the same >> solution with v2.3.0. >> >> >> >> With Petsc 2.3.3p8: >> >> >> >> Run1: Iteration= 68 residual= 3.19515221e+000 tolerance= >> 5.13305158e+000 0 >> >> Run2: Iteration= 68 residual= 3.19588481e+000 tolerance= >> 5.13305158e+000 0 >> >> Run3: Iteration= 68 residual= 3.19384417e+000 tolerance= >> 5.13305158e+000 0 >> >> >> >> With Petsc 2.3.0: >> >> >> >> Run1: Iteration= 68 residual= 3.19369843e+000 tolerance= >> 5.13305158e+000 0 >> >> Run2: Iteration= 68 residual= 3.19369843e+000 tolerance= >> 5.13305158e+000 0 >> >> >> >> If I made a 4proc run with a mesh partitioning such that any node could be >> located on more than 2 proc. I did not face the problem. > > It is not clear whether you have verified that on different runs, the > partitioning is > exactly the same. > > Matt > >> I first thought about a MPI problem related to the order in which messages >> are received and then summed. >> >> But it would have been exactly the same with 2.3.0 ? >> >> >> >> Any tips/ideas ? >> >> >> >> Thanks by advance. >> >> Best regards, >> >> >> >> Etienne Perchat > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener From bsmith at mcs.anl.gov Thu Sep 25 11:17:20 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 25 Sep 2008 11:17:20 -0500 Subject: ****MJ-REJECTED**** RE: Non repetability issue and difference between 2.3.0 and 2.3.3 In-Reply-To: <9113A52E1096EB41B1F88DD94C4369D5353D39@EXCHSRV.transvalor.com> References: <9113A52E1096EB41B1F88DD94C4369D5353D39@EXCHSRV.transvalor.com> Message-ID: <992D322C-C6F4-4C95-8ED6-06F8A71A2BCF@mcs.anl.gov> Are the old and new versions of PETSc both using the EXACT SAME MPI? Barry On Sep 25, 2008, at 11:15 AM, Etienne PERCHAT wrote: > > Matt, > > Thanks a lot for your answers. > > Please find attached the requested files, zipped (I used an ASCII output). > > I tested with GMRES but I get PETSc errors. I don't understand > why ... > > [0]PETSC ERROR: KSPSolve_GMRES() line 231 in > src/ksp/ksp/impls/gmres/d:\DEVELO~1\TEST_EP\PETSC-~1.3-P\PETSC-~1.3-P > \sr > c\ > sp\ksp\impls\gmres\gmres.c > [0]PETSC ERROR: KSPSolve() line 379 in > src/ksp/ksp/interface/d:\DEVELO~1\TEST_EP\PETSC-~1.3-P\PETSC-~1.3-P > \src\ > ksp\ksp\ > NTERF~1\itfunc.c > [2]PETSC ERROR: Object is in wrong state! > [2]PETSC ERROR: Currently can use GMRES with only preconditioned > residual (right preconditioning not coded)! > > > I tried with BICG, BICGSTAB and FGMRES. They work and I notice the same > behaviour. > > I tried using another preconditioner (i.e. SOR), again with PCR; it is > even > worse: > > Iteration= 171 residual= 1.164976140862921e+001 > Iteration= 171 residual= 1.171859643971711e+001 > > I would also like to stress that it was inaccurate of me to say that the > residual varies around iteration 40.
It happens earlier around > increment 6 > (see below) > > > > * Initial Residual : 1.50972105381228e+006 > Iteration= 1 residual= 9.592364159822660e+004 > Iteration= 2 residual= 8.460449880782055e+004 > Iteration= 3 residual= 1.020430403443245e+005 > Iteration= 4 residual= 3.924181572770592e+004 > Iteration= 5 residual= 2.715343433370132e+004 > Iteration= 6 residual= 3.795381174726432e+004 > Iteration= 7 residual= 2.435841639251617e+004 > Iteration= 8 residual= 1.455427563694975e+004 > Iteration= 9 residual= 2.386233690946949e+004 > Iteration= 10 residual= 2.733829425421920e+004 > Iteration= 15 residual= 1.322501801249381e+004 > Iteration= 20 residual= 7.271229329992430e+003 > > > * Initial Residual : 1.50972105381228e+06 > Iteration= 1 residual= 9.592364159822660e+004 > Iteration= 2 residual= 8.460449880782055e+004 > Iteration= 3 residual= 1.020430403443245e+005 > Iteration= 4 residual= 3.924181572770592e+004 > Iteration= 5 residual= 2.715343433370132e+004 > Iteration= 6 residual= 3.795381174726426e+004 > Iteration= 7 residual= 2.435841639251620e+004 > Iteration= 8 residual= 1.455427563694976e+004 > Iteration= 9 residual= 2.386233690946967e+004 > Iteration= 10 residual= 2.733829425421909e+004 > Iteration= 15 residual= 1.322501801248572e+004 > Iteration= 20 residual= 7.271229329990202e+003 > > > Etienne > From etienne.perchat at transvalor.com Thu Sep 25 12:42:34 2008 From: etienne.perchat at transvalor.com (Etienne PERCHAT) Date: Thu, 25 Sep 2008 19:42:34 +0200 Subject: ****MJ-REJECTED**** RE: Non repetability issue and difference between 2.3.0 and 2.3.3 Message-ID: <9113A52E1096EB41B1F88DD94C4369D5353D42@EXCHSRV.transvalor.com> Hi Barry, yes we are in mpich2 1.0.2p1. 
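[An aside on the run-to-run drift discussed in this thread: it is characteristic of floating-point addition being non-associative. If the MPI layer sums the partial results of a dot product or norm in a different order on different runs, the low-order digits of the residual change even though the inputs are bit-identical. A minimal sketch of the effect, in plain Python rather than PETSc, with values chosen only to make the rounding visible:]

```python
# Floating-point addition is not associative: regrouping the same three
# numbers changes the result, because the intermediate rounding differs.
a, b, c = 1e16, 1.0, -1e16

s1 = (a + b) + c   # the 1.0 is absorbed when added to 1e16, then cancelled away
s2 = (a + c) + b   # the two large terms cancel first, so the 1.0 survives

print(s1, s2)  # 0.0 1.0
```

A parallel reduction is just such a sum, so changing the reduction tree (different run, different partitioning, different message arrival order) is equivalent to regrouping the terms.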
Etienne -----Message d'origine----- De?: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] De la part de Barry Smith Envoy??: jeudi 25 septembre 2008 18:17 ??: Etienne PERCHAT Cc?: petsc-users at mcs.anl.gov Objet?: Re: ****MJ-REJECTED**** RE: Non repetability issue and difference between 2.3.0 and 2.3.3 Are the old and new version of PETSc both using the EXACT SAME MPI? Barry On Sep 25, 2008, at 11:15 AM, Etienne PERCHAT wrote: > > Matt, > > Thanks a lot for your answers. > > Please find attached the asked files zipped (I used an ascii output). > > I tested with GMRES but I have PETSC_ERROR s. I don't understand > why ... > > [0]PETSC ERROR: KSPSolve_GMRES() line 231 in > src/ksp/ksp/impls/gmres/d:\DEVELO~1\TEST_EP\PETSC-~1.3-P\PETSC-~1.3-P > \sr > c\ > sp\ksp\impls\gmres\gmres.c > [0]PETSC ERROR: KSPSolve() line 379 in > src/ksp/ksp/interface/d:\DEVELO~1\TEST_EP\PETSC-~1.3-P\PETSC-~1.3-P > \src\ > ksp\ksp\ > NTERF~1\itfunc.c > [2]PETSC ERROR: Object is in wrong state! > [2]PETSC ERROR: Currently can use GMRES with only preconditioned > residual (right preconditioning not coded)! > > > I tried with BICG, BICGSTAB and FGMRS. It works and I notice the same > behaviour. > > I tried using another preconditionner (ie SOR) with again PCR it is > even > worse: > > Iteration= 171 residual= 1.164976140862921e+001 > Iteration= 171 residual= 1.171859643971711e+001 > > I would like to stress also that it was inexact for me to say that the > residual vary around iteration 40. 
It happens earlier around > increment 6 > (see below) > > > > * Initial Residual : 1.50972105381228e+006 > Iteration= 1 residual= 9.592364159822660e+004 > Iteration= 2 residual= 8.460449880782055e+004 > Iteration= 3 residual= 1.020430403443245e+005 > Iteration= 4 residual= 3.924181572770592e+004 > Iteration= 5 residual= 2.715343433370132e+004 > Iteration= 6 residual= 3.795381174726432e+004 > Iteration= 7 residual= 2.435841639251617e+004 > Iteration= 8 residual= 1.455427563694975e+004 > Iteration= 9 residual= 2.386233690946949e+004 > Iteration= 10 residual= 2.733829425421920e+004 > Iteration= 15 residual= 1.322501801249381e+004 > Iteration= 20 residual= 7.271229329992430e+003 > > > * Initial Residual : 1.50972105381228e+06 > Iteration= 1 residual= 9.592364159822660e+004 > Iteration= 2 residual= 8.460449880782055e+004 > Iteration= 3 residual= 1.020430403443245e+005 > Iteration= 4 residual= 3.924181572770592e+004 > Iteration= 5 residual= 2.715343433370132e+004 > Iteration= 6 residual= 3.795381174726426e+004 > Iteration= 7 residual= 2.435841639251620e+004 > Iteration= 8 residual= 1.455427563694976e+004 > Iteration= 9 residual= 2.386233690946967e+004 > Iteration= 10 residual= 2.733829425421909e+004 > Iteration= 15 residual= 1.322501801248572e+004 > Iteration= 20 residual= 7.271229329990202e+003 > > > Etienne > From knepley at gmail.com Thu Sep 25 14:06:15 2008 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 25 Sep 2008 14:06:15 -0500 Subject: ****MJ-REJECTED**** RE: Non repetability issue and difference between 2.3.0 and 2.3.3 In-Reply-To: <9113A52E1096EB41B1F88DD94C4369D5353D42@EXCHSRV.transvalor.com> References: <9113A52E1096EB41B1F88DD94C4369D5353D42@EXCHSRV.transvalor.com> Message-ID: On Thu, Sep 25, 2008 at 12:42 PM, Etienne PERCHAT wrote: > Hi Barry, > > yes we are in mpich2 1.0.2p1. As before, we really need to see the system and rhs, or the whole code. 
Matt > Etienne > > -----Message d'origine----- > De : owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] De la part de Barry Smith > Envoy? : jeudi 25 septembre 2008 18:17 > ? : Etienne PERCHAT > Cc : petsc-users at mcs.anl.gov > Objet : Re: ****MJ-REJECTED**** RE: Non repetability issue and difference between 2.3.0 and 2.3.3 > > > > Are the old and new version of PETSc both using the EXACT SAME MPI? > > Barry > > On Sep 25, 2008, at 11:15 AM, Etienne PERCHAT wrote: > >> >> Matt, >> >> Thanks a lot for your answers. >> >> Please find attached the asked files zipped (I used an ascii output). >> >> I tested with GMRES but I have PETSC_ERROR s. I don't understand >> why ... >> >> [0]PETSC ERROR: KSPSolve_GMRES() line 231 in >> src/ksp/ksp/impls/gmres/d:\DEVELO~1\TEST_EP\PETSC-~1.3-P\PETSC-~1.3-P >> \sr >> c\ >> sp\ksp\impls\gmres\gmres.c >> [0]PETSC ERROR: KSPSolve() line 379 in >> src/ksp/ksp/interface/d:\DEVELO~1\TEST_EP\PETSC-~1.3-P\PETSC-~1.3-P >> \src\ >> ksp\ksp\ >> NTERF~1\itfunc.c >> [2]PETSC ERROR: Object is in wrong state! >> [2]PETSC ERROR: Currently can use GMRES with only preconditioned >> residual (right preconditioning not coded)! >> >> >> I tried with BICG, BICGSTAB and FGMRS. It works and I notice the same >> behaviour. >> >> I tried using another preconditionner (ie SOR) with again PCR it is >> even >> worse: >> >> Iteration= 171 residual= 1.164976140862921e+001 >> Iteration= 171 residual= 1.171859643971711e+001 >> >> I would like to stress also that it was inexact for me to say that the >> residual vary around iteration 40. 
It happens earlier around >> increment 6 >> (see below) >> >> >> >> * Initial Residual : 1.50972105381228e+006 >> Iteration= 1 residual= 9.592364159822660e+004 >> Iteration= 2 residual= 8.460449880782055e+004 >> Iteration= 3 residual= 1.020430403443245e+005 >> Iteration= 4 residual= 3.924181572770592e+004 >> Iteration= 5 residual= 2.715343433370132e+004 >> Iteration= 6 residual= 3.795381174726432e+004 >> Iteration= 7 residual= 2.435841639251617e+004 >> Iteration= 8 residual= 1.455427563694975e+004 >> Iteration= 9 residual= 2.386233690946949e+004 >> Iteration= 10 residual= 2.733829425421920e+004 >> Iteration= 15 residual= 1.322501801249381e+004 >> Iteration= 20 residual= 7.271229329992430e+003 >> >> >> * Initial Residual : 1.50972105381228e+06 >> Iteration= 1 residual= 9.592364159822660e+004 >> Iteration= 2 residual= 8.460449880782055e+004 >> Iteration= 3 residual= 1.020430403443245e+005 >> Iteration= 4 residual= 3.924181572770592e+004 >> Iteration= 5 residual= 2.715343433370132e+004 >> Iteration= 6 residual= 3.795381174726426e+004 >> Iteration= 7 residual= 2.435841639251620e+004 >> Iteration= 8 residual= 1.455427563694976e+004 >> Iteration= 9 residual= 2.386233690946967e+004 >> Iteration= 10 residual= 2.733829425421909e+004 >> Iteration= 15 residual= 1.322501801248572e+004 >> Iteration= 20 residual= 7.271229329990202e+003 >> >> >> Etienne >> > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From etienne.perchat at transvalor.com Fri Sep 26 01:23:03 2008 From: etienne.perchat at transvalor.com (Etienne PERCHAT) Date: Fri, 26 Sep 2008 08:23:03 +0200 Subject: ****MJ-REJECTED**** RE: Non repetability issue and difference between 2.3.0 and 2.3.3 Message-ID: <9113A52E1096EB41B1F88DD94C4369D5353D45@EXCHSRV.transvalor.com> Hi Matt, I don't understand. I thought that what I've send to you in .zip file contained all the required information. 
Did you receive it? I used MatView and VecView within a PetscViewer created with PetscViewerASCIIOpen(PETSC_COMM_WORLD, NomVec, &writer); Does it contain all the required information, or do you need something more? Thanks, Etienne PS: I would like to stress that we have been using PETSc for quite a long time, that it works WONDERFULLY well, and that we are fully satisfied and really grateful to have the opportunity of using it. -----Original Message----- From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On behalf of Matthew Knepley Sent: Thursday, September 25, 2008, 21:06 To: petsc-users at mcs.anl.gov Subject: Re: ****MJ-REJECTED**** RE: Non repetability issue and difference between 2.3.0 and 2.3.3 On Thu, Sep 25, 2008 at 12:42 PM, Etienne PERCHAT wrote: > Hi Barry, > > yes we are in mpich2 1.0.2p1. As before, we really need to see the system and rhs, or the whole code. Matt From bruno.zerbo at gmail.com Fri Sep 26 04:00:45 2008 From: bruno.zerbo at gmail.com (bruno.zerbo at gmail.com) Date: Fri, 26 Sep 2008 11:00:45 +0200 Subject: Mat creation for tridiagonal matrix Message-ID: <200809261100.45773.bruno.zerbo@gmail.com> Hi, I'm new to PETSc. What kind of Mat creation routine is optimized for a tridiagonal matrix? Thank you, and compliments for the good work. Bruno Zerbo From z.sheng at ewi.tudelft.nl Fri Sep 26 10:19:48 2008 From: z.sheng at ewi.tudelft.nl (zhifeng sheng) Date: Fri, 26 Sep 2008 17:19:48 +0200 Subject: how to load a matlab matrix? In-Reply-To: <200809261100.45773.bruno.zerbo@gmail.com> References: <200809261100.45773.bruno.zerbo@gmail.com> Message-ID: <48DCFD94.20301@ewi.tudelft.nl> Dear all, I know that PETSc can print a matrix in Matlab format. Can it read a Matlab-format matrix (in the same format it prints)? I checked MatLoad, and it does not work.
Thanks a lot best regards Zhifeng From bsmith at mcs.anl.gov Fri Sep 26 10:26:22 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 26 Sep 2008 10:26:22 -0500 Subject: how to load a matlab matrix? In-Reply-To: <48DCFD94.20301@ewi.tudelft.nl> References: <200809261100.45773.bruno.zerbo@gmail.com> <48DCFD94.20301@ewi.tudelft.nl> Message-ID: <1EBBA61F-9FEC-42A1-B3E9-81ED70F611BF@mcs.anl.gov> On Sep 26, 2008, at 10:19 AM, zhifeng sheng wrote: > Dear all > > I know that Petsc can print a matrix in matlab format. can it read a > matlab format matrix? (in the same format, it prints?) > No, if you want to read/write for Matlab you should use the binary format. The Matlab programs in bin/matlab/PetscBinaryRead/Write.m are for use in Matlab. Reason: the ASCII format for reading/writing is VERY slow for larger matrices. The binary format is fast. Barry > I check Matload, and it does not work. > > Thanks a lot > best regards > Zhifeng > From z.sheng at ewi.tudelft.nl Fri Sep 26 10:40:16 2008 From: z.sheng at ewi.tudelft.nl (zhifeng sheng) Date: Fri, 26 Sep 2008 17:40:16 +0200 Subject: how to load a matlab matrix? In-Reply-To: <1EBBA61F-9FEC-42A1-B3E9-81ED70F611BF@mcs.anl.gov> References: <200809261100.45773.bruno.zerbo@gmail.com> <48DCFD94.20301@ewi.tudelft.nl> <1EBBA61F-9FEC-42A1-B3E9-81ED70F611BF@mcs.anl.gov> Message-ID: <48DD0260.60502@ewi.tudelft.nl> Barry Smith wrote: > > On Sep 26, 2008, at 10:19 AM, zhifeng sheng wrote: > >> Dear all >> >> I know that Petsc can print a matrix in matlab format. can it read a >> matlab format matrix? (in the same format, it prints?) >> > > No, if you want to read/write for Matlab you should use the binary > format. The Matlab programs > in bin/matlab/PetscBinaryRead/Write.m are for use in Matlab. > > Reason: the ASCII format for reading/writing is VERY slow for > larger matrices. The binary format is fast. > > > Barry > >> I check Matload, and it does not work. 
>> >> Thanks a lot >> best regards >> Zhifeng >> > But I am a matlab script as big as 1.2 G, and matlab can not handle it.... I am hoping by putting it in petsc, I can do something with it. Any suggestions? thanks Zhifeng From bsmith at mcs.anl.gov Fri Sep 26 11:27:38 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 26 Sep 2008 11:27:38 -0500 Subject: how to load a matlab matrix? In-Reply-To: <48DD0260.60502@ewi.tudelft.nl> References: <200809261100.45773.bruno.zerbo@gmail.com> <48DCFD94.20301@ewi.tudelft.nl> <1EBBA61F-9FEC-42A1-B3E9-81ED70F611BF@mcs.anl.gov> <48DD0260.60502@ewi.tudelft.nl> Message-ID: <8B4CB85E-9BA2-475F-8FB1-30B057EFEC13@mcs.anl.gov> I do not understand your question. You can use the PETSc binary viewer to move matrices/vectors back and forth between PETSc and Matlab. Barry On Sep 26, 2008, at 10:40 AM, zhifeng sheng wrote: > Barry Smith wrote: >> >> On Sep 26, 2008, at 10:19 AM, zhifeng sheng wrote: >> >>> Dear all >>> >>> I know that Petsc can print a matrix in matlab format. can it read >>> a matlab format matrix? (in the same format, it prints?) >>> >> >> No, if you want to read/write for Matlab you should use the >> binary format. The Matlab programs >> in bin/matlab/PetscBinaryRead/Write.m are for use in Matlab. >> >> Reason: the ASCII format for reading/writing is VERY slow for >> larger matrices. The binary format is fast. >> >> >> Barry >> >>> I check Matload, and it does not work. >>> >>> Thanks a lot >>> best regards >>> Zhifeng >>> >> > But I am a matlab script as big as 1.2 G, and matlab can not handle > it.... > > I am hoping by putting it in petsc, I can do something with it. > > Any suggestions? 
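[To illustrate Barry's point above about the binary viewer being the practical route for data too large for ASCII or Matlab: the on-disk layout that PETSc's binary viewer is commonly described as using for a Vec is just a big-endian int32 class id, an int32 length, then the raw big-endian float64 entries. The class-id constant (1211214) and the layout below are assumptions to verify against the PetscBinaryRead.m/PetscBinaryWrite.m scripts shipped with your PETSc version before relying on them:]

```python
import struct

VEC_FILE_CLASSID = 1211214  # assumed value; confirm against your PETSc headers

def write_petsc_vec(path, values):
    # Header: big-endian int32 class id and int32 length, then float64 entries.
    with open(path, "wb") as f:
        f.write(struct.pack(">ii", VEC_FILE_CLASSID, len(values)))
        f.write(struct.pack(">%dd" % len(values), *values))

def read_petsc_vec(path):
    with open(path, "rb") as f:
        classid, n = struct.unpack(">ii", f.read(8))
        if classid != VEC_FILE_CLASSID:
            raise ValueError("not a Vec file (class id %d)" % classid)
        return list(struct.unpack(">%dd" % n, f.read(8 * n)))
```

Because the payload is raw doubles, a 1.2 GB data set streams in at disk speed, with none of the parsing cost of the ASCII path.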
> > thanks > Zhifeng > From knepley at gmail.com Fri Sep 26 11:31:54 2008 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 26 Sep 2008 11:31:54 -0500 Subject: ****MJ-REJECTED**** RE: Non repetability issue and difference between 2.3.0 and 2.3.3 In-Reply-To: <9113A52E1096EB41B1F88DD94C4369D5353D45@EXCHSRV.transvalor.com> References: <9113A52E1096EB41B1F88DD94C4369D5353D45@EXCHSRV.transvalor.com> Message-ID: Shoot, I am looking through my mail but cannot find it. Can you send it to petsc-maint? Thanks, Matt On Fri, Sep 26, 2008 at 1:23 AM, Etienne PERCHAT wrote: > > Hi Matt, > > I don't understand. I thought that what I've send to you in .zip file contained all the required information. > > Did you received it ? > > I used MatView and VecView within a PetscViewer created with PetscViewerASCIIOpen(PETSC_COMM_WORLD, NomVec, &writer); > Does it contains all the require information or do you need something more? > > Thanks, > Etienne > > PS: I would like to stress that we are using PetSc since quite a long time that it works WONDERFULLY well and that we are fully satisfied and really grateful to have the opportunity of using it. > > > > -----Message d'origine----- > De : owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] De la part de Matthew Knepley > Envoy? : jeudi 25 septembre 2008 21:06 > ? : petsc-users at mcs.anl.gov > Objet : Re: ****MJ-REJECTED**** RE: Non repetability issue and difference between 2.3.0 and 2.3.3 > > On Thu, Sep 25, 2008 at 12:42 PM, Etienne PERCHAT > wrote: >> Hi Barry, >> >> yes we are in mpich2 1.0.2p1. > > As before, we really need to see the system and rhs, or the whole code. > > Matt > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener From knepley at gmail.com Fri Sep 26 11:35:04 2008 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 26 Sep 2008 11:35:04 -0500 Subject: Mat creation for tridiagonal matrix In-Reply-To: <200809261100.45773.bruno.zerbo@gmail.com> References: <200809261100.45773.bruno.zerbo@gmail.com> Message-ID: We do not have a band matrix type. I believe there are good parallel tridiagonal solvers out there, but currently we do not support them. However, you can use AIJ without much overhead. Matt On Fri, Sep 26, 2008 at 4:00 AM, wrote: > Hi, I'm new in the use of PETsc. > What kind of Mat creation routine is optimized for tridiagonal matrix? > Thank you and compliments for the good work > Bruno Zerbo -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From bhatiamanav at gmail.com Fri Sep 26 12:08:14 2008 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Fri, 26 Sep 2008 13:08:14 -0400 Subject: complex support Message-ID: <59554F43-40A1-4BCC-BFBF-1294261E8850@gmail.com> Hi, I am writing a code where I will need both double and complex support for separate calculations. I can certainly build a complex petsc library and keep the imaginary value as zero to get to double, but that would not be very efficient. If there a way to do mixed programming with the same petsc library without paying this penalty? Thanks, Manav From knepley at gmail.com Fri Sep 26 12:30:24 2008 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 26 Sep 2008 12:30:24 -0500 Subject: complex support In-Reply-To: <59554F43-40A1-4BCC-BFBF-1294261E8850@gmail.com> References: <59554F43-40A1-4BCC-BFBF-1294261E8850@gmail.com> Message-ID: Currently, no. C is not amenable to this kind of programming. Matt On Fri, Sep 26, 2008 at 12:08 PM, Manav Bhatia wrote: > Hi, > > I am writing a code where I will need both double and complex support for > separate calculations. 
I can certainly build a complex petsc library and > keep the imaginary value as zero to get to double, but that would not be > very efficient. > > If there a way to do mixed programming with the same petsc library without > paying this penalty? > > Thanks, > Manav > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From TMO at htri.net Fri Sep 26 12:40:02 2008 From: TMO at htri.net (Thomas M. Ortiz) Date: Fri, 26 Sep 2008 12:40:02 -0500 Subject: viewer interfaces between PETSc and Matlab clones Message-ID: <24DC8EF59D8E3A439DDA670D508C601A0601DA805A@HTRIMBX.HTRI.net> Hello I would like to start using PETSc in a software package I'm developing and, when considering results presentation, have considered making use of either GNU Octave or SciLab as a viewer/post-processor. I read in the PETSc User's Manual that there is a Matlab interface which can generate Matlab-compatible representations of matrices and vectors for viewing purposes and even launch Matlab sessions. I also believe the latest version of GNU Octave has support for Matlab graphics. Has anyone had experience rendering PETSc results in any of these packages who could offer any advice? Is there a PETSc API with which I could develop a tool to launch Octave or SciLab from PETSc to mirror what is possible with Matlab? Thanks in advance Tom Ortiz -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Sep 26 12:47:38 2008 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 26 Sep 2008 12:47:38 -0500 Subject: viewer interfaces between PETSc and Matlab clones In-Reply-To: <24DC8EF59D8E3A439DDA670D508C601A0601DA805A@HTRIMBX.HTRI.net> References: <24DC8EF59D8E3A439DDA670D508C601A0601DA805A@HTRIMBX.HTRI.net> Message-ID: If your graphics are 2D, I would recommend petsc4py and matplotlib. 
Matt On Fri, Sep 26, 2008 at 12:40 PM, Thomas M. Ortiz wrote: > Hello > > > > I would like to start using PETSc in a software package I'm developing and, > when considering results presentation, have considered making use of either > GNU Octave or SciLab as a viewer/post-processor. I read in the PETSc User's > Manual that there is a Matlab interface which can generate Matlab-compatible > representations of matrices and vectors for viewing purposes and even launch > Matlab sessions. > > > > I also believe the latest version of GNU Octave has support for Matlab > graphics. > > > > Has anyone had experience rendering PETSc results in any of these packages > who could offer any advice? Is there a PETSc API with which I could develop > a tool to launch Octave or SciLab from PETSc to mirror what is possible with > Matlab? > > > > Thanks in advance > > > > Tom Ortiz > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From bsmith at mcs.anl.gov Fri Sep 26 12:54:31 2008 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 26 Sep 2008 12:54:31 -0500 Subject: ****MJ-REJECTED**** RE: Non repetability issue and difference between 2.3.0 and 2.3.3 In-Reply-To: References: <9113A52E1096EB41B1F88DD94C4369D5353D45@EXCHSRV.transvalor.com> Message-ID: <026C6BFB-DDC7-4B1D-95E0-ED8A88DDA401@mcs.anl.gov> .zip files are automatically destroyed by our SPAM blocker. Please send a compressed tar file or something similar to petsc-maint at mcs.anl.gov On Sep 26, 2008, at 11:31 AM, Matthew Knepley wrote: > Shoot, I am looking through my mail but cannot find it. Can you send > it to petsc-maint? > > Thanks, > > Matt > > On Fri, Sep 26, 2008 at 1:23 AM, Etienne PERCHAT > wrote: >> >> Hi Matt, >> >> I don't understand. I thought that what I've send to you in .zip >> file contained all the required information. >> >> Did you received it ? 
>> >> I used MatView and VecView within a PetscViewer created with >> PetscViewerASCIIOpen(PETSC_COMM_WORLD, NomVec, &writer); >> Does it contains all the require information or do you need >> something more? >> >> Thanks, >> Etienne >> >> PS: I would like to stress that we are using PetSc since quite a >> long time that it works WONDERFULLY well and that we are fully >> satisfied and really grateful to have the opportunity of using it. >> >> >> >> -----Message d'origine----- >> De : owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov >> ] De la part de Matthew Knepley >> Envoy? : jeudi 25 septembre 2008 21:06 >> ? : petsc-users at mcs.anl.gov >> Objet : Re: ****MJ-REJECTED**** RE: Non repetability issue and >> difference between 2.3.0 and 2.3.3 >> >> On Thu, Sep 25, 2008 at 12:42 PM, Etienne PERCHAT >> wrote: >>> Hi Barry, >>> >>> yes we are in mpich2 1.0.2p1. >> >> As before, we really need to see the system and rhs, or the whole >> code. >> >> Matt >> >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > From bhatiamanav at gmail.com Sun Sep 28 12:44:52 2008 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Sun, 28 Sep 2008 13:44:52 -0400 Subject: block matrices In-Reply-To: <20080922084357.GF6975@brakk.ethz.ch> References: <3CC02FBD-8B47-4A94-B383-F258E02C9521@gmail.com> <20080922084357.GF6975@brakk.ethz.ch> Message-ID: <7EC01884-761E-4C18-B9F8-645803D22DD1@gmail.com> Hi Jed, Thanks for your insightful comments on this. I randomly came up with this B sub-structure and did not pay any attention to the indefinite nature of the matrix. My main aim was to get across the idea of what I wanted to achieve. In my application, I already have a lot of code that calculates the individual sub-matrices S1, S2,... Hence, as much as possible, I would like to be able to build the B matrix from these matrices. 
This brings me to two questions: I was reading the Petsc manual pages and came across the following functions: MatCreateBlockMat, MatCreateSeqAIJWithArrays and MatCreateSeqBAIJ. 1> So, if I have the S1,S2,...,S5, and I want to build the B matrix as mentioned below, would the BAIJ matrix be better to use than the AIJ? 2> And which one of the above three methods would be best to create B ? I would appreciate any comments on this. Thanks, Manav On Sep 22, 2008, at 4:43 AM, Jed Brown wrote: > On Sun 2008-09-21 12:11, Manav Bhatia wrote: >> B = 3 x 3 blocks >> >> row 1 of B = 0, S1, 0 >> row 2 of B = S2, 0 , S3 >> row 3 of B = S4, S5, 0 > > What sort of preconditioner do you intend to use? If you are using a > direct solver, then you will need to explicitly assemble B. This > can be > done in a black-box manner from the sub-matrices, but it might be > better > to assemble B and extract the submatrices using MatGetSubMatrix() > (assuming you need them elsewhere). If you will be using an iterative > solver, normal preconditioners will fail because the matrix is > indefinite. In this case, you can create a MATSHELL (which implements > MatMult, the action of B on a vector) and a PCSHELL which > approximately > inverts B using a block factorization. > > Jed From bhatiamanav at gmail.com Sun Sep 28 13:22:52 2008 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Sun, 28 Sep 2008 14:22:52 -0400 Subject: block matrices In-Reply-To: References: <3CC02FBD-8B47-4A94-B383-F258E02C9521@gmail.com> Message-ID: <9445A591-465E-46FA-B019-80B977AE4CD6@gmail.com> Hi, I was reading the Petsc manual pages and came across the following functions: MatCreateBlockMat, MatCreateSeqAIJWithArrays and MatCreateSeqBAIJ. 1> So, if I have the S1,S2,...,S5, and I want to build the B matrix as mentioned below, would the BAIJ matrix be better to use than the AIJ? How about BlockMat? 2> And which one of the above three methods would be best to create B ? I would appreciate any comments on this. 
Thanks, Manav On Sep 21, 2008, at 12:50 PM, Hong Zhang wrote: > > Manav, > > On Sun, 21 Sep 2008, Manav Bhatia wrote: > >> Hi, >> >> I have an application in which I have multiple sparse matrices, >> which together form one bigger matrix. For example: if S1, S2, S3, S4 >> and S5 are my sparse matrices, I need to create a matrix B of the >> following form >> >> B = 3 x 3 blocks >> >> row 1 of B = 0, S1, 0 >> row 2 of B = S2, 0 , S3 >> row 3 of B = S4, S5, 0 >> >> If I build the S1... S5 independently, is there a way for me to >> directly embed these matrices into B without having to explicitly >> copy >> the values from each matrix? > > No, we do not have this function. > You can use MatSetValues() > http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatSetValues.html > > to insert a block of values. > > Hong > >> I would appreciate any help. Please also, let me know if there is an >> example code somewhere about this. >> >> Regards, >> Manav >> >> >> >> > From bhatiamanav at gmail.com Sun Sep 28 14:48:47 2008 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Sun, 28 Sep 2008 15:48:47 -0400 Subject: BAIJ and AIJ formats Message-ID: <743CE57A-5098-49D6-8848-EECE2E85EB9C@gmail.com> Hi, I have a few questions about the block format matrix. In the function: MatCreateSeqBAIJ, the arguments are bs - size of block m - number of rows n - number of columns nz - number of nonzero blocks per block row (same for all rows) nnz - array containing the number of nonzero blocks in the various block rows (possibly different for each block row) or PETSC_NULL If I specify the nnz vector, then what is the dimension of this vector? Is that equal to the block size? If so, then is it assumed that all blocks have the same number of non-zeros per row? If my blocks have different non-zero patterns, then should I use an AIJ format instead of a BAIJ format? Thanks, Manav -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bhatiamanav at gmail.com Sun Sep 28 14:50:41 2008 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Sun, 28 Sep 2008 15:50:41 -0400 Subject: BAIJ and AIJ formats In-Reply-To: <743CE57A-5098-49D6-8848-EECE2E85EB9C@gmail.com> References: <743CE57A-5098-49D6-8848-EECE2E85EB9C@gmail.com> Message-ID: <326C6861-0712-44AA-B4C1-EA81CE424C00@gmail.com> One more question: When is it advantageous to use BAIJ format instead of AIJ? Thanks, Manav On Sep 28, 2008, at 3:48 PM, Manav Bhatia wrote: > Hi, > > I have a few questions about the block format matrix. > > In the function: MatCreateSeqBAIJ, the arguments are > > bs - size of block > m - number of rows > n - number of columns > nz - number of nonzero blocks per block row (same for all rows) > nnz - array containing the number of nonzero blocks in the various > block rows (possibly different for each block row) or PETSC_NULL > > If I specify the nnz vector, then what is the dimension of this > vector? Is that equal to the block size? If so, then is it assumed > that all blocks have the same number of non-zeros per row? > > If my blocks have different non-zero patterns, then should I use an > AIJ format instead of a BAIJ format? > > Thanks, > Manav -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Sun Sep 28 16:58:42 2008 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Sun, 28 Sep 2008 16:58:42 -0500 (CDT) Subject: BAIJ and AIJ formats In-Reply-To: <326C6861-0712-44AA-B4C1-EA81CE424C00@gmail.com> References: <743CE57A-5098-49D6-8848-EECE2E85EB9C@gmail.com> <326C6861-0712-44AA-B4C1-EA81CE424C00@gmail.com> Message-ID: Manav, If your matrix can be stored as BAIJ with dense blocks, then using BAIJ would be more memory efficient. See slides 42 of http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/tutorials/LCRC-06.pdf. Hong On Sun, 28 Sep 2008, Manav Bhatia wrote: > One more question: > When is it advantageous to use BAIJ format instead of AIJ? 
> > Thanks, > Manav > > > On Sep 28, 2008, at 3:48 PM, Manav Bhatia wrote: > >> Hi, >> >> I have a few questions about the block format matrix. >> >> In the function: MatCreateSeqBAIJ, the arguments are >> >> bs - size of block >> m - number of rows >> n - number of columns >> nz - number of nonzero blocks per block row (same for all rows) >> nnz - array containing the number of nonzero blocks in the various block >> rows (possibly different for each block row) or PETSC_NULL >> >> If I specify the nnz vector, then what is the dimension of this vector? Is >> that equal to the block size? If so, then is it assumed that all blocks >> have the same number of non-zeros per row? >> >> If my blocks have different non-zero patterns, then should I use an AIJ >> format instead of a BAIJ format? >> >> Thanks, >> Manav > From hzhang at mcs.anl.gov Sun Sep 28 17:05:17 2008 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Sun, 28 Sep 2008 17:05:17 -0500 (CDT) Subject: BAIJ and AIJ formats In-Reply-To: <743CE57A-5098-49D6-8848-EECE2E85EB9C@gmail.com> References: <743CE57A-5098-49D6-8848-EECE2E85EB9C@gmail.com> Message-ID: On Sun, 28 Sep 2008, Manav Bhatia wrote: > Hi, > > I have a few questions about the block format matrix. > > In the function: MatCreateSeqBAIJ, the arguments are > > bs - size of block > m - number of rows > n - number of columns > nz - number of nonzero blocks per block row (same for all rows) > nnz - array containing the number of nonzero blocks in the various block > rows (possibly different for each block row) or PETSC_NULL > > If I specify the nnz vector, then what is the dimension of this vector? Is > that equal to the block size? If so, then is it assumed that all blocks have > the same number of non-zeros per row? Treating each block of BAIJ matrix as a single entry in AIJ format, nnz is an array of size N/bs. > > If my blocks have different non-zero patterns, then should I use an AIJ > format instead of a BAIJ format? You should use AIJ format. 
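Hong's point above, that nnz has one entry per block row (an array of length N/bs), can be illustrated with a small self-contained Python count. The nonzero pattern below is a made-up example, not anything from the thread:

```python
def baij_nnz(entries, n_rows, bs):
    """Given scalar nonzero positions, count the distinct nonzero
    *blocks* in each block row. This is what the nnz argument of
    MatCreateSeqBAIJ describes: one integer per block row, so the
    array has length n_rows // bs, not bs."""
    n_block_rows = n_rows // bs
    block_cols = [set() for _ in range(n_block_rows)]
    for (i, j) in entries:
        # map a scalar entry to the block it lives in
        block_cols[i // bs].add(j // bs)
    return [len(s) for s in block_cols]

# 4x4 matrix with bs=2, nonzeros at (0,0), (1,1), (0,3), (2,2), (3,3):
# block row 0 touches block columns {0, 1}; block row 1 touches {1}.
entries = [(0, 0), (1, 1), (0, 3), (2, 2), (3, 3)]
nnz = baij_nnz(entries, n_rows=4, bs=2)
```

Note that a block counts as nonzero if any of its bs*bs entries is nonzero; BAIJ then stores the whole block densely, which is why Hong recommends AIJ when the blocks themselves are sparse with differing patterns.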
See http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#blocks. Hong > > Thanks, > Manav From jed at 59A2.org Mon Sep 29 03:54:11 2008 From: jed at 59A2.org (Jed Brown) Date: Mon, 29 Sep 2008 10:54:11 +0200 Subject: block matrices In-Reply-To: <7EC01884-761E-4C18-B9F8-645803D22DD1@gmail.com> References: <3CC02FBD-8B47-4A94-B383-F258E02C9521@gmail.com> <20080922084357.GF6975@brakk.ethz.ch> <7EC01884-761E-4C18-B9F8-645803D22DD1@gmail.com> Message-ID: <20080929085411.GT7854@brakk.ethz.ch> On Sun 2008-09-28 13:44, Manav Bhatia wrote: > I was reading the Petsc manual pages and came across the following > functions: MatCreateBlockMat, MatCreateSeqAIJWithArrays and > MatCreateSeqBAIJ. I don't think you want to use any of these. It sounds like you are misinterpreting the meaning of block matrices in PETSc. The PETSc block matrices (BAIJ) are a way to store matrices for multi-component problems which have dense coupling between the components. These blocks are quite small, often 3 or 5 components, and there is normally one block per node in the discretization. > 1> So, if I have the S1,S2,...,S5, and I want to build the B matrix as > mentioned below, would the BAIJ matrix be better to use than the AIJ? I think you want to use AIJ. If you are using a direct solver, just iterate through the rows of your S matrices and assemble B using MatSetValues (you can use MatGetRow to access the rows of the S matrices). If you want to use an iterative solver, you will probably need to write your own preconditioner (see PCSHELL) which uses a block factorization of B. For instance, use your knowledge of the invertibility of the S matrices to perform pivoting in order to get a block LU decomposition. This will normally involve a Schur complement which you need to precondition (a very problem-specific thing). The block LU decomposition can be used to invert completely B (i.e. 
use the preconditioner as a `direct' method) but it may be more efficient to drop some blocks from the factorization and/or replace some inverses in the factorization with their preconditioner. Again, what is appropriate is highly problem-specific. I don't know where your matrix comes from but perhaps there is some literature on preconditioning this matrix. My guess is that effective preconditioning of B will be fairly nontrivial and involve `advanced' use of PETSc. I would recommend using a direct solver first. Jed -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From bruno.zerbo at gmail.com Mon Sep 29 04:56:44 2008 From: bruno.zerbo at gmail.com (bruno.zerbo at gmail.com) Date: Mon, 29 Sep 2008 11:56:44 +0200 Subject: complex support In-Reply-To: References: <59554F43-40A1-4BCC-BFBF-1294261E8850@gmail.com> Message-ID: <200809291156.44865.bruno.zerbo@gmail.com> Hi, I use PetscScalar to specify a complex and PetscReal for a double, 1) is it wrong? 2) is there a constant equal to the imaginary unit? So far I have created one myself. Thanks Bruno On Friday 26 September 2008 19:30:24 Matthew Knepley wrote: > Currently, no. C is not amenable to this kind of programming. > > Matt > > On Fri, Sep 26, 2008 at 12:08 PM, Manav Bhatia wrote: > > Hi, > > > > I am writing a code where I will need both double and complex support > > for separate calculations. I can certainly build a complex petsc library > > and keep the imaginary value as zero to get to double, but that would not > > be very efficient. > > > > Is there a way to do mixed programming with the same petsc library > > without paying this penalty?
> > > > Thanks, > > Manav From jed at 59A2.org Mon Sep 29 05:23:48 2008 From: jed at 59A2.org (Jed Brown) Date: Mon, 29 Sep 2008 12:23:48 +0200 Subject: complex support In-Reply-To: <200809291156.44865.bruno.zerbo@gmail.com> References: <59554F43-40A1-4BCC-BFBF-1294261E8850@gmail.com> <200809291156.44865.bruno.zerbo@gmail.com> Message-ID: <20080929102348.GX7854@brakk.ethz.ch> On Mon 2008-09-29 11:56, bruno.zerbo at gmail.com wrote: > Hi, > I use PetscScalar to specify a complex and PetscReal for a double, > 1) is it wrong? Matrices and vectors have entries of type PetscScalar (which is complex if compiled --with-scalar-type=complex). When solving a complex problem, there will be certain values (e.g. coordinates and parameters) which are always real. This is the main purpose of PetscReal. You can't make vectors of type PetscReal. > 2) is there a constant equal to the imaginary unit? PETSC_i Jed -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From bruno.zerbo at gmail.com Mon Sep 29 05:35:51 2008 From: bruno.zerbo at gmail.com (bruno.zerbo at gmail.com) Date: Mon, 29 Sep 2008 12:35:51 +0200 Subject: complex support In-Reply-To: <20080929102348.GX7854@brakk.ethz.ch> References: <59554F43-40A1-4BCC-BFBF-1294261E8850@gmail.com> <200809291156.44865.bruno.zerbo@gmail.com> <20080929102348.GX7854@brakk.ethz.ch> Message-ID: <200809291235.51913.bruno.zerbo@gmail.com> Now I understand, thank you Bruno On Monday 29 September 2008 12:23:48 Jed Brown wrote: > On Mon 2008-09-29 11:56, bruno.zerbo at gmail.com wrote: > > Hi, > > I use PetscScalar to specify a complex and PetscReal for a double, > > 1) is it wrong? > > Matrices and vectors have entries of type PetscScalar (which is complex > if compiled --with-scalar-type=complex). When solving a complex > problem, there will be certain values (e.g. coordinates and parameters) > which are always real.
This is the main purpose of PetscReal. You > can't make vectors of type PetscReal. > > > 2) is there a constant equal to the imaginary unit? > > PETSC_i > > Jed From TMO at htri.net Mon Sep 29 08:15:26 2008 From: TMO at htri.net (Thomas M. Ortiz) Date: Mon, 29 Sep 2008 08:15:26 -0500 Subject: viewer interfaces between PETSc and Matlab clones In-Reply-To: References: <24DC8EF59D8E3A439DDA670D508C601A0601DA805A@HTRIMBX.HTRI.net> Message-ID: <24DC8EF59D8E3A439DDA670D508C601A0601DA8145@HTRIMBX.HTRI.net> They will be both 2D and 3D. I'll look into your suggestions. Thanks. -----Original Message----- From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Matthew Knepley Sent: Friday, September 26, 2008 12:48 PM To: petsc-users at mcs.anl.gov Subject: Re: viewer interfaces between PETSc and Matlab clones If your graphics are 2D, I would recommend petsc4py and matplotlib. Matt On Fri, Sep 26, 2008 at 12:40 PM, Thomas M. Ortiz wrote: > Hello > > > > I would like to start using PETSc in a software package I'm developing and, > when considering results presentation, have considered making use of either > GNU Octave or SciLab as a viewer/post-processor. I read in the PETSc User's > Manual that there is a Matlab interface which can generate Matlab-compatible > representations of matrices and vectors for viewing purposes and even launch > Matlab sessions. > > > > I also believe the latest version of GNU Octave has support for Matlab > graphics. > > > > Has anyone had experience rendering PETSc results in any of these packages > who could offer any advice? Is there a PETSc API with which I could develop > a tool to launch Octave or SciLab from PETSc to mirror what is possible with > Matlab? > > > > Thanks in advance > > > > Tom Ortiz > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener From dalcinl at gmail.com Mon Sep 29 10:44:38 2008 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Mon, 29 Sep 2008 12:44:38 -0300 Subject: viewer interfaces between PETSc and Matlab clones In-Reply-To: <24DC8EF59D8E3A439DDA670D508C601A0601DA8145@HTRIMBX.HTRI.net> References: <24DC8EF59D8E3A439DDA670D508C601A0601DA805A@HTRIMBX.HTRI.net> <24DC8EF59D8E3A439DDA670D508C601A0601DA8145@HTRIMBX.HTRI.net> Message-ID: However, note that using petsc4py+matplotlib from Python will not fill your needs for 3D visualization. For (really good) 3D visualization scripting in Python, you have MayaVi-2. A powerful 3D alternative would be ParaView and its support for Python scripting. Moreover, as ParaView supports MPI, you could even use mpi4py to do MPI communication between your computing application (written in C/C++/Fortran) and a (possibly distributed) ParaView engine managed through a Python script. I never tried this, but with some (perhaps hard) work, the final result could be really awesome. On Mon, Sep 29, 2008 at 10:15 AM, Thomas M. Ortiz wrote: > They will be both 2D and 3D. I'll look into your suggestions. Thanks. > > -----Original Message----- > From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Matthew Knepley > Sent: Friday, September 26, 2008 12:48 PM > To: petsc-users at mcs.anl.gov > Subject: Re: viewer interfaces between PETSc and Matlab clones > > If your graphics are 2D, I would recommend petsc4py and matplotlib. > > Matt > > On Fri, Sep 26, 2008 at 12:40 PM, Thomas M. Ortiz wrote: >> Hello >> >> >> >> I would like to start using PETSc in a software package I'm developing and, >> when considering results presentation, have considered making use of either >> GNU Octave or SciLab as a viewer/post-processor.
I read in the PETSc User's >> Manual that there is a Matlab interface which can generate Matlab-compatible >> representations of matrices and vectors for viewing purposes and even launch >> Matlab sessions. >> >> >> >> I also believe the latest version of GNU Octave has support for Matlab >> graphics. >> >> >> >> Has anyone had experience rendering PETSc results in any of these packages >> who could offer any advice? Is there a PETSc API with which I could develop >> a tool to launch Octave or SciLab from PETSc to mirror what is possible with >> Matlab? >> >> >> >> Thanks in advance >> >> >> >> Tom Ortiz >> >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From TMO at htri.net Mon Sep 29 11:37:36 2008 From: TMO at htri.net (Thomas M. Ortiz) Date: Mon, 29 Sep 2008 11:37:36 -0500 Subject: viewer interfaces between PETSc and Matlab clones In-Reply-To: References: <24DC8EF59D8E3A439DDA670D508C601A0601DA805A@HTRIMBX.HTRI.net> <24DC8EF59D8E3A439DDA670D508C601A0601DA8145@HTRIMBX.HTRI.net> Message-ID: <24DC8EF59D8E3A439DDA670D508C601A0601DA81A4@HTRIMBX.HTRI.net> One important (nontechnical) feature my solution will have to implement is the ability to redistribute this visualization capability to commercial customers at a reasonable cost. Thanks for the additional suggestions.
-----Original Message----- From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Lisandro Dalcin Sent: Monday, September 29, 2008 10:45 AM To: petsc-users at mcs.anl.gov Subject: Re: viewer interfaces between PETSc and Matlab clones However, note that using petsc4py+matplotlib from Python will not fill your needs for 3D visualization. For (really good) 3D visualization scripting in Python, you have MayaVi-2. A powerful 3D alternative would be ParaView and its support for Python scripting. Moreover, as ParaView supports MPI, you could even use mpi4py to do MPI communication between your computing application (written in C/C++/Fortran) and a (possibly distributed) ParaView engine managed through a Python script. I never tried this, but with some (perhaps hard) work, the final result could be really awesome. On Mon, Sep 29, 2008 at 10:15 AM, Thomas M. Ortiz wrote: > They will be both 2D and 3D. I'll look into your suggestions. Thanks. > > -----Original Message----- > From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On Behalf Of Matthew Knepley > Sent: Friday, September 26, 2008 12:48 PM > To: petsc-users at mcs.anl.gov > Subject: Re: viewer interfaces between PETSc and Matlab clones > > If your graphics are 2D, I would recommend petsc4py and matplotlib. > > Matt > > On Fri, Sep 26, 2008 at 12:40 PM, Thomas M. Ortiz wrote: >> Hello >> >> >> >> I would like to start using PETSc in a software package I'm developing and, >> when considering results presentation, have considered making use of either >> GNU Octave or SciLab as a viewer/post-processor. I read in the PETSc User's >> Manual that there is a Matlab interface which can generate Matlab-compatible >> representations of matrices and vectors for viewing purposes and even launch >> Matlab sessions. >> >> >> >> I also believe the latest version of GNU Octave has support for Matlab >> graphics.
>> >> >> >> Has anyone had experience rendering PETSc results in any of these packages >> who could offer any advice? Is there a PETSc API with which I could develop >> a tool to launch Octave or SciLab from PETSc to mirror what is possible with >> Matlab? >> >> >> >> Thanks in advance >> >> >> >> Tom Ortiz >> >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > -- Lisandro Dalcín --------------- Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC) Instituto de Desarrollo Tecnológico para la Industria Química (INTEC) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) PTLC - Güemes 3450, (3000) Santa Fe, Argentina Tel/Fax: +54-(0)342-451.1594 From recrusader at gmail.com Mon Sep 29 13:33:18 2008 From: recrusader at gmail.com (Yujie) Date: Mon, 29 Sep 2008 11:33:18 -0700 Subject: about ISCreateGeneral() and MatGetSubMatrices() Message-ID: <7ff0ee010809291133u7afb3e0hdb72cac7d68e58e8@mail.gmail.com> I am wondering how to use ISCreateGeneral(). Assume I run my code on 2 CPUs. If I use an MPI communicator to create an IS object with ISCreateGeneral() from an array on a single CPU, how is this IS object distributed across the 2 CPUs? And if I use MatGetSubMatrices() to get different submatrices on different CPUs, how should I create the IS objects? Should I use PETSC_COMM_SELF? Thanks a lot. Regards, Yujie -------------- next part -------------- An HTML attachment was scrubbed...
URL: From knepley at gmail.com Mon Sep 29 13:40:48 2008 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 29 Sep 2008 13:40:48 -0500 Subject: about ISCreateGeneral() and MatGetSubMatrices() In-Reply-To: <7ff0ee010809291133u7afb3e0hdb72cac7d68e58e8@mail.gmail.com> References: <7ff0ee010809291133u7afb3e0hdb72cac7d68e58e8@mail.gmail.com> Message-ID: On Mon, Sep 29, 2008 at 1:33 PM, Yujie wrote: > I am wondering how to use ISCreateGeneral(). > > Assuming I use 2cpus when I run my codes. I use MPI_COMM to create an object > of IS using ISCreateGeneral() and an array on a single cpu. how to > distribute this IS object on these 2cpus? If I use MatGetSubMatrices() to > get different submatrices on different cpus. I am wondering how to create IS > object, using PETSC_COMM_SELF? thanks a lot. You provide the indices you want for each process \emph{locally} on that process. There is no facility for communicating indices in IS. Matt > Regards, > > Yujie -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From recrusader at gmail.com Mon Sep 29 13:53:14 2008 From: recrusader at gmail.com (Yujie) Date: Mon, 29 Sep 2008 11:53:14 -0700 Subject: about ISCreateGeneral() and MatGetSubMatrices() In-Reply-To: References: <7ff0ee010809291133u7afb3e0hdb72cac7d68e58e8@mail.gmail.com> Message-ID: <7ff0ee010809291153t7ded6b75udde2aa8707d0a408@mail.gmail.com> That is, even if one uses MPI_COMM to create an IS object, it is always a sequential object, right? Thanks, Matt. On Mon, Sep 29, 2008 at 11:40 AM, Matthew Knepley wrote: > On Mon, Sep 29, 2008 at 1:33 PM, Yujie wrote: > > I am wondering how to use ISCreateGeneral(). > > > > Assuming I use 2cpus when I run my codes. I use MPI_COMM to create an > object > > of IS using ISCreateGeneral() and an array on a single cpu. how to > > distribute this IS object on these 2cpus?
If I use MatGetSubMatrices() to > > get different submatrices on different cpus. I am wondering how to create > IS > > object, using PETSC_COMM_SELF? thanks a lot. > > You provide the indices you want for each process \emph{locally} on > that process. > There is no facility for communicating indices in IS. > > Matt > > > Regards, > > > > Yujie > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Sep 29 14:10:28 2008 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 29 Sep 2008 14:10:28 -0500 Subject: about ISCreateGeneral() and MatGetSubMatrices() In-Reply-To: <7ff0ee010809291153t7ded6b75udde2aa8707d0a408@mail.gmail.com> References: <7ff0ee010809291133u7afb3e0hdb72cac7d68e58e8@mail.gmail.com> <7ff0ee010809291153t7ded6b75udde2aa8707d0a408@mail.gmail.com> Message-ID: On Mon, Sep 29, 2008 at 1:53 PM, Yujie wrote: > that is, even if one use MPI_COMM to create an IS object. it is always a > sequential object, right? thanks, Matt. It has some collective operations, just not the one you were asking for. Matt > On Mon, Sep 29, 2008 at 11:40 AM, Matthew Knepley wrote: >> >> On Mon, Sep 29, 2008 at 1:33 PM, Yujie wrote: >> > I am wondering how to use ISCreateGeneral(). >> > >> > Assuming I use 2cpus when I run my codes. I use MPI_COMM to create an >> > object >> > of IS using ISCreateGeneral() and an array on a single cpu. how to >> > distribute this IS object on these 2cpus? If I use MatGetSubMatrices() >> > to >> > get different submatrices on different cpus. I am wondering how to >> > create IS >> > object, using PETSC_COMM_SELF? thanks a lot. >> >> You provide the indices you want for each process \emph{locally} on >> that process. >> There is no facility for communicating indices in IS. 
>> >> Matt >> >> > Regards, >> > >> > Yujie >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
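Matt's answer above, that each process supplies its own index list locally and IS does no redistribution, can be mimicked in plain Python. The contiguous ownership split below is a hypothetical example of what each rank might pass to ISCreateGeneral(); the function name comes from the thread, while the splitting scheme is an assumption chosen only for illustration:

```python
def local_indices(rank, size, n_global):
    """The indices *this* rank would hand to ISCreateGeneral().
    Nothing is communicated: each rank computes (or otherwise already
    knows) its own list, exactly as Matt describes. Here we use a
    contiguous split with the remainder spread over the first ranks."""
    per = n_global // size
    rem = n_global % size
    start = rank * per + min(rank, rem)
    length = per + (1 if rank < rem else 0)
    return list(range(start, start + length))

# Simulate two "ranks" covering 5 global indices:
parts = [local_indices(r, 2, 5) for r in range(2)]
```

For MatGetSubMatrices(), where each process may want an unrelated submatrix, each process would build its own arbitrary index array the same way and create the IS on PETSC_COMM_SELF, since the list is purely local to that process.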