[petsc-users] how to update the matrix in KSP

Gu Shiyuan gshy2014 at gmail.com
Wed Mar 16 23:16:56 CDT 2011


Hi,
    I want to solve a linear equation Kx = b, where K changes at each time
step and the preconditioner is a shell function that does not change. After
I change K, do I need to explicitly call KSPSetOperators/PCSetOperators
again to reset the matrix inside the KSP solver? Does the KSP hold copies
of the matrices, or just references to them? I.e.,

PCSetOperators(pc, K, K, SAME_PRECONDITIONER);
/* ... other setup ... */
KSPSolve(...);
/* ... change K ... */
/* Is PCSetOperators(......) needed here? */

And what if I want to change the preconditioner? Do I need to destroy the
PC/KSP objects and re-create and set up everything, or will
KSPSetOperators/PCSetOperators release the memory associated with the old
matrices (e.g., ILU factors)? Thanks.
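
For concreteness, here is a minimal self-contained sketch of the pattern I
mean (a hypothetical toy problem, written against the PETSc 3.1-era API in
which KSPSetOperators takes a MatStructure flag; error checking omitted for
brevity). The comments mark the two points in question:

#include <petscksp.h>

/* shell preconditioner that never changes; the identity here, as a
   stand-in for the real application of M^{-1} */
static PetscErrorCode MyShellPCApply(PC pc, Vec x, Vec y)
{
  return VecCopy(x, y);
}

int main(int argc, char **argv)
{
  Mat      K;
  Vec      x, b;
  KSP      ksp;
  PC       pc;
  PetscInt i, n = 10, step;

  PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);

  MatCreateSeqAIJ(PETSC_COMM_SELF, n, n, 1, PETSC_NULL, &K);
  VecCreateSeq(PETSC_COMM_SELF, n, &x);
  VecDuplicate(x, &b);
  VecSet(b, 1.0);

  KSPCreate(PETSC_COMM_SELF, &ksp);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCSHELL);
  PCShellSetApply(pc, MyShellPCApply);
  KSPSetOperators(ksp, K, K, SAME_PRECONDITIONER);

  for (step = 0; step < 5; step++) {
    /* change the entries of K in place at each time step */
    for (i = 0; i < n; i++) {
      PetscScalar v = 2.0 + step;
      MatSetValues(K, 1, &i, 1, &i, &v, INSERT_VALUES);
    }
    MatAssemblyBegin(K, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(K, MAT_FINAL_ASSEMBLY);

    /* question 1: is another KSPSetOperators(ksp, K, K,
       SAME_PRECONDITIONER) required here, or does the KSP see the new
       values through its reference to K? */
    KSPSolve(ksp, b, x);
  }

  /* question 2: if the preconditioner itself had changed, would resetting
     the operators free the old PC data (e.g., ILU factors), or must the
     KSP/PC be destroyed and rebuilt? */
  KSPDestroy(ksp);  /* 3.1-era calling convention: object, not pointer */
  MatDestroy(K);
  VecDestroy(x);
  VecDestroy(b);
  PetscFinalize();
  return 0;
}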



On Wed, Mar 16, 2011 at 12:00 PM, <petsc-users-request at mcs.anl.gov> wrote:

>
> Today's Topics:
>
>   1. Re:  Building with MKL 10.3 (Preston, Eric - IS)
>   2. Re:  Building with MKL 10.3 (Robert Ellis)
>   3. Re:  Building with MKL 10.3 (Jed Brown)
>   4. Re:  Building with MKL 10.3 (Natarajan CS)
>   5. Re:  Building with MKL 10.3 (Rob Ellis)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 15 Mar 2011 16:55:25 -0400
> From: "Preston, Eric - IS" <Eric.Preston at itt.com>
> Subject: Re: [petsc-users] Building with MKL 10.3
> To: "petsc-users at mcs.anl.gov" <petsc-users at mcs.anl.gov>
> Message-ID:
>        <A0853E30B604B04CABC1DDDF34DD586F0984FF9FC2 at USFWA1EXMBX5.itt.net>
> Content-Type: text/plain; charset="us-ascii"
>
>
> As the original poster, I can say I didn't give much thought to using the
> threaded vs. non-threaded MKL libraries; I was just sharing a solution for
> building with a different MKL variant (which should probably be
> incorporated into the config scripts). However, the discussion it spawned
> was informative. The main consideration there was not performance but
> compatibility with your other libraries. I know that on Windows, for
> instance, different run-time libraries are used in each case, and all your
> components must be compiled with the same option. Thankfully, I'm not
> using Windows for this project, so it might not make any difference.
>
> Eric Preston
>
>
> > Hello,
> >    I am neither a regular PETSc user nor a contributor, so preemptive
> > apologies if I am completely off the mark here.
> >
> > I am not sure whether the original poster had hyper-threading in mind
> > when he asked about multi-threading. In case that was the idea, I don't
> > think using PETSc with MKL (HT) is going to give any benefit; I don't
> > think MKL is really insensitive to resource contention.
> >
> > Also, I wonder what percentage of the code is actually BLAS/LAPACK
> > intensive enough to make any significant dent in wall-clock time?
> >
> > Of course, +1 to everything else posted above.
> >
> > Cheers,
> >
> > C.S.N
>
>
>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 15 Mar 2011 21:30:41 +0000
> From: Robert Ellis <Robert.Ellis at geosoft.com>
> Subject: Re: [petsc-users] Building with MKL 10.3
>
> All,
>
> Coincidentally, I have just spent much of the last two weeks testing the
> effect of the latest MKL BLAS/LAPACK release on a large
> domain-decomposition-style PETSc application: MPICH2, Windows, Intel
> Fortran, VS 2010 C++, all statically linked, no hyperthreading,
> multi-blade, latest dual hex-core Xeons, 25 GB RAM. Each core owns one
> domain, and a shell is used for the matrix operations. Regardless of the
> number of threads set for MKL or OMP, the MKL performance was worse than
> simply using --download-f-blas-lapack=1. My interpretation is that the
> decomposition of one domain per core effectively saturates the hardware,
> and performance is actually degraded by the more sophisticated MKL.
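>
> (For anyone reproducing this kind of comparison: the baseline is selected
> at PETSc configure time, roughly as below. The MKL path is illustrative
> only, not the exact options used here.)
>
>   ./configure --download-f-blas-lapack=1
>   ./configure --with-blas-lapack-dir=/opt/intel/mkl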
>
> And, yes, I know that Windows is less than ideal for this type of work but
> there are other constraints...
>
> Cheers,
> Rob
>
>
>
> ------------------------------
>
> Message: 3
> Date: Tue, 15 Mar 2011 22:35:31 +0100
> From: Jed Brown <jed at 59A2.org>
> Subject: Re: [petsc-users] Building with MKL 10.3
>
> On Tue, Mar 15, 2011 at 22:30, Robert Ellis <Robert.Ellis at geosoft.com> wrote:
>
> > Regardless of setting the number of threads for MKL or OMP, the MKL
> > performance was worse than simply using --download-f-blas-lapack=1.
>
>
> Interesting. Does this statement include using just one thread, perhaps
> with
> a non-threaded MKL? Also, when you used threading, were you putting an MPI
> process on every core or were you making sure that you had enough cores for
> num_mpi_processes * num_mkl_threads?
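>
> (A minimal sketch of one way to enforce that budget, assuming all ranks
> share a single node and assuming MKL's C service API, mkl_set_dynamic and
> mkl_set_num_threads; the core count and names are illustrative only.)
>
> #include <mpi.h>
> #include <mkl.h>
> #include <stdio.h>
>
> int main(int argc, char **argv)
> {
>   int rank, size, nthreads;
>   int cores_per_node = 12;  /* illustrative: dual hex-core Xeon */
>
>   MPI_Init(&argc, &argv);
>   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>   MPI_Comm_size(MPI_COMM_WORLD, &size);
>
>   /* keep num_mpi_processes * num_mkl_threads <= physical cores */
>   nthreads = cores_per_node / size;
>   if (nthreads < 1) nthreads = 1;
>   mkl_set_dynamic(0);            /* like MKL_DYNAMIC=false */
>   mkl_set_num_threads(nthreads);
>
>   if (rank == 0)
>     printf("%d MPI ranks x %d MKL threads per rank\n", size, nthreads);
>   MPI_Finalize();
>   return 0;
> }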
>
> ------------------------------
>
> Message: 4
> Date: Tue, 15 Mar 2011 17:20:25 -0500
> From: Natarajan CS <csnataraj at gmail.com>
> Subject: Re: [petsc-users] Building with MKL 10.3
>
> Thanks Eric and Rob.
>
> Indeed! Was MKL_DYNAMIC set to its default (true)? It looks like using 1
> thread per core (sequential MKL) is the right baseline. I would expect
> that, for a fixed number of cores, running num_mpi_processes *
> num_mkl_threads = #cores performs no better than running
> num_mpi_processes = #cores with sequential MKL, unless some cache effects
> come into play (I am not sure which; I would think the MKL installation
> should weed those issues out).
>
> P.S.:
> Out of curiosity, have you also tested your app on Nehalem? Any difference
> between Nehalem and Westmere for similar bandwidth?
>
> ------------------------------
>
> Message: 5
> Date: Tue, 15 Mar 2011 15:32:44 -0700
> From: "Rob Ellis" <Robert.G.Ellis at Shaw.ca>
> Subject: Re: [petsc-users] Building with MKL 10.3
> To: "'PETSc users list'" <petsc-users at mcs.anl.gov>
> Message-ID: <006301cbe360$e77b63a0$b6722ae0$@G.Ellis at Shaw.ca>
> Content-Type: text/plain; charset="us-ascii"
>
> Yes, MKL_DYNAMIC was set to true. No, I haven't tested on Nehalem. I'm
> currently comparing sequential MKL with --download-f-blas-lapack=1.
>
> Rob
>

