[petsc-users] Different solution while running in parallel
Matthew Knepley
knepley at gmail.com
Sun Nov 27 16:43:00 CST 2022
On Thu, Nov 17, 2022 at 2:34 PM Karthikeyan Chockalingam - STFC UKRI <
karthikeyan.chockalingam at stfc.ac.uk> wrote:
> I have tried, but I am not sure how to set "-pc_svd_monitor" since I have not
> yet set up command-line options. (I am currently using PETSc within
> another framework.)
>
Sorry, this email got lost for me. You just call:
PetscOptionsSetValue(NULL, "-pc_svd_monitor", "1");
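For example, a rough (untested) sketch, assuming ksp already exists and has its
operators set; the SetValue() calls have to come before KSPSetFromOptions() so
that the options are picked up:

  ierr = PetscOptionsSetValue(NULL, "-pc_type", "svd");CHKERRQ(ierr);      /* switch the PC to SVD */
  ierr = PetscOptionsSetValue(NULL, "-pc_svd_monitor", "1");CHKERRQ(ierr); /* print the singular values */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);                             /* read the options database */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);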
>
>
> KSP ksp;
>
> PC pc;
>
> KSPCreate(PETSC_COMM_WORLD, &ksp);
>
> KSPSetOperators(ksp, A, A);
>
>
>
> ierr = KSPSetType(ksp,KSPPREONLY);CHKERRQ(ierr);
>
> ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
>
> ierr = PCSetType(pc,PCSVD);CHKERRQ(ierr);
>
> ierr = PetscOptionsSetValue(NULL, "-pc_svd_monitor", NULL); CHKERRQ(ierr);
> -- is this right?
>
> ierr = PCFactorSetMatSolverType(pc,MATSOLVERMUMPS);CHKERRQ(ierr);
>
> ierr = KSPView(ksp, PETSC_VIEWER_STDOUT_SELF);CHKERRQ(ierr);
>
>
>
> KSPSolve(ksp, b, x);
>
>
>
> ierr = KSPMonitorTrueResidual(ksp,its,norm,vf);CHKERRQ(ierr);
>
> ierr = KSPMonitorSingularValue(ksp, its, norm, vf);CHKERRQ(ierr);
>
>
>
>
>
> Yes, now using PCSVD both the serial and parallel versions produce the same
> result.
>
>
>
> (i) What does this imply?
>
Your matrix is singular.
> (ii) Would I be able to solve using CG preconditioned
> with Hypre as I scale the problem?
>
Only if Hypre uses SVD on the coarse grid. I am not sure they can do that.
> (iii) I have not built PETSc with SLEPc – can I still use
> PCSVD?
>
Yes.
> (iv) Can I set the KSP type, PC type, KSP monitor, etc., using
> PetscOptionsSetValue instead of code? If so, how would the above code
> translate? That would be very helpful.
>
Yes, you can use SetValue(). I do not understand the rest of the question.
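If you mean replacing the explicit KSPSetType()/PCSetType() calls, a minimal
(untested) sketch along those lines would be:

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  /* put the solver choices in the options database instead of hard-coding them */
  ierr = PetscOptionsSetValue(NULL, "-ksp_type", "preonly");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-pc_type", "svd");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-pc_svd_monitor", "1");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-ksp_view", "1");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-ksp_monitor_true_residual", "1");CHKERRQ(ierr);
  /* KSPSetFromOptions() reads the database; the monitors and the viewer are
     then run automatically by KSPSolve(), so you do not call them by hand */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

The matrix and vector setup stays exactly as you have it.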
Thanks,
Matt
> Many thanks.
>
>
>
> Best,
>
> Karthik.
>
>
>
> From: Matthew Knepley <knepley at gmail.com>
> Date: Thursday, 17 November 2022 at 17:48
> To: Chockalingam, Karthikeyan (STFC,DL,HC) <karthikeyan.chockalingam at stfc.ac.uk>
> Cc: Zhang, Hong <hzhang at mcs.anl.gov>, petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Different solution while running in parallel
>
> On Thu, Nov 17, 2022 at 11:13 AM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalingam at stfc.ac.uk> wrote:
>
> Hi Matt,
>
>
>
> I tested two sizes manually for the Poisson problem with homogeneous
> Dirichlet boundary conditions (on all boundary nodes), and both
> produced the right result when run serially using PCLU:
>
>
>
> 1. 2 elements x 2 elements (9 nodes in total, but only 1 free dof)
>
> A = 10.6667 b = 4 x = 0.375
>
> 2. 3 elements x 3 elements (16 nodes in total, but only 4 free dofs)
>
> A = [ 10.6667  -1.33333  -1.33333  -1.33333
>      -1.33333   10.6667  -1.33333  -1.33333
>      -1.33333  -1.33333   10.6667  -1.33333
>      -1.33333  -1.33333  -1.33333   10.6667 ]
>
>
>
> b = (4 4 4 4)^T
>
> x = (0.6 0.6 0.6 0.6)^T
>
>
>
> Since it is solvable, I am not sure how the system can be singular.
>
>
>
> I have attached the runs for case (2) on one and two cores. The parallel
> run produces a zero vector for x.
>
>
>
> I used MatZeroRowsColumns to set the Dirichlet boundary conditions by
> zeroing the entries in the matrix corresponding to the boundary nodes.
>
>
>
>
>
> Please please please run the original thing with the options I suggested:
>
>
>
> -pc_type svd -pc_svd_monitor
>
>
>
> This will print out all the singular values of the matrix and solve it
> using SVD.
>
>
>
> Thanks,
>
>
>
> Matt
>
>
>
> Best,
>
> Karthik.
>
>
>
>
>
>
>
> From: Matthew Knepley <knepley at gmail.com>
> Date: Thursday, 17 November 2022 at 15:16
> To: Chockalingam, Karthikeyan (STFC,DL,HC) <karthikeyan.chockalingam at stfc.ac.uk>
> Cc: Zhang, Hong <hzhang at mcs.anl.gov>, petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> Subject: Re: [petsc-users] Different solution while running in parallel
>
> Using options instead of code will make your life much easier.
>
>
>
> Two things are wrong here:
>
>
>
> 1) Your solver does no iterations because the initial residual is very
> small, 5.493080158227e-15, so the LU does not matter.
>
> In order to check the condition number of your system, run with
> -pc_type svd -pc_svd_monitor
>
>
>
> 2) Your parallel run also does no iterations
>
>
>
> 0 KSP none resid norm 6.951601853367e-310 true resid norm 1.058300524426e+01 ||r(i)||/||b|| 8.819171036882e-01
>
>
>
> but the true residual is not small. That means that your system is
> singular, but you have given a consistent RHS.
>
>
>
> Thanks,
>
>
>
> Matt
>
>
>
> On Thu, Nov 17, 2022 at 9:37 AM Karthikeyan Chockalingam - STFC UKRI <
> karthikeyan.chockalingam at stfc.ac.uk> wrote:
>
> Hi Matt and Hong,
>
>
>
> Thank you for your response.
>
> I made the following changes to get the desired output:
>
>
>
> PetscReal norm; /* norm of solution error */
>
> PetscInt its;
>
> KSPConvergedReason reason;
>
> PetscViewerAndFormat *vf;
>
> PetscViewerAndFormatCreate(PETSC_VIEWER_STDOUT_WORLD, PETSC_VIEWER_DEFAULT, &vf);
>
>
>
> ierr = KSPView(ksp, PETSC_VIEWER_STDOUT_SELF);CHKERRQ(ierr);
>
>
>
> KSPSolve(ksp, b, x);
>
>
>
> ierr = KSPMonitorTrueResidual(ksp,its,norm,vf);CHKERRQ(ierr);
>
> ierr = KSPMonitorSingularValue(ksp, its, norm, vf);CHKERRQ(ierr);
>
>
>
> I have attached the outputs from both the runs. As before, I am also
> printing A, b, and x.
>
>
>
> I wonder if it is a memory issue related to the MPI library employed. I am
> currently using OpenMPI – should I instead use MPICH?
>
>
>
> Kind regards,
>
> Karthik.
>
>
>
> From: Matthew Knepley <knepley at gmail.com>
> Date: Thursday, 17 November 2022 at 12:19
> To: Zhang, Hong <hzhang at mcs.anl.gov>
> Cc: petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>, Chockalingam, Karthikeyan (STFC,DL,HC) <karthikeyan.chockalingam at stfc.ac.uk>
> Subject: Re: [petsc-users] Different solution while running in parallel
>
> On Wed, Nov 16, 2022 at 9:07 PM Zhang, Hong via petsc-users <
> petsc-users at mcs.anl.gov> wrote:
>
> Karthik,
>
> Can you find out the condition number of your matrix?
>
>
>
> Also, run using
>
>
>
> -ksp_view -ksp_monitor_true_residual -ksp_converged_reason
>
>
>
> and send the two outputs.
>
>
>
> Thanks,
>
>
>
> Matt
>
>
>
> Hong
>
>
> ------------------------------
>
> From: petsc-users <petsc-users-bounces at mcs.anl.gov> on behalf of Karthikeyan Chockalingam - STFC UKRI via petsc-users <petsc-users at mcs.anl.gov>
> Sent: Wednesday, November 16, 2022 6:04 PM
> To: petsc-users at mcs.anl.gov <petsc-users at mcs.anl.gov>
> Subject: [petsc-users] Different solution while running in parallel
>
>
>
> Hello,
>
>
>
> I tried to solve a (FE-discretized) Poisson equation using PCLU. For
> some reason I am getting different solutions when running the problem on
> one and two cores. I have attached the output file (out.txt) from both
> runs. I am printing A, b, and x from both runs – while A and b are the
> same, the solutions are different.
>
>
>
> I am not sure what I am doing wrong.
>
>
>
> Below is my matrix, vector, and solve setup.
>
>
>
>
>
> Mat A;
>
> Vec b, x;
>
>
>
> ierr = MatCreate(PETSC_COMM_WORLD, &A); CHKERRQ(ierr);
>
> ierr = MatSetType(A, MATMPIAIJ); CHKERRQ(ierr);
>
> ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N); CHKERRQ(ierr);
>
> ierr = MatMPIAIJSetPreallocation(A, d_nz, NULL, o_nz, NULL); CHKERRQ(ierr);
>
> ierr = MatSetOption(A,MAT_SYMMETRIC,PETSC_TRUE); CHKERRQ(ierr);
>
> ierr = MatCreateVecs(A, &b, &x); CHKERRQ(ierr);
>
>
>
> KSP ksp;
>
> PC pc;
>
> KSPCreate(PETSC_COMM_WORLD, &ksp);
>
> KSPSetOperators(ksp, A, A);
>
> ierr = KSPSetType(ksp,KSPPREONLY);CHKERRQ(ierr);
>
> ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
>
> ierr = PCSetType(pc,PCLU);CHKERRQ(ierr);
>
> ierr = PCFactorSetMatSolverType(pc,MATSOLVERMUMPS);CHKERRQ(ierr);
>
> KSPSolve(ksp, b, x);
>
>
>
> Thank you for your help.
>
>
>
> Karthik.
>
>
>
> This email and any attachments are intended solely for the use of the
> named recipients. If you are not the intended recipient you must not use,
> disclose, copy or distribute this email or any of its attachments and
> should notify the sender immediately and delete this email from your
> system. UK Research and Innovation (UKRI) has taken every reasonable
> precaution to minimise risk of this email or any attachments containing
> viruses or malware but the recipient should carry out its own virus and
> malware checks before opening the attachments. UKRI does not accept any
> liability for any losses or damages which the recipient may sustain due to
> presence of any viruses.
>
>
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/