From knepley at gmail.com Wed Mar 1 09:25:47 2017 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 1 Mar 2017 09:25:47 -0600 Subject: [petsc-users] petsc4py and MPI.COMM_SELF.Spawn In-Reply-To: <350529B93F4E2F4497FD8DE4E86E84AA16F1DC6F@AUS1EXMBX04.ioinc.ioroot.tld> References: <350529B93F4E2F4497FD8DE4E86E84AA16F1DC6F@AUS1EXMBX04.ioinc.ioroot.tld> Message-ID: On Tue, Feb 28, 2017 at 6:17 PM, Rodrigo Felicio wrote: > Dear All, > > I am new to both pestc (petsc4py) and MPI. I am prototyping an > application that acts on data belonging to thousands of different spatial > locations, so I decided to use mpi4py to parallelize my computations over > said set of points. At each point, however, I have to solve linear > systems of the type A x = b, using least-squares. As matrices A are sparse > and large, I would like use petsc LSQR solver to obtain x. Trouble is that > I cannot get mpi4py to work with petsc4py when using MPI.Spawn(), so that I > could solve those linear systems using threads spawned from my main > algorithm. I am not sure if this is a problem with my petsc/mpi > installation, or if there is some inherent incompatibility between mpi4py > and petsc4py on spawned threads. Anyway, I would be really thankful to > anyone who could shed some light on this issue. For illustration, the > child and parent codes found on the pestc4py spawning demo folder fail for > me if I just include the petsc4py initialization in either of them. In > fact, just the introduction of the line " from petsc4py import PETSc" is > enough to make the code to hang and issue, upon keyboard termination, error > msgs of the type "spawned process group was unable to connect back to > parent port" . > MPI is really an SPMD framework, so it does not make sense to me that you separate the codes here. Normally you would designate one rank (say 0) to be the master, and then the others are the children. Also, startup is collective, so everyone must call it at the same time. Matt > Kind regards > Rodrigo > > ------------------------ > CHILD code: > ------------------------- > > # inclusion of petsc4py lines causes code to hang > import petsc4py > petsc4py.init(sys.argv) > from petsc4py import PETSc > from mpi4py import MPI > from array import array > > master = MPI.Comm.Get_parent() > nprocs = master.Get_size() > myrank = master.Get_rank() > > n = array('i', [0]) > master.Bcast([n, MPI.INT], root=0) > n = n[0] > > h = 1.0 / n > s = 0.0 > for i in range(myrank+1, n+1, nprocs): > x = h * (i - 0.5) > s += 4.0 / (1.0 + x**2) > pi = s * h > > pi = array('d', [pi]) > master.Reduce(sendbuf=[pi, MPI.DOUBLE], > recvbuf=None, > op=MPI.SUM, root=0) > > master.Disconnect() > > > --------------------- > MASTER CODE: > ---------------------- > from mpi4py import MPI > from array import array > from math import pi as PI > from sys import argv > > cmd = 'cpi-worker-py.exe' > if len(argv) > 1: cmd = argv[1] > print("%s -> %s" % (argv[0], cmd)) > > worker = MPI.COMM_SELF.Spawn(cmd, None, 5) > > n = array('i', [100]) > worker.Bcast([n,MPI.INT], root=MPI.ROOT) > > pi = array('d', [0.0]) > worker.Reduce(sendbuf=None, > recvbuf=[pi, MPI.DOUBLE], > op=MPI.SUM, root=MPI.ROOT) > pi = pi[0] > > worker.Disconnect() > > print('pi: %.16f, error: %.16f' % (pi, abs(PI-pi))) > > > > > > ________________________________ > > > This email and any files transmitted with it are confidential and are > intended solely for the use of the individual or entity to whom they are > addressed. 
If you are not the original recipient or the person responsible > for delivering the email to the intended recipient, be advised that you > have received this email in error, and that any use, dissemination, > forwarding, printing, or copying of this email is strictly prohibited. If > you received this email in error, please immediately notify the sender and > delete the original. > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Mar 1 10:40:46 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 1 Mar 2017 10:40:46 -0600 Subject: [petsc-users] petsc4py and MPI.COMM_SELF.Spawn In-Reply-To: <350529B93F4E2F4497FD8DE4E86E84AA16F1DC6F@AUS1EXMBX04.ioinc.ioroot.tld> References: <350529B93F4E2F4497FD8DE4E86E84AA16F1DC6F@AUS1EXMBX04.ioinc.ioroot.tld> Message-ID: <13D58838-9BD1-498D-8E7D-12FCFFE80957@mcs.anl.gov> You need to send all error messages (cut and paste) so people can see what has gone wrong. It certainly should work if the installs and environment are correct (PYTHONPATH is set?). > On Feb 28, 2017, at 6:17 PM, Rodrigo Felicio wrote: > > Dear All, > > I am new to both pestc (petsc4py) and MPI. I am prototyping an application that acts on data belonging to thousands of different spatial locations, so I decided to use mpi4py to parallelize my computations over said set of points. At each point, however, I have to solve linear systems of the type A x = b, using least-squares. As matrices A are sparse and large, I would like use petsc LSQR solver to obtain x. Trouble is that I cannot get mpi4py to work with petsc4py when using MPI.Spawn(), so that I could solve those linear systems using threads spawned from my main algorithm. I am not sure if this is a problem with my petsc/mpi installation, or if there is some inherent incompatibility between mpi4py and petsc4py on spawned threads. Anyway, I would be really thankful to anyone who could shed some light on this issue. For illustration, the child and parent codes found on the pestc4py spawning demo folder fail for me if I just include the petsc4py initialization in either of them. In fact, just the introduction of the line " from petsc4py import PETSc" is enough to make the code to hang and issue, upon keyboard termination, error msgs of the type "spawned process group was unable to connect back to parent port" . 
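For illustration of the least-squares step described in the paragraph above, here is a minimal petsc4py sketch of solving an overdetermined system A x = b with KSPLSQR on a per-process communicator (PETSc.COMM_SELF). The file name, matrix sizes, and entries are made up; only the solver setup is meant to be taken from it.

#---------------------------------
# lsqr_sketch.py (illustrative name)
#---------------------------------
import sys
import petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc

# Small made-up overdetermined system: m rows, n columns, m > n.
m, n = 8, 4
A = PETSc.Mat().createAIJ([m, n], comm=PETSc.COMM_SELF)
A.setUp()
for i in range(m):
    for j in range(n):
        A[i, j] = 1.0 / (1.0 + i + j)
A.assemble()

b = A.createVecLeft()    # length m (right-hand side)
x = A.createVecRight()   # length n (least-squares solution)
b.set(1.0)

ksp = PETSc.KSP().create(comm=PETSc.COMM_SELF)
ksp.setOperators(A)
ksp.setType(PETSc.KSP.Type.LSQR)         # least-squares solver
ksp.getPC().setType(PETSc.PC.Type.NONE)  # keep the sketch simple
ksp.setFromOptions()
ksp.solve(b, x)                          # x minimizes ||A x - b||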
> > Kind regards > Rodrigo > > ------------------------ > CHILD code: > ------------------------- > > # inclusion of petsc4py lines causes code to hang > import petsc4py > petsc4py.init(sys.argv) > from petsc4py import PETSc > from mpi4py import MPI > from array import array > > master = MPI.Comm.Get_parent() > nprocs = master.Get_size() > myrank = master.Get_rank() > > n = array('i', [0]) > master.Bcast([n, MPI.INT], root=0) > n = n[0] > > h = 1.0 / n > s = 0.0 > for i in range(myrank+1, n+1, nprocs): > x = h * (i - 0.5) > s += 4.0 / (1.0 + x**2) > pi = s * h > > pi = array('d', [pi]) > master.Reduce(sendbuf=[pi, MPI.DOUBLE], > recvbuf=None, > op=MPI.SUM, root=0) > > master.Disconnect() > > > --------------------- > MASTER CODE: > ---------------------- > from mpi4py import MPI > from array import array > from math import pi as PI > from sys import argv > > cmd = 'cpi-worker-py.exe' > if len(argv) > 1: cmd = argv[1] > print("%s -> %s" % (argv[0], cmd)) > > worker = MPI.COMM_SELF.Spawn(cmd, None, 5) > > n = array('i', [100]) > worker.Bcast([n,MPI.INT], root=MPI.ROOT) > > pi = array('d', [0.0]) > worker.Reduce(sendbuf=None, > recvbuf=[pi, MPI.DOUBLE], > op=MPI.SUM, root=MPI.ROOT) > pi = pi[0] > > worker.Disconnect() > > print('pi: %.16f, error: %.16f' % (pi, abs(PI-pi))) > > > > > > ________________________________ > > > This email and any files transmitted with it are confidential and are intended solely for the use of the individual or entity to whom they are addressed. If you are not the original recipient or the person responsible for delivering the email to the intended recipient, be advised that you have received this email in error, and that any use, dissemination, forwarding, printing, or copying of this email is strictly prohibited. If you received this email in error, please immediately notify the sender and delete the original. > From Rodrigo.Felicio at iongeo.com Wed Mar 1 11:40:36 2017 From: Rodrigo.Felicio at iongeo.com (Rodrigo Felicio) Date: Wed, 1 Mar 2017 17:40:36 +0000 Subject: [petsc-users] petsc4py and MPI.COMM_SELF.Spawn In-Reply-To: <13D58838-9BD1-498D-8E7D-12FCFFE80957@mcs.anl.gov> References: <350529B93F4E2F4497FD8DE4E86E84AA16F1DC6F@AUS1EXMBX04.ioinc.ioroot.tld>, <13D58838-9BD1-498D-8E7D-12FCFFE80957@mcs.anl.gov> Message-ID: <350529B93F4E2F4497FD8DE4E86E84AA16F1DCE8@AUS1EXMBX04.ioinc.ioroot.tld> Thanks, Barry and Matt, for your prompt responses. I really appreciated them. Sorry I forgot to mention that I am using petsc 3.6.1, petsc4py 3.6.0 and mpi4py 2.0.0. (and also tested with mpi4py 1.2.2) from running env I get: PYTHONPATH=/home/XXXXX/Enthought/Canopy_64bit/User/lib/python2.7/site-packages:/home/XXXXX/dev/myPy:/home/XXXXX/ipnotebooks and also PETSC_DIR=/home/XXXXX/mylocal/petsc-3.6.1 PETSC_ARCH=arch-linux2-c I_MPI_ROOT=/apps/tools/centos54-x86_64-intel14//impi/4.1.3.04 Funny thing is that only inside a python session I see the directory where both petsc4py and mpi4py are installed in the python path, i.e., only after import sys sys.path I see '/home/XXXX/.local/lib/python2.7/site-packages' Anyway, after running time mpirun -n 1 python ./dyn_mem_ex.py I get the error msg (once I killed the process running dyn_mem_ex.py. 
Mind you I edited out the the IP of the machine with XXX.XX.XX.XX) ===================================================================================== = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES = EXIT CODE: 15 = CLEANING UP REMAINING PROCESSES = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES ===================================================================================== Fatal error in PMPI_Init_thread: Invalid port, error stack: MPIR_Init_thread(674).....: MPID_Init(320)............: spawned process group was unable to connect back to the parent on port MPID_Comm_connect(206)....: MPIDI_Comm_connect(579)...: Named port tag#0$description#v1n29$port#51319$ifname#XXX.XX.XX.XX$ does not exist MPIDI_Comm_connect(415)...: dequeue_and_set_error(628): Communication error with rank 0 Fatal error in PMPI_Init_thread: Invalid port, error stack: MPIR_Init_thread(674)..: MPID_Init(320).........: spawned process group was unable to connect back to the parent on port MPID_Comm_connect(206).: MPIDI_Comm_connect(432): Named port tag#0$description#v1n29$port#51319$ifname#XXX.XX.XX.XX$ does not exist Fatal error in PMPI_Init_thread: Invalid port, error stack: MPIR_Init_thread(674)..: MPID_Init(320).........: spawned process group was unable to connect back to the parent on port MPID_Comm_connect(206).: MPIDI_Comm_connect(432): Named port tag#0$description#v1n29$port#51319$ifname#XXX.XX.XX.XX$ does not exist Fatal error in PMPI_Init_thread: Invalid port, error stack: MPIR_Init_thread(674)..: MPID_Init(320).........: spawned process group was unable to connect back to the parent on port MPID_Comm_connect(206).: MPIDI_Comm_connect(432): Named port tag#0$description#v1n29$port#51319$ifname#XXX.XX.XX.XX$ does not exist APPLICATION TERMINATED WITH THE EXIT STRING: Terminated (signal 15) real 0m9.987s user 0m36.714s sys 0m11.541s if I comment out the lines that import PETSc and initialize petsc4py on the code for the cpi.py (codes attached below) import pestc4py #petsc4py.init(sys.argv) #from petsc4py import PETSc from mpi4py import MPI Then it runs without problems and the output is time mpirun -n 1 python ./dyn_mem_ex.py proc 2 of 4 proc 3 of 4 proc 1 of 4 proc 0 of 4 proc 0 of 4, Adim=[10] proc 1 of 4, Adim=[10] proc 2 of 4, Adim=[10] proc 3 of 4, Adim=[10] Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] 3.14160098692 2.65258238441e-06 real 0m0.535s user 0m0.431s sys 0m0.633s the codes that I used in this example, have just minor modifications compared to old examples from petsc4py. 
For reference, I am also attaching them #--------------------------------- # dyn_mem_ex.py # ------------------------------------ import numpy import sys import petsc4py #petsc4py.init(sys.argv) #from petsc4py import PETSc from mpi4py import MPI mypath = '/home/XXX/study/mpi4py/' comm = MPI.COMM_SELF.Spawn(sys.executable, args=[mypath + 'cpi.py'], maxprocs=4) N = numpy.array(100, 'i') Adata = numpy.array(numpy.arange(10), dtype='f') Adim = numpy.array(Adata.shape[0], dtype='i') comm.Bcast([N, MPI.INT], root=MPI.ROOT) comm.Bcast([Adim, MPI.INT], root=MPI.ROOT) comm.Bcast([Adata, MPI.FLOAT], root=MPI.ROOT) PI = numpy.array(0.0, 'd') comm.Reduce(None, [PI, MPI.DOUBLE], op=MPI.SUM, root=MPI.ROOT) print(PI) print(PI/numpy.pi - 1.0) comm.Disconnect() #--------------------------------- # cpi.py # ------------------------------------ import numpy import sys, petsc4py #petsc4py.init(sys.argv) #from petsc4py import PETSc from mpi4py import MPI parent = MPI.Comm.Get_parent() size = parent.Get_size() rank = parent.Get_rank() print("proc {} of {} ".format(rank, size)) N = numpy.array(0, dtype='i') Adim = numpy.zeros(1, dtype='i') parent.Bcast([N, MPI.INT], root=0) parent.Bcast([Adim, MPI.INT], root=0) Adata = numpy.zeros(Adim[0], dtype='f') parent.Bcast([Adata, MPI.FLOAT],root=0) print("proc {} of {}, Adim={}".format(rank, size, Adim)) print("Adata = {}".format(Adata)) h = 1.0 / N; s = 0.0 for i in range(rank, N, size): x = h * (i + 0.5) #print(rank,size,Adim.shape) s += 4.0 / (1.0 + x**2) PI = numpy.array(s * h, dtype='d') parent.Reduce([PI, MPI.DOUBLE], None, op=MPI.SUM, root=0) parent.Disconnect() kind regards, Rodrigo ________________________________ This email and any files transmitted with it are confidential and are intended solely for the use of the individual or entity to whom they are addressed. If you are not the original recipient or the person responsible for delivering the email to the intended recipient, be advised that you have received this email in error, and that any use, dissemination, forwarding, printing, or copying of this email is strictly prohibited. If you received this email in error, please immediately notify the sender and delete the original. From lvella at gmail.com Wed Mar 1 12:59:40 2017 From: lvella at gmail.com (Lucas Clemente Vella) Date: Wed, 1 Mar 2017 15:59:40 -0300 Subject: [petsc-users] How to define blocks for PCFIELDSPLIT? Message-ID: I have a parallel AIJ matrix and I know exactly which element belongs to each one of the 4 submatrices blocks I want to use to solve the linear system. The blocks are not strided, because they have different number of elements. I understand that I must use PCFieldSplitSetIS(), since PCFieldSplitSetFields() is only for strided blocks. What I don't understand is how to create the IS structure I must pass to it. Each matrix coefficient is identified by a pair (i, j), but on IS creation functions, like ISCreateGeneral() and ISCreateBlock(), I am supposed to provide a one dimension set of indices. How does these indices relates to the matrix coefficients? Also, ISCreateGeneral() seems to create a single block, and ISCreateBlock() seems to create multiple blocks of the same size. How to create multiple blocks with different sizes? Thanks. -- Lucas Clemente Vella lvella at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Wed Mar 1 13:18:21 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 1 Mar 2017 13:18:21 -0600 Subject: [petsc-users] petsc4py and MPI.COMM_SELF.Spawn In-Reply-To: <350529B93F4E2F4497FD8DE4E86E84AA16F1DCE8@AUS1EXMBX04.ioinc.ioroot.tld> References: <350529B93F4E2F4497FD8DE4E86E84AA16F1DC6F@AUS1EXMBX04.ioinc.ioroot.tld> <13D58838-9BD1-498D-8E7D-12FCFFE80957@mcs.anl.gov> <350529B93F4E2F4497FD8DE4E86E84AA16F1DCE8@AUS1EXMBX04.ioinc.ioroot.tld> Message-ID: <68967805-86F0-422B-A5EA-FFC251EBB9AE@mcs.anl.gov> Try putting the import petsc4py and petsc4py init AFTER the import mpi4py > On Mar 1, 2017, at 11:40 AM, Rodrigo Felicio wrote: > > Thanks, Barry and Matt, for your prompt responses. I really appreciated them. > > Sorry I forgot to mention that I am using petsc 3.6.1, petsc4py 3.6.0 and mpi4py 2.0.0. (and also tested with mpi4py 1.2.2) > > from running env I get: > > PYTHONPATH=/home/XXXXX/Enthought/Canopy_64bit/User/lib/python2.7/site-packages:/home/XXXXX/dev/myPy:/home/XXXXX/ipnotebooks > > and also > > PETSC_DIR=/home/XXXXX/mylocal/petsc-3.6.1 > PETSC_ARCH=arch-linux2-c > I_MPI_ROOT=/apps/tools/centos54-x86_64-intel14//impi/4.1.3.04 > > > Funny thing is that only inside a python session I see the directory where both petsc4py and mpi4py are installed in the python path, i.e., > > only after > import sys > sys.path > > I see '/home/XXXX/.local/lib/python2.7/site-packages' > > > Anyway, after running > time mpirun -n 1 python ./dyn_mem_ex.py > > I get the error msg (once I killed the process running dyn_mem_ex.py. Mind you I edited out the the IP of the machine with XXX.XX.XX.XX) > ===================================================================================== > = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES > = EXIT CODE: 15 > = CLEANING UP REMAINING PROCESSES > = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES > ===================================================================================== > Fatal error in PMPI_Init_thread: Invalid port, error stack: > MPIR_Init_thread(674).....: > MPID_Init(320)............: spawned process group was unable to connect back to the parent on port > MPID_Comm_connect(206)....: > MPIDI_Comm_connect(579)...: Named port tag#0$description#v1n29$port#51319$ifname#XXX.XX.XX.XX$ does not exist > MPIDI_Comm_connect(415)...: > dequeue_and_set_error(628): Communication error with rank 0 > Fatal error in PMPI_Init_thread: Invalid port, error stack: > MPIR_Init_thread(674)..: > MPID_Init(320).........: spawned process group was unable to connect back to the parent on port > MPID_Comm_connect(206).: > MPIDI_Comm_connect(432): Named port tag#0$description#v1n29$port#51319$ifname#XXX.XX.XX.XX$ does not exist > Fatal error in PMPI_Init_thread: Invalid port, error stack: > MPIR_Init_thread(674)..: > MPID_Init(320).........: spawned process group was unable to connect back to the parent on port > MPID_Comm_connect(206).: > MPIDI_Comm_connect(432): Named port tag#0$description#v1n29$port#51319$ifname#XXX.XX.XX.XX$ does not exist > Fatal error in PMPI_Init_thread: Invalid port, error stack: > MPIR_Init_thread(674)..: > MPID_Init(320).........: spawned process group was unable to connect back to the parent on port > MPID_Comm_connect(206).: > MPIDI_Comm_connect(432): Named port tag#0$description#v1n29$port#51319$ifname#XXX.XX.XX.XX$ does not exist > APPLICATION TERMINATED WITH THE EXIT STRING: Terminated (signal 15) > > real 0m9.987s > user 0m36.714s > sys 0m11.541s > > if I comment out the lines that import PETSc and 
initialize petsc4py on the code for the cpi.py (codes attached below) > > import pestc4py > #petsc4py.init(sys.argv) > #from petsc4py import PETSc > from mpi4py import MPI > > Then it runs without problems and the output is > > time mpirun -n 1 python ./dyn_mem_ex.py > proc 2 of 4 > proc 3 of 4 proc 1 of 4 > > proc 0 of 4 > proc 0 of 4, Adim=[10] > proc 1 of 4, Adim=[10] > proc 2 of 4, Adim=[10] > proc 3 of 4, Adim=[10] > Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] > > Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] > > 3.14160098692 > 2.65258238441e-06 > > real 0m0.535s > user 0m0.431s > sys 0m0.633s > > > > the codes that I used in this example, have just minor modifications compared to old examples from petsc4py. For reference, I am also attaching them > > #--------------------------------- > # dyn_mem_ex.py > # ------------------------------------ > import numpy > import sys > import petsc4py > #petsc4py.init(sys.argv) > #from petsc4py import PETSc > from mpi4py import MPI > mypath = '/home/XXX/study/mpi4py/' > comm = MPI.COMM_SELF.Spawn(sys.executable, > args=[mypath + 'cpi.py'], > maxprocs=4) > > N = numpy.array(100, 'i') > Adata = numpy.array(numpy.arange(10), dtype='f') > Adim = numpy.array(Adata.shape[0], dtype='i') > > > comm.Bcast([N, MPI.INT], root=MPI.ROOT) > comm.Bcast([Adim, MPI.INT], root=MPI.ROOT) > comm.Bcast([Adata, MPI.FLOAT], root=MPI.ROOT) > PI = numpy.array(0.0, 'd') > comm.Reduce(None, [PI, MPI.DOUBLE], > op=MPI.SUM, root=MPI.ROOT) > print(PI) > print(PI/numpy.pi - 1.0) > > comm.Disconnect() > > > #--------------------------------- > # cpi.py > # ------------------------------------ > > import numpy > import sys, petsc4py > #petsc4py.init(sys.argv) > #from petsc4py import PETSc > from mpi4py import MPI > > parent = MPI.Comm.Get_parent() > size = parent.Get_size() > rank = parent.Get_rank() > > print("proc {} of {} ".format(rank, size)) > N = numpy.array(0, dtype='i') > Adim = numpy.zeros(1, dtype='i') > > parent.Bcast([N, MPI.INT], root=0) > parent.Bcast([Adim, MPI.INT], root=0) > > Adata = numpy.zeros(Adim[0], dtype='f') > parent.Bcast([Adata, MPI.FLOAT],root=0) > print("proc {} of {}, Adim={}".format(rank, size, Adim)) > print("Adata = {}".format(Adata)) > h = 1.0 / N; s = 0.0 > for i in range(rank, N, size): > x = h * (i + 0.5) > #print(rank,size,Adim.shape) > s += 4.0 / (1.0 + x**2) > PI = numpy.array(s * h, dtype='d') > parent.Reduce([PI, MPI.DOUBLE], None, > op=MPI.SUM, root=0) > > parent.Disconnect() > > > kind regards, > Rodrigo > > > ________________________________ > > > This email and any files transmitted with it are confidential and are intended solely for the use of the individual or entity to whom they are addressed. If you are not the original recipient or the person responsible for delivering the email to the intended recipient, be advised that you have received this email in error, and that any use, dissemination, forwarding, printing, or copying of this email is strictly prohibited. If you received this email in error, please immediately notify the sender and delete the original. > From bsmith at mcs.anl.gov Wed Mar 1 13:26:34 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 1 Mar 2017 13:26:34 -0600 Subject: [petsc-users] How to define blocks for PCFIELDSPLIT? 
In-Reply-To: References: Message-ID: <683306C3-ED39-4B94-8150-ECB7AF8DC06C@mcs.anl.gov> > On Mar 1, 2017, at 12:59 PM, Lucas Clemente Vella wrote: > > I have a parallel AIJ matrix and I know exactly which element belongs to each one of the 4 submatrices blocks I want to use to solve the linear system. The blocks are not strided, because they have different number of elements. > > I understand that I must use PCFieldSplitSetIS(), since PCFieldSplitSetFields() is only for strided blocks. What I don't understand is how to create the IS structure I must pass to it. > > Each matrix coefficient is identified by a pair (i, j), but on IS creation functions, like ISCreateGeneral() and ISCreateBlock(), I am supposed to provide a one dimension set of indices. How does these indices relates to the matrix coefficients? PCFieldSplitSetIS() always indicates SQUARE blocks along the diagonal of the original matrix. Hence you need only one IS to define a block, you don't need one for the columns and one for the rows. The IS is telling what rows AND columns you want in the block. > Also, ISCreateGeneral() seems to create a single block, and ISCreateBlock() seems to create multiple blocks of the same size. ISCreateBlock() does not create multi blocks, it creates a single IS that has "block structure", for example 0,1, 3, 4, 6, 7, 9,10, .... > How to create multiple blocks with different sizes? ISCreateGeneral(). > > Thanks. > > -- > Lucas Clemente Vella > lvella at gmail.com From Rodrigo.Felicio at iongeo.com Wed Mar 1 14:31:31 2017 From: Rodrigo.Felicio at iongeo.com (Rodrigo Felicio) Date: Wed, 1 Mar 2017 20:31:31 +0000 Subject: [petsc-users] petsc4py and MPI.COMM_SELF.Spawn In-Reply-To: <68967805-86F0-422B-A5EA-FFC251EBB9AE@mcs.anl.gov> References: <350529B93F4E2F4497FD8DE4E86E84AA16F1DC6F@AUS1EXMBX04.ioinc.ioroot.tld> <13D58838-9BD1-498D-8E7D-12FCFFE80957@mcs.anl.gov> <350529B93F4E2F4497FD8DE4E86E84AA16F1DCE8@AUS1EXMBX04.ioinc.ioroot.tld>, <68967805-86F0-422B-A5EA-FFC251EBB9AE@mcs.anl.gov> Message-ID: <350529B93F4E2F4497FD8DE4E86E84AA16F1DD69@AUS1EXMBX04.ioinc.ioroot.tld> I thought I had tried that as well with no success before, but this time it worked, despite some persistent error msgs related to PMI_finalize: time mpirun -n 1 python dyn_mem_ex.py proc 2 of 4 proc 3 of 4 proc 1 of 4 proc 0 of 4 proc 1 of 4, Adim=[10] proc 2 of 4, Adim=[10] proc 0 of 4, Adim=[10] proc 3 of 4, Adim=[10] Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] 3.14160098692 2.65258238441e-06 [cli_0]: write_line error; fd=12 buf=:cmd=finalize : system msg for write_line failure : Bad file descriptor Fatal error in MPI_Finalize: Other MPI error, error stack: MPI_Finalize(281).....: MPI_Finalize failed MPI_Finalize(209).....: MPID_Finalize(133)....: MPIDI_PG_Finalize(106): PMI_Finalize failed, error -1 real 0m0.586s user 0m0.536s sys 0m0.613s Best, Rodrigo ________________________________ This email and any files transmitted with it are confidential and are intended solely for the use of the individual or entity to whom they are addressed. If you are not the original recipient or the person responsible for delivering the email to the intended recipient, be advised that you have received this email in error, and that any use, dissemination, forwarding, printing, or copying of this email is strictly prohibited. If you received this email in error, please immediately notify the sender and delete the original. 
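For reference, a minimal sketch of the import ordering tried above: mpi4py is imported (and MPI initialized) first, then petsc4py is initialized. Whether this is sufficient on the spawned side is discussed in the follow-up message below.

#---------------------------------
# import_order_sketch.py (illustrative name)
#---------------------------------
# Initialize mpi4py, and hence MPI, before petsc4py, so that PETSc
# attaches to the already running MPI instead of initializing it itself.
from mpi4py import MPI       # MPI_Init_thread happens on this import

import sys
import petsc4py
petsc4py.init(sys.argv)      # PETSc picks up the existing MPI
from petsc4py import PETSc

comm = MPI.COMM_WORLD
print("rank {} of {}: PETSc version {}".format(
    comm.Get_rank(), comm.Get_size(), PETSc.Sys.getVersion()))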
From Rodrigo.Felicio at iongeo.com Wed Mar 1 14:51:19 2017 From: Rodrigo.Felicio at iongeo.com (Rodrigo Felicio) Date: Wed, 1 Mar 2017 20:51:19 +0000 Subject: [petsc-users] petsc4py and MPI.COMM_SELF.Spawn In-Reply-To: <350529B93F4E2F4497FD8DE4E86E84AA16F1DD69@AUS1EXMBX04.ioinc.ioroot.tld> References: <350529B93F4E2F4497FD8DE4E86E84AA16F1DC6F@AUS1EXMBX04.ioinc.ioroot.tld> <13D58838-9BD1-498D-8E7D-12FCFFE80957@mcs.anl.gov> <350529B93F4E2F4497FD8DE4E86E84AA16F1DCE8@AUS1EXMBX04.ioinc.ioroot.tld>, <68967805-86F0-422B-A5EA-FFC251EBB9AE@mcs.anl.gov>, <350529B93F4E2F4497FD8DE4E86E84AA16F1DD69@AUS1EXMBX04.ioinc.ioroot.tld> Message-ID: <350529B93F4E2F4497FD8DE4E86E84AA16F1DD7D@AUS1EXMBX04.ioinc.ioroot.tld> Sorry, I spoke too soon... Reversing the order between mpi4py and petsc4py imports does work *only* on the master code side, but not on the child process code side. In that case, the program hangs after the children processes are fired up and fails the same way as reported before... cheers Rodrigo ________________________________________ From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] on behalf of Rodrigo Felicio [Rodrigo.Felicio at iongeo.com] Sent: Wednesday, March 01, 2017 2:31 PM To: Barry Smith Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] petsc4py and MPI.COMM_SELF.Spawn I thought I had tried that as well with no success before, but this time it worked, despite some persistent error msgs related to PMI_finalize: time mpirun -n 1 python dyn_mem_ex.py proc 2 of 4 proc 3 of 4 proc 1 of 4 proc 0 of 4 proc 1 of 4, Adim=[10] proc 2 of 4, Adim=[10] proc 0 of 4, Adim=[10] proc 3 of 4, Adim=[10] Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] 3.14160098692 2.65258238441e-06 [cli_0]: write_line error; fd=12 buf=:cmd=finalize : system msg for write_line failure : Bad file descriptor Fatal error in MPI_Finalize: Other MPI error, error stack: MPI_Finalize(281).....: MPI_Finalize failed MPI_Finalize(209).....: MPID_Finalize(133)....: MPIDI_PG_Finalize(106): PMI_Finalize failed, error -1 real 0m0.586s user 0m0.536s sys 0m0.613s Best, Rodrigo ________________________________ This email and any files transmitted with it are confidential and are intended solely for the use of the individual or entity to whom they are addressed. If you are not the original recipient or the person responsible for delivering the email to the intended recipient, be advised that you have received this email in error, and that any use, dissemination, forwarding, printing, or copying of this email is strictly prohibited. If you received this email in error, please immediately notify the sender and delete the original. ________________________________ This email and any files transmitted with it are confidential and are intended solely for the use of the individual or entity to whom they are addressed. If you are not the original recipient or the person responsible for delivering the email to the intended recipient, be advised that you have received this email in error, and that any use, dissemination, forwarding, printing, or copying of this email is strictly prohibited. If you received this email in error, please immediately notify the sender and delete the original. 
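Returning to the PCFIELDSPLIT question above, here is a minimal petsc4py sketch of defining two diagonal blocks of different sizes, one index set per block. It assumes the petsc4py wrappers IS.createGeneral and PC.setFieldSplitIS; the index lists are made up and the operator and solve calls are only indicated.

#---------------------------------
# fieldsplit_sketch.py (illustrative name)
#---------------------------------
import sys
import petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc

comm = PETSc.COMM_WORLD

# One IS per split, listing the global rows-and-columns of that block;
# the blocks may have different sizes.
is_u = PETSc.IS().createGeneral([0, 1, 2, 3, 4, 5], comm=comm)
is_p = PETSc.IS().createGeneral([6, 7, 8], comm=comm)

ksp = PETSc.KSP().create(comm=comm)
pc = ksp.getPC()
pc.setType(PETSc.PC.Type.FIELDSPLIT)
pc.setFieldSplitIS(("u", is_u), ("p", is_p))
# ksp.setOperators(A); ksp.setFromOptions(); ksp.solve(b, x)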
From knepley at gmail.com Thu Mar 2 08:10:48 2017 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 2 Mar 2017 08:10:48 -0600 Subject: [petsc-users] petsc4py and MPI.COMM_SELF.Spawn In-Reply-To: <350529B93F4E2F4497FD8DE4E86E84AA16F1DD7D@AUS1EXMBX04.ioinc.ioroot.tld> References: <350529B93F4E2F4497FD8DE4E86E84AA16F1DC6F@AUS1EXMBX04.ioinc.ioroot.tld> <13D58838-9BD1-498D-8E7D-12FCFFE80957@mcs.anl.gov> <350529B93F4E2F4497FD8DE4E86E84AA16F1DCE8@AUS1EXMBX04.ioinc.ioroot.tld> <68967805-86F0-422B-A5EA-FFC251EBB9AE@mcs.anl.gov> <350529B93F4E2F4497FD8DE4E86E84AA16F1DD69@AUS1EXMBX04.ioinc.ioroot.tld> <350529B93F4E2F4497FD8DE4E86E84AA16F1DD7D@AUS1EXMBX04.ioinc.ioroot.tld> Message-ID: On Wed, Mar 1, 2017 at 2:51 PM, Rodrigo Felicio wrote: > Sorry, I spoke too soon... > Reversing the order between mpi4py and petsc4py imports does work *only* > on the master code side, but not on the child process code side. In that > case, the program hangs after the children processes are fired up and fails > the same way as reported before... > Again, I have no idea what you mean here. I do not think you can separately run the two codes. How will the PMI manager know that these two separate processes should be in the same communicator (WORLD). It makes no sense to me. In MPI, you need to write the master and child in the same code, with a switch for the master rank. Matt > cheers > Rodrigo > ________________________________________ > From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] > on behalf of Rodrigo Felicio [Rodrigo.Felicio at iongeo.com] > Sent: Wednesday, March 01, 2017 2:31 PM > To: Barry Smith > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] petsc4py and MPI.COMM_SELF.Spawn > > I thought I had tried that as well with no success before, but this time > it worked, despite some persistent error msgs related to PMI_finalize: > > time mpirun -n 1 python dyn_mem_ex.py > proc 2 of 4 proc 3 of 4 > proc 1 of 4 > proc 0 of 4 > > proc 1 of 4, Adim=[10] > proc 2 of 4, Adim=[10] > proc 0 of 4, Adim=[10] > proc 3 of 4, Adim=[10] > Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] > Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]Adata = [ 0. 1. 2. 3. > 4. 5. 6. 7. 8. 9.] > > Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] > 3.14160098692 > 2.65258238441e-06 > [cli_0]: write_line error; fd=12 buf=:cmd=finalize > : > system msg for write_line failure : Bad file descriptor > Fatal error in MPI_Finalize: Other MPI error, error stack: > MPI_Finalize(281).....: MPI_Finalize failed > MPI_Finalize(209).....: > MPID_Finalize(133)....: > MPIDI_PG_Finalize(106): PMI_Finalize failed, error -1 > > real 0m0.586s > user 0m0.536s > sys 0m0.613s > > Best, > Rodrigo > > ________________________________ > > > This email and any files transmitted with it are confidential and are > intended solely for the use of the individual or entity to whom they are > addressed. If you are not the original recipient or the person responsible > for delivering the email to the intended recipient, be advised that you > have received this email in error, and that any use, dissemination, > forwarding, printing, or copying of this email is strictly prohibited. If > you received this email in error, please immediately notify the sender and > delete the original. > > > ________________________________ > > > This email and any files transmitted with it are confidential and are > intended solely for the use of the individual or entity to whom they are > addressed. 
If you are not the original recipient or the person responsible > for delivering the email to the intended recipient, be advised that you > have received this email in error, and that any use, dissemination, > forwarding, printing, or copying of this email is strictly prohibited. If > you received this email in error, please immediately notify the sender and > delete the original. > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From Rodrigo.Felicio at iongeo.com Thu Mar 2 09:09:51 2017 From: Rodrigo.Felicio at iongeo.com (Rodrigo Felicio) Date: Thu, 2 Mar 2017 15:09:51 +0000 Subject: [petsc-users] petsc4py and MPI.COMM_SELF.Spawn In-Reply-To: References: <350529B93F4E2F4497FD8DE4E86E84AA16F1DC6F@AUS1EXMBX04.ioinc.ioroot.tld> <13D58838-9BD1-498D-8E7D-12FCFFE80957@mcs.anl.gov> <350529B93F4E2F4497FD8DE4E86E84AA16F1DCE8@AUS1EXMBX04.ioinc.ioroot.tld> <68967805-86F0-422B-A5EA-FFC251EBB9AE@mcs.anl.gov> <350529B93F4E2F4497FD8DE4E86E84AA16F1DD69@AUS1EXMBX04.ioinc.ioroot.tld> <350529B93F4E2F4497FD8DE4E86E84AA16F1DD7D@AUS1EXMBX04.ioinc.ioroot.tld> Message-ID: <350529B93F4E2F4497FD8DE4E86E84AA16F1DDF5@AUS1EXMBX04.ioinc.ioroot.tld> Thanks, Matt, I see your point. My problem is that I need to have many ?masters? and each of these needs to have their own and distinct group of ?children? processes to run some linear algebra in parallel. Being new to MPI, I thought that the Spawn approach would help with that, because I already had a PETSc program that could solve the associated Least-squares problem. I thought that I only needed to adapt that code so that instead of calling it at the prompt, I could integrate it to a MPI code using MPI.SPAWN function. Since for some reason the MPI.SPAWN is not working correctly for me I am now seeking to solve my problem by splitting the COMM_WORLD instead, which I believe is more in line with your suggestion. Kind regards Rodrigo From: Matthew Knepley [mailto:knepley at gmail.com] Sent: Thursday, March 02, 2017 8:11 AM To: Rodrigo Felicio Cc: Barry Smith; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] petsc4py and MPI.COMM_SELF.Spawn On Wed, Mar 1, 2017 at 2:51 PM, Rodrigo Felicio > wrote: Sorry, I spoke too soon... Reversing the order between mpi4py and petsc4py imports does work *only* on the master code side, but not on the child process code side. In that case, the program hangs after the children processes are fired up and fails the same way as reported before... Again, I have no idea what you mean here. I do not think you can separately run the two codes. How will the PMI manager know that these two separate processes should be in the same communicator (WORLD). It makes no sense to me. In MPI, you need to write the master and child in the same code, with a switch for the master rank. 
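A minimal mpi4py sketch of the communicator-splitting alternative mentioned above: one SPMD program, the ranks of COMM_WORLD divided into fixed-size groups, and the lowest rank of each group acting as that group's "master". The group size and the scattered work items are made up.

#---------------------------------
# comm_split_sketch.py (illustrative name)
#---------------------------------
from mpi4py import MPI

world = MPI.COMM_WORLD
group_size = 4                                  # arbitrary choice
color = world.Get_rank() // group_size          # which group this rank joins
group = world.Split(color, key=world.Get_rank())

if group.Get_rank() == 0:
    # "master" branch for this group: one work item per group member
    work = list(range(group.Get_size()))
else:
    work = None

item = group.scatter(work, root=0)
# ... the per-point least-squares solve would go here, working on the
# sub-communicator "group" instead of COMM_WORLD ...
result = group.gather(item, root=0)
if group.Get_rank() == 0:
    print("group {}: {}".format(color, result))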
Matt cheers Rodrigo ________________________________________ From: petsc-users-bounces at mcs.anl.gov [petsc-users-bounces at mcs.anl.gov] on behalf of Rodrigo Felicio [Rodrigo.Felicio at iongeo.com] Sent: Wednesday, March 01, 2017 2:31 PM To: Barry Smith Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] petsc4py and MPI.COMM_SELF.Spawn I thought I had tried that as well with no success before, but this time it worked, despite some persistent error msgs related to PMI_finalize: time mpirun -n 1 python dyn_mem_ex.py proc 2 of 4 proc 3 of 4 proc 1 of 4 proc 0 of 4 proc 1 of 4, Adim=[10] proc 2 of 4, Adim=[10] proc 0 of 4, Adim=[10] proc 3 of 4, Adim=[10] Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] Adata = [ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] 3.14160098692 2.65258238441e-06 [cli_0]: write_line error; fd=12 buf=:cmd=finalize : system msg for write_line failure : Bad file descriptor Fatal error in MPI_Finalize: Other MPI error, error stack: MPI_Finalize(281).....: MPI_Finalize failed MPI_Finalize(209).....: MPID_Finalize(133)....: MPIDI_PG_Finalize(106): PMI_Finalize failed, error -1 real 0m0.586s user 0m0.536s sys 0m0.613s Best, Rodrigo ________________________________ This email and any files transmitted with it are confidential and are intended solely for the use of the individual or entity to whom they are addressed. If you are not the original recipient or the person responsible for delivering the email to the intended recipient, be advised that you have received this email in error, and that any use, dissemination, forwarding, printing, or copying of this email is strictly prohibited. If you received this email in error, please immediately notify the sender and delete the original. ________________________________ This email and any files transmitted with it are confidential and are intended solely for the use of the individual or entity to whom they are addressed. If you are not the original recipient or the person responsible for delivering the email to the intended recipient, be advised that you have received this email in error, and that any use, dissemination, forwarding, printing, or copying of this email is strictly prohibited. If you received this email in error, please immediately notify the sender and delete the original. -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener ________________________________ This email and any files transmitted with it are confidential and are intended solely for the use of the individual or entity to whom they are addressed. If you are not the original recipient or the person responsible for delivering the email to the intended recipient, be advised that you have received this email in error, and that any use, dissemination, forwarding, printing, or copying of this email is strictly prohibited. If you received this email in error, please immediately notify the sender and delete the original. -------------- next part -------------- An HTML attachment was scrubbed... URL: From imilian.hartig at gmail.com Fri Mar 3 11:37:28 2017 From: imilian.hartig at gmail.com (Maximilian Hartig) Date: Fri, 3 Mar 2017 18:37:28 +0100 Subject: [petsc-users] Problems imposing boundary conditions Message-ID: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com> Hello, I am working on a transient structural FEM code with PETSc. 
I managed to create a slow but functioning program with the use of petscFE and a TS solver. The code runs fine until I try to restrict movement in all three spatial directions for one face. I then get the error which is attached below. So apparently DMPlexMatSetClosure tries to write/read beyond what was priorly allocated. I do however not call MatSeqAIJSetPreallocation myself in the code. So I?m unsure where to start looking for the bug. In my understanding, PETSc should know from the DM how much space to allocate. Could you kindly give me a hint? Thanks, Max 0 SNES Function norm 2.508668036663e-06 [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Argument out of range [0]PETSC ERROR: New nonzero at (41754,5) caused a malloc Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn off this check [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Development GIT revision: v3.7.5-3223-g99077fc GIT Date: 2017-02-28 13:41:43 -0600 [0]PETSC ERROR: ./S3 on a arch-linux-gnu-intel named XXXXXXX by hartig Fri Mar 3 17:55:57 2017 [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-gnu-intel --with-cc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicc --with-cxx=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicpc --with-fc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiifort --download-ml [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 455 in /home/hartig/petsc/src/mat/impls/aij/seq/aij.c [0]PETSC ERROR: #2 MatSetValues() line 1270 in /home/hartig/petsc/src/mat/interface/matrix.c [0]PETSC ERROR: [0]ERROR in DMPlexMatSetClosure [0]mat for sieve point 60 [0]mat row indices[0] = 41754 [0]mat row indices[1] = 41755 [0]mat row indices[2] = 41756 [0]mat row indices[3] = 41760 [0]mat row indices[4] = 41761 [0]mat row indices[5] = 41762 [0]mat row indices[6] = 41766 [0]mat row indices[7] = -41768 [0]mat row indices[8] = 41767 [0]mat row indices[9] = 41771 [0]mat row indices[10] = -41773 [0]mat row indices[11] = 41772 [0]mat row indices[12] = 41776 [0]mat row indices[13] = 41777 [0]mat row indices[14] = 41778 [0]mat row indices[15] = 41782 [0]mat row indices[16] = -41784 [0]mat row indices[17] = 41783 [0]mat row indices[18] = 261 [0]mat row indices[19] = -263 [0]mat row indices[20] = 262 [0]mat row indices[21] = 24318 [0]mat row indices[22] = 24319 [0]mat row indices[23] = 24320 [0]mat row indices[24] = -7 [0]mat row indices[25] = -8 [0]mat row indices[26] = 6 [0]mat row indices[27] = 1630 [0]mat row indices[28] = -1632 [0]mat row indices[29] = 1631 [0]mat row indices[30] = 41757 [0]mat row indices[31] = 41758 [0]mat row indices[32] = 41759 [0]mat row indices[33] = 41763 [0]mat row indices[34] = 41764 [0]mat row indices[35] = 41765 [0]mat row indices[36] = 41768 [0]mat row indices[37] = 41769 [0]mat row indices[38] = 41770 [0]mat row indices[39] = 41773 [0]mat row indices[40] = 41774 [0]mat row indices[41] = 41775 [0]mat row indices[42] = 41779 [0]mat row indices[43] = 41780 [0]mat row indices[44] = 41781 [0]mat row indices[45] = 41784 [0]mat row indices[46] = 41785 [0]mat row indices[47] = 41786 [0]mat row indices[48] = 263 [0]mat row indices[49] = 264 [0]mat row indices[50] = 265 [0]mat row indices[51] = 24321 [0]mat row indices[52] = 24322 [0]mat row indices[53] = 24323 [0]mat row indices[54] = 5 [0]mat row indices[55] = 6 [0]mat row indices[56] = 7 [0]mat row indices[57] = 1632 [0]mat row indices[58] = 
1633
[0]mat row indices[59] = 1634
[0] ... (element matrix values for the 60 closure indices above, printed row by row) ...
0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0.
[0] 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0.
[0] 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10
[0]PETSC ERROR: #3 DMPlexMatSetClosure() line 5480 in /home/hartig/petsc/src/dm/impls/plex/plex.c
[0]PETSC ERROR: #4 DMPlexComputeJacobian_Internal() line 2301 in /home/hartig/petsc/src/snes/utils/dmplexsnes.c
[0]PETSC ERROR: #5 DMPlexTSComputeIJacobianFEM() line 233 in /home/hartig/petsc/src/ts/utils/dmplexts.c
[0]PETSC ERROR: #6 TSComputeIJacobian_DMLocal() line 131 in /home/hartig/petsc/src/ts/utils/dmlocalts.c
[0]PETSC ERROR: #7 TSComputeIJacobian() line 882 in /home/hartig/petsc/src/ts/interface/ts.c
[0]PETSC ERROR: #8 SNESTSFormJacobian_Theta() line 515 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c
[0]PETSC ERROR: #9 SNESTSFormJacobian() line 5044 in /home/hartig/petsc/src/ts/interface/ts.c
[0]PETSC ERROR: #10 SNESComputeJacobian() line 2276 in /home/hartig/petsc/src/snes/interface/snes.c
[0]PETSC ERROR: #11 SNESSolve_NEWTONLS() line 222 in /home/hartig/petsc/src/snes/impls/ls/ls.c
[0]PETSC ERROR: #12 SNESSolve() line 3967 in /home/hartig/petsc/src/snes/interface/snes.c
[0]PETSC ERROR: #13 TS_SNESSolve() line 171 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c
[0]PETSC ERROR: #14 TSStep_Theta() line 211 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c
[0]PETSC ERROR: #15 TSStep() line 3809 in /home/hartig/petsc/src/ts/interface/ts.c

From lukas.drinkt.thee at gmail.com Fri Mar 3 11:43:15 2017
From: lukas.drinkt.thee at gmail.com (Lukas van de Wiel)
Date: Fri, 3 Mar 2017 18:43:15 +0100
Subject: [petsc-users] Problems imposing boundary conditions
In-Reply-To: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com>
References: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com>
Message-ID:

You have apparently preallocated the non-zeroes of your matrix, and the
room was insufficient to accommodate all your equations.

What happened after you tried:

MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE)

Cheers
Lukas

On Fri, Mar 3, 2017 at 6:37 PM, Maximilian Hartig wrote:
> Hello,
>
> I am working on a transient structural FEM code with PETSc. I managed to
> create a slow but functioning program with the use of petscFE and a TS
> solver. The code runs fine until I try to restrict movement in all three
> spatial directions for one face. I then get the error which is attached
> below.
> So apparently DMPlexMatSetClosure tries to write/read beyond what was
> priorly allocated. I do however not call MatSeqAIJSetPreallocation myself
> in the code. So I'm unsure where to start looking for the bug. In my
> understanding, PETSc should know from the DM how much space to allocate.
> Could you kindly give me a hint?
>
> Thanks,
>
> Max
>
> [... PETSc error output and element matrix dump snipped ...]
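
For reference, the calls under discussion look roughly like the sketch below in a DMPlex/TS code. This is not code from the thread: the DM variable dm, the helper name, and the explicit use of DMCreateMatrix() are assumptions, and only the relevant calls are shown with abbreviated error checking. DMCreateMatrix() hands back a Jacobian preallocated from the DM's section and discretization, and setting MAT_NEW_NONZERO_ALLOCATION_ERR to PETSC_FALSE downgrades the "new nonzero caused a malloc" error so assembly can finish and the unexpected entries can be inspected.

#include <petsc.h>

/* Sketch only: build the Jacobian from the DM (so it inherits the
 * preallocation PETSc derives from the mesh and discretization) and
 * relax the new-nonzero check while hunting for entries that fall
 * outside the preallocated pattern. 'dm' is assumed to already carry
 * the PetscFE discretization and boundary conditions. */
static PetscErrorCode CreateRelaxedJacobian(DM dm, Mat *J)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMCreateMatrix(dm, J);CHKERRQ(ierr);
  /* Allow new nonzeros (slow mallocs) instead of raising an error. */
  ierr = MatSetOption(*J, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With the check relaxed, viewing the matrix with the ascii_info format (or running with -info) reports how many mallocs occurred during MatSetValues(), which indicates how far the preallocation was off.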

From Sander.Arens at ugent.be Fri Mar 3 11:56:19 2017
From: Sander.Arens at ugent.be (Sander Arens)
Date: Fri, 3 Mar 2017 18:56:19 +0100
Subject: [petsc-users] Problems imposing boundary conditions
In-Reply-To:
References: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com>
Message-ID:

Max,

I'm assuming you use DMPlex for your mesh? If so, did you only specify the
faces in the DMLabel (and not vertices or edges)? Do you get this error
only in parallel? If so, I can confirm this bug. I submitted a pull request
for this yesterday.

On 3 March 2017 at 18:43, Lukas van de Wiel wrote:
> You have apparently preallocated the non-zeroes of your matrix, and the
> room was insufficient to accommodate all your equations.
> > What happened after you tried: > > MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) > > > Cheers > Lukas > > On Fri, Mar 3, 2017 at 6:37 PM, Maximilian Hartig < > imilian.hartig at gmail.com> wrote: > >> Hello, >> >> I am working on a transient structural FEM code with PETSc. I managed to >> create a slow but functioning program with the use of petscFE and a TS >> solver. The code runs fine until I try to restrict movement in all three >> spatial directions for one face. I then get the error which is attached >> below. >> So apparently DMPlexMatSetClosure tries to write/read beyond what was >> priorly allocated. I do however not call MatSeqAIJSetPreallocation myself >> in the code. So I?m unsure where to start looking for the bug. In my >> understanding, PETSc should know from the DM how much space to allocate. >> Could you kindly give me a hint? >> >> Thanks, >> >> Max >> >> 0 SNES Function norm 2.508668036663e-06 >> [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Argument out of range >> [0]PETSC ERROR: New nonzero at (41754,5) caused a malloc >> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn >> off this check >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [0]PETSC ERROR: Petsc Development GIT revision: v3.7.5-3223-g99077fc GIT >> Date: 2017-02-28 13:41:43 -0600 >> [0]PETSC ERROR: ./S3 on a arch-linux-gnu-intel named XXXXXXX by hartig >> Fri Mar 3 17:55:57 2017 >> [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-gnu-intel >> --with-cc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicc >> --with-cxx=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicpc >> --with-fc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiifort >> --download-ml >> [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 455 in >> /home/hartig/petsc/src/mat/impls/aij/seq/aij.c >> [0]PETSC ERROR: #2 MatSetValues() line 1270 in >> /home/hartig/petsc/src/mat/interface/matrix.c >> [0]PETSC ERROR: [0]ERROR in DMPlexMatSetClosure >> [0]mat for sieve point 60 >> [0]mat row indices[0] = 41754 >> [0]mat row indices[1] = 41755 >> [0]mat row indices[2] = 41756 >> [0]mat row indices[3] = 41760 >> [0]mat row indices[4] = 41761 >> [0]mat row indices[5] = 41762 >> [0]mat row indices[6] = 41766 >> [0]mat row indices[7] = -41768 >> [0]mat row indices[8] = 41767 >> [0]mat row indices[9] = 41771 >> [0]mat row indices[10] = -41773 >> [0]mat row indices[11] = 41772 >> [0]mat row indices[12] = 41776 >> [0]mat row indices[13] = 41777 >> [0]mat row indices[14] = 41778 >> [0]mat row indices[15] = 41782 >> [0]mat row indices[16] = -41784 >> [0]mat row indices[17] = 41783 >> [0]mat row indices[18] = 261 >> [0]mat row indices[19] = -263 >> [0]mat row indices[20] = 262 >> [0]mat row indices[21] = 24318 >> [0]mat row indices[22] = 24319 >> [0]mat row indices[23] = 24320 >> [0]mat row indices[24] = -7 >> [0]mat row indices[25] = -8 >> [0]mat row indices[26] = 6 >> [0]mat row indices[27] = 1630 >> [0]mat row indices[28] = -1632 >> [0]mat row indices[29] = 1631 >> [0]mat row indices[30] = 41757 >> [0]mat row indices[31] = 41758 >> [0]mat row indices[32] = 41759 >> [0]mat row indices[33] = 41763 >> [0]mat row indices[34] = 41764 >> [0]mat row indices[35] = 41765 >> [0]mat row indices[36] = 41768 >> [0]mat row indices[37] = 41769 >> [0]mat row indices[38] = 41770 >> [0]mat row indices[39] = 41773 >> [0]mat row indices[40] = 41774 >> 
[0]mat row indices[41] = 41775 >> [0]mat row indices[42] = 41779 >> [0]mat row indices[43] = 41780 >> [0]mat row indices[44] = 41781 >> [0]mat row indices[45] = 41784 >> [0]mat row indices[46] = 41785 >> [0]mat row indices[47] = 41786 >> [0]mat row indices[48] = 263 >> [0]mat row indices[49] = 264 >> [0]mat row indices[50] = 265 >> [0]mat row indices[51] = 24321 >> [0]mat row indices[52] = 24322 >> [0]mat row indices[53] = 24323 >> [0]mat row indices[54] = 5 >> [0]mat row indices[55] = 6 >> [0]mat row indices[56] = 7 >> [0]mat row indices[57] = 1632 >> [0]mat row indices[58] = 1633 >> [0]mat row indices[59] = 1634 >> [0] 1.29801 0.0998428 -0.275225 1.18171 -0.0093323 0.055045 1.18146 >> 0.00525527 -0.11009 -0.588378 0.264666 -0.0275225 -2.39586 -0.210511 >> 0.22018 -0.621071 0.0500786 0.137613 -0.180869 -0.0974804 0.0344031 >> -0.0302673 -0.09 0. -0.145175 -0.00383346 -0.00688063 0.300442 -0.00868618 >> -0.0275225 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. >> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >> [0] 0.0998428 0.590663 -0.0320009 0.0270594 0.360043 0.0297282 -0.0965489 >> 0.270936 -0.0652389 0.32647 -0.171351 -0.00845384 -0.206902 -0.657189 >> 0.0279137 0.0500786 -0.197561 -0.0160508 -0.12748 -0.138996 0.0408591 -0.06 >> -0.105935 0.0192308 0.00161757 -0.0361182 0.0042968 -0.0141372 0.0855084 >> -0.000284088 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. >> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. >> [0] -0.275225 -0.0320009 0.527521 -0.055045 0.0285918 0.234796 -0.165135 >> -0.0658071 0.322754 0.0275225 -0.0207062 -0.114921 0.33027 0.0418706 >> -0.678455 0.137613 -0.0160508 -0.235826 0.0344031 0.0312437 -0.0845583 0. >> 0.0288462 -0.0302673 0.00688063 0.00443884 -0.0268103 -0.0412838 >> -0.000426133 0.0857668 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. >> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 >> [0] 1.18171 0.0270594 -0.055045 1.29651 -0.0821157 0.275225 1.1468 >> -0.0675282 0.11009 -0.637271 0.141058 -0.137613 -2.37741 -0.0649437 >> -0.22018 -0.588378 0.24647 0.0275225 -0.140937 -0.00838243 0.00688063 >> -0.0175533 -0.09 0. -0.16373 -0.0747355 -0.0344031 0.300255 -0.026882 >> 0.0275225 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 >> 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >> 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >> [0] -0.0093323 0.360043 0.0285918 -0.0821157 0.585404 -0.0263191 >> -0.205724 0.149643 0.0652389 0.141058 -0.254263 0.0452109 0.011448 >> -0.592598 -0.0279137 0.344666 -0.171351 -0.0207062 0.00616654 -0.0212853 >> -0.0115868 -0.06 -0.0614365 -0.0192308 -0.104736 -0.0790071 -0.0335691 >> -0.041431 0.084851 0.000284088 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >> [0] 0.055045 0.0297282 0.234796 0.275225 -0.0263191 0.526019 0.165135 >> 0.0658071 0.288099 -0.137613 0.0452109 -0.252027 -0.33027 -0.0418706 >> -0.660001 -0.0275225 -0.00845384 -0.114921 -0.00688063 -0.0117288 >> -0.0225723 0. -0.0288462 -0.0175533 -0.0344031 -0.0239537 -0.0674185 >> 0.0412838 0.000426133 0.085579 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 
4.17288e-11 0. 0. 4.17288e-11 0. 0. >> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 >> [0] 1.18146 -0.0965489 -0.165135 1.1468 -0.205724 0.165135 3.70665 >> 0.626591 3.1198e-14 -2.37741 0.0332522 -0.275225 -2.44501 -0.417727 >> 4.66207e-17 -2.39586 -0.148706 0.275225 0.283843 0.0476669 0.0137613 >> 0.00972927 0.06 0. 0.288268 0.0567649 -0.0137613 0.601523 0.0444318 >> -1.2387e-17 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. >> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >> [0] 0.00525527 0.270936 -0.0658071 -0.0675282 0.149643 0.0658071 0.626591 >> 1.29259 -0.02916 -0.0867478 -0.592598 -0.0413024 -0.417727 -0.829208 >> 6.46318e-18 -0.268706 -0.657189 0.0413024 0.03402 0.0715157 0.0179272 0.04 >> 0.0340524 1.77708e-18 0.0704117 0.0870061 0.0112328 0.0644318 0.17325 >> -1.41666e-19 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. >> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >> [0] -0.11009 -0.0652389 0.322754 0.11009 0.0652389 0.288099 3.12405e-14 >> -0.02916 1.21876 -0.275225 -0.0284819 -0.660001 9.50032e-17 9.55728e-18 >> -0.727604 0.275225 0.0284819 -0.678455 0.055045 0.0279687 0.0250605 0. >> 1.71863e-18 0.00972927 -0.055045 0.00119132 0.0294863 -1.47451e-17 >> -1.90582e-19 0.172172 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 >> 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. >> 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 >> [0] -0.588378 0.32647 0.0275225 -0.637271 0.141058 -0.137613 -2.37741 >> -0.0867478 -0.275225 3.68138 0.0265907 3.13395e-14 1.1468 -0.107528 0.11009 >> 1.18171 0.00886356 -3.13222e-14 -1.06248 -0.175069 0.158254 0.00657075 >> -0.03 0. 0.152747 -0.00526446 0.0344031 -1.50368 -0.0983732 0.0825675 >> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >> 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >> -1.56483e-11 0. 0. -1.04322e-11 0. 0. >> [0] 0.264666 -0.171351 -0.0207062 0.141058 -0.254263 0.0452109 0.0332522 >> -0.592598 -0.0284819 0.0265907 1.20415 -0.0932626 -0.165724 0.149643 >> 0.0524184 0.00886356 0.360043 0.0419805 -0.131422 -0.326529 0.0132913 -0.02 >> 0.0229976 -0.00641026 -0.0152645 0.0405681 -0.00489246 -0.14202 -0.432665 >> 0.000852265 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >> 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. >> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >> [0] -0.0275225 -0.00845384 -0.114921 -0.137613 0.0452109 -0.252027 >> -0.275225 -0.0413024 -0.660001 3.13785e-14 -0.0932626 1.19349 0.165135 >> 0.0786276 0.288099 -3.12866e-14 0.0163395 0.234796 0.116971 0.0128652 >> -0.322147 0. -0.00961538 0.00657075 0.0344031 -0.00168733 0.0564359 >> 0.123851 0.0012784 -0.430298 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >> 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >> [0] -2.39586 -0.206902 0.33027 -2.37741 0.011448 -0.33027 -2.44501 >> -0.417727 6.38053e-17 1.1468 -0.165724 0.165135 4.88671 0.435454 0. 1.18146 >> -0.0565489 -0.165135 0.307839 0.0658628 -0.0412838 -0.00807774 0.18 0. >> 0.303413 0.038569 0.0412838 -0.599871 0.115568 0. 4.17288e-11 0. 0. >> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >> 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 
>> [... remaining rows of element matrix values from the DMPlexMatSetClosure debug output omitted; the same values are quoted again later in this thread ...]
>> 9.96708e-10 >> [0]PETSC ERROR: #3 DMPlexMatSetClosure() line 5480 in >> /home/hartig/petsc/src/dm/impls/plex/plex.c >> [0]PETSC ERROR: #4 DMPlexComputeJacobian_Internal() line 2301 in >> /home/hartig/petsc/src/snes/utils/dmplexsnes.c >> [0]PETSC ERROR: #5 DMPlexTSComputeIJacobianFEM() line 233 in >> /home/hartig/petsc/src/ts/utils/dmplexts.c >> [0]PETSC ERROR: #6 TSComputeIJacobian_DMLocal() line 131 in >> /home/hartig/petsc/src/ts/utils/dmlocalts.c >> [0]PETSC ERROR: #7 TSComputeIJacobian() line 882 in >> /home/hartig/petsc/src/ts/interface/ts.c >> [0]PETSC ERROR: #8 SNESTSFormJacobian_Theta() line 515 in >> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >> [0]PETSC ERROR: #9 SNESTSFormJacobian() line 5044 in >> /home/hartig/petsc/src/ts/interface/ts.c >> [0]PETSC ERROR: #10 SNESComputeJacobian() line 2276 in >> /home/hartig/petsc/src/snes/interface/snes.c >> [0]PETSC ERROR: #11 SNESSolve_NEWTONLS() line 222 in >> /home/hartig/petsc/src/snes/impls/ls/ls.c >> [0]PETSC ERROR: #12 SNESSolve() line 3967 in >> /home/hartig/petsc/src/snes/interface/snes.c >> [0]PETSC ERROR: #13 TS_SNESSolve() line 171 in >> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >> [0]PETSC ERROR: #14 TSStep_Theta() line 211 in >> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >> [0]PETSC ERROR: #15 TSStep() line 3809 in /home/hartig/petsc/src/ts/inte >> rface/ts.c >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Mar 3 15:14:07 2017 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 3 Mar 2017 16:14:07 -0500 Subject: [petsc-users] Problems imposing boundary conditions In-Reply-To: References: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com> Message-ID: On Fri, Mar 3, 2017 at 12:56 PM, Sander Arens wrote: > Max, > > I'm assuming you use DMPlex for your mesh? If so, did you only specify the > faces in the DMLabel (and not vertices or edges). Do you get this error > only in parallel? > > If so, I can confirm this bug. I submitted a pull request for this > yesterday. > Yep, I saw Sander's pull request. I will get in merged in tomorrow when I get home to Houston. Thanks, Matt > On 3 March 2017 at 18:43, Lukas van de Wiel > wrote: > >> You have apparently preallocated the non-zeroes of you matrix, and the >> room was insufficient to accommodate all your equations. >> >> What happened after you tried: >> >> MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) >> >> >> Cheers >> Lukas >> >> On Fri, Mar 3, 2017 at 6:37 PM, Maximilian Hartig < >> imilian.hartig at gmail.com> wrote: >> >>> Hello, >>> >>> I am working on a transient structural FEM code with PETSc. I managed to >>> create a slow but functioning program with the use of petscFE and a TS >>> solver. The code runs fine until I try to restrict movement in all three >>> spatial directions for one face. I then get the error which is attached >>> below. >>> So apparently DMPlexMatSetClosure tries to write/read beyond what was >>> priorly allocated. I do however not call MatSeqAIJSetPreallocation myself >>> in the code. So I?m unsure where to start looking for the bug. In my >>> understanding, PETSc should know from the DM how much space to allocate. >>> Could you kindly give me a hint? 
>>> >>> Thanks, >>> >>> Max >>> >>> 0 SNES Function norm 2.508668036663e-06 >>> [0]PETSC ERROR: --------------------- Error Message >>> -------------------------------------------------------------- >>> [0]PETSC ERROR: Argument out of range >>> [0]PETSC ERROR: New nonzero at (41754,5) caused a malloc >>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to >>> turn off this check >>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >>> for trouble shooting. >>> [0]PETSC ERROR: Petsc Development GIT revision: v3.7.5-3223-g99077fc >>> GIT Date: 2017-02-28 13:41:43 -0600 >>> [0]PETSC ERROR: ./S3 on a arch-linux-gnu-intel named XXXXXXX by hartig >>> Fri Mar 3 17:55:57 2017 >>> [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-gnu-intel >>> --with-cc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicc >>> --with-cxx=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicpc >>> --with-fc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiifort >>> --download-ml >>> [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 455 in >>> /home/hartig/petsc/src/mat/impls/aij/seq/aij.c >>> [0]PETSC ERROR: #2 MatSetValues() line 1270 in >>> /home/hartig/petsc/src/mat/interface/matrix.c >>> [0]PETSC ERROR: [0]ERROR in DMPlexMatSetClosure >>> [0]mat for sieve point 60 >>> [0]mat row indices[0] = 41754 >>> [0]mat row indices[1] = 41755 >>> [0]mat row indices[2] = 41756 >>> [0]mat row indices[3] = 41760 >>> [0]mat row indices[4] = 41761 >>> [0]mat row indices[5] = 41762 >>> [0]mat row indices[6] = 41766 >>> [0]mat row indices[7] = -41768 >>> [0]mat row indices[8] = 41767 >>> [0]mat row indices[9] = 41771 >>> [0]mat row indices[10] = -41773 >>> [0]mat row indices[11] = 41772 >>> [0]mat row indices[12] = 41776 >>> [0]mat row indices[13] = 41777 >>> [0]mat row indices[14] = 41778 >>> [0]mat row indices[15] = 41782 >>> [0]mat row indices[16] = -41784 >>> [0]mat row indices[17] = 41783 >>> [0]mat row indices[18] = 261 >>> [0]mat row indices[19] = -263 >>> [0]mat row indices[20] = 262 >>> [0]mat row indices[21] = 24318 >>> [0]mat row indices[22] = 24319 >>> [0]mat row indices[23] = 24320 >>> [0]mat row indices[24] = -7 >>> [0]mat row indices[25] = -8 >>> [0]mat row indices[26] = 6 >>> [0]mat row indices[27] = 1630 >>> [0]mat row indices[28] = -1632 >>> [0]mat row indices[29] = 1631 >>> [0]mat row indices[30] = 41757 >>> [0]mat row indices[31] = 41758 >>> [0]mat row indices[32] = 41759 >>> [0]mat row indices[33] = 41763 >>> [0]mat row indices[34] = 41764 >>> [0]mat row indices[35] = 41765 >>> [0]mat row indices[36] = 41768 >>> [0]mat row indices[37] = 41769 >>> [0]mat row indices[38] = 41770 >>> [0]mat row indices[39] = 41773 >>> [0]mat row indices[40] = 41774 >>> [0]mat row indices[41] = 41775 >>> [0]mat row indices[42] = 41779 >>> [0]mat row indices[43] = 41780 >>> [0]mat row indices[44] = 41781 >>> [0]mat row indices[45] = 41784 >>> [0]mat row indices[46] = 41785 >>> [0]mat row indices[47] = 41786 >>> [0]mat row indices[48] = 263 >>> [0]mat row indices[49] = 264 >>> [0]mat row indices[50] = 265 >>> [0]mat row indices[51] = 24321 >>> [0]mat row indices[52] = 24322 >>> [0]mat row indices[53] = 24323 >>> [0]mat row indices[54] = 5 >>> [0]mat row indices[55] = 6 >>> [0]mat row indices[56] = 7 >>> [0]mat row indices[57] = 1632 >>> [0]mat row indices[58] = 1633 >>> [0]mat row indices[59] = 1634 >>> [0] 1.29801 0.0998428 -0.275225 1.18171 -0.0093323 0.055045 1.18146 >>> 0.00525527 -0.11009 -0.588378 0.264666 -0.0275225 -2.39586 -0.210511 >>> 0.22018 -0.621071 
>>> [... remaining rows of element matrix values printed with the DMPlexMatSetClosure error omitted ...]
>>> 9.96708e-10 >>> [0]PETSC ERROR: #3 DMPlexMatSetClosure() line 5480 in >>> /home/hartig/petsc/src/dm/impls/plex/plex.c >>> [0]PETSC ERROR: #4 DMPlexComputeJacobian_Internal() line 2301 in >>> /home/hartig/petsc/src/snes/utils/dmplexsnes.c >>> [0]PETSC ERROR: #5 DMPlexTSComputeIJacobianFEM() line 233 in >>> /home/hartig/petsc/src/ts/utils/dmplexts.c >>> [0]PETSC ERROR: #6 TSComputeIJacobian_DMLocal() line 131 in >>> /home/hartig/petsc/src/ts/utils/dmlocalts.c >>> [0]PETSC ERROR: #7 TSComputeIJacobian() line 882 in >>> /home/hartig/petsc/src/ts/interface/ts.c >>> [0]PETSC ERROR: #8 SNESTSFormJacobian_Theta() line 515 in >>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>> [0]PETSC ERROR: #9 SNESTSFormJacobian() line 5044 in >>> /home/hartig/petsc/src/ts/interface/ts.c >>> [0]PETSC ERROR: #10 SNESComputeJacobian() line 2276 in >>> /home/hartig/petsc/src/snes/interface/snes.c >>> [0]PETSC ERROR: #11 SNESSolve_NEWTONLS() line 222 in >>> /home/hartig/petsc/src/snes/impls/ls/ls.c >>> [0]PETSC ERROR: #12 SNESSolve() line 3967 in >>> /home/hartig/petsc/src/snes/interface/snes.c >>> [0]PETSC ERROR: #13 TS_SNESSolve() line 171 in >>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>> [0]PETSC ERROR: #14 TSStep_Theta() line 211 in >>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>> [0]PETSC ERROR: #15 TSStep() line 3809 in /home/hartig/petsc/src/ts/inte >>> rface/ts.c >>> >>> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From fangbowa at buffalo.edu Fri Mar 3 15:31:59 2017 From: fangbowa at buffalo.edu (Fangbo Wang) Date: Fri, 3 Mar 2017 16:31:59 -0500 Subject: [petsc-users] How can I do matrix addition with different nonzeros patterns correctly? Message-ID: Hi, I am a little bit confused on how to appropriately do matrix addition with different nonzeros patterns. Suppose I want to do D=2*A+3*B+4*C, A, B and C all have different nonzero patterns. I know I can use MatDuplicate, MatCopy, MatConvert to create a matrix D, which way is the right way? What's the difference between MatDuplicate and MatCopy? Thank you very much! Best regards, Fangbo -- Fangbo Wang, PhD student Stochastic Geomechanics Research Group Department of Civil, Structural and Environmental Engineering University at Buffalo Email: *fangbowa at buffalo.edu * -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Mar 3 15:55:23 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 3 Mar 2017 15:55:23 -0600 Subject: [petsc-users] How can I do matrix addition with different nonzeros patterns correctly? In-Reply-To: References: Message-ID: > On Mar 3, 2017, at 3:31 PM, Fangbo Wang wrote: > > Hi, > > I am a little bit confused on how to appropriately do matrix addition with different nonzeros patterns. > > Suppose I want to do D=2*A+3*B+4*C, A, B and C all have different nonzero patterns. > I know I can use MatDuplicate, MatCopy, MatConvert to create a matrix D, which way is the right way? There is no particular "right way". You could use a MatDuplicate() then a MatScale and then two MatAXPY() D=2*A+3*B+4*C looks like a MATLAB thing, not something you would need to do when solving PDEs, where do you get this need? Perhaps there is an alternative way to get what you want. > > What's the difference between MatDuplicate and MatCopy? 
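For illustration, a minimal untested sketch of that MatDuplicate/MatScale/MatAXPY sequence for D = 2*A + 3*B + 4*C. It assumes A, B and C are already assembled matrices of identical global dimensions and that D does not exist yet; the helper name FormD is made up for the example and error checking is kept to CHKERRQ only:

#include <petscmat.h>

static PetscErrorCode FormD(Mat A, Mat B, Mat C, Mat *D)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatDuplicate(A, MAT_COPY_VALUES, D);CHKERRQ(ierr);            /* *D gets A's nonzero pattern and values */
  ierr = MatScale(*D, 2.0);CHKERRQ(ierr);                              /* *D = 2*A                               */
  ierr = MatAXPY(*D, 3.0, B, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); /* *D = *D + 3*B, structures may differ   */
  ierr = MatAXPY(*D, 4.0, C, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); /* *D = *D + 4*C                          */
  PetscFunctionReturn(0);
}

With DIFFERENT_NONZERO_PATTERN, MatAXPY accepts operands whose nonzero structures differ, so the result holds an entry wherever either matrix has one; SAME_NONZERO_PATTERN is cheaper but only correct when the structures really do match.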
MatDuplicate() CREATES a new matrix while MatCopy() copies values from an already existing matrix to another already existing matrix. > > Thank you very much! > > Best regards, > > Fangbo > > > -- > Fangbo Wang, PhD student > Stochastic Geomechanics Research Group > Department of Civil, Structural and Environmental Engineering > University at Buffalo > Email: fangbowa at buffalo.edu From imilian.hartig at gmail.com Fri Mar 3 16:07:55 2017 From: imilian.hartig at gmail.com (Maximilian Hartig) Date: Fri, 3 Mar 2017 23:07:55 +0100 Subject: [petsc-users] Problems imposing boundary conditions In-Reply-To: References: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com> Message-ID: Yes Sander, your assessment is correct. I use DMPlex and specify the BC using DMLabel. I do however get this error also when running in serial. Thanks, Max > On 3 Mar 2017, at 22:14, Matthew Knepley wrote: > > On Fri, Mar 3, 2017 at 12:56 PM, Sander Arens > wrote: > Max, > > I'm assuming you use DMPlex for your mesh? If so, did you only specify the faces in the DMLabel (and not vertices or edges). Do you get this error only in parallel? > > If so, I can confirm this bug. I submitted a pull request for this yesterday. > > Yep, I saw Sander's pull request. I will get in merged in tomorrow when I get home to Houston. > > Thanks, > > Matt > > On 3 March 2017 at 18:43, Lukas van de Wiel > wrote: > You have apparently preallocated the non-zeroes of you matrix, and the room was insufficient to accommodate all your equations. > > What happened after you tried: > > MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) > > > Cheers > Lukas > > On Fri, Mar 3, 2017 at 6:37 PM, Maximilian Hartig > wrote: > Hello, > > I am working on a transient structural FEM code with PETSc. I managed to create a slow but functioning program with the use of petscFE and a TS solver. The code runs fine until I try to restrict movement in all three spatial directions for one face. I then get the error which is attached below. > So apparently DMPlexMatSetClosure tries to write/read beyond what was priorly allocated. I do however not call MatSeqAIJSetPreallocation myself in the code. So I?m unsure where to start looking for the bug. In my understanding, PETSc should know from the DM how much space to allocate. > Could you kindly give me a hint? > > Thanks, > > Max > > 0 SNES Function norm 2.508668036663e-06 > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Argument out of range > [0]PETSC ERROR: New nonzero at (41754,5) caused a malloc > Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn off this check > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [0]PETSC ERROR: Petsc Development GIT revision: v3.7.5-3223-g99077fc GIT Date: 2017-02-28 13:41:43 -0600 > [0]PETSC ERROR: ./S3 on a arch-linux-gnu-intel named XXXXXXX by hartig Fri Mar 3 17:55:57 2017 > [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-gnu-intel --with-cc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicc --with-cxx=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicpc --with-fc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiifort --download-ml > [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 455 in /home/hartig/petsc/src/mat/impls/aij/seq/aij.c > [0]PETSC ERROR: #2 MatSetValues() line 1270 in /home/hartig/petsc/src/mat/interface/matrix.c > [0]PETSC ERROR: [0]ERROR in DMPlexMatSetClosure > [0]mat for sieve point 60 > [0]mat row indices[0] = 41754 > [0]mat row indices[1] = 41755 > [0]mat row indices[2] = 41756 > [0]mat row indices[3] = 41760 > [0]mat row indices[4] = 41761 > [0]mat row indices[5] = 41762 > [0]mat row indices[6] = 41766 > [0]mat row indices[7] = -41768 > [0]mat row indices[8] = 41767 > [0]mat row indices[9] = 41771 > [0]mat row indices[10] = -41773 > [0]mat row indices[11] = 41772 > [0]mat row indices[12] = 41776 > [0]mat row indices[13] = 41777 > [0]mat row indices[14] = 41778 > [0]mat row indices[15] = 41782 > [0]mat row indices[16] = -41784 > [0]mat row indices[17] = 41783 > [0]mat row indices[18] = 261 > [0]mat row indices[19] = -263 > [0]mat row indices[20] = 262 > [0]mat row indices[21] = 24318 > [0]mat row indices[22] = 24319 > [0]mat row indices[23] = 24320 > [0]mat row indices[24] = -7 > [0]mat row indices[25] = -8 > [0]mat row indices[26] = 6 > [0]mat row indices[27] = 1630 > [0]mat row indices[28] = -1632 > [0]mat row indices[29] = 1631 > [0]mat row indices[30] = 41757 > [0]mat row indices[31] = 41758 > [0]mat row indices[32] = 41759 > [0]mat row indices[33] = 41763 > [0]mat row indices[34] = 41764 > [0]mat row indices[35] = 41765 > [0]mat row indices[36] = 41768 > [0]mat row indices[37] = 41769 > [0]mat row indices[38] = 41770 > [0]mat row indices[39] = 41773 > [0]mat row indices[40] = 41774 > [0]mat row indices[41] = 41775 > [0]mat row indices[42] = 41779 > [0]mat row indices[43] = 41780 > [0]mat row indices[44] = 41781 > [0]mat row indices[45] = 41784 > [0]mat row indices[46] = 41785 > [0]mat row indices[47] = 41786 > [0]mat row indices[48] = 263 > [0]mat row indices[49] = 264 > [0]mat row indices[50] = 265 > [0]mat row indices[51] = 24321 > [0]mat row indices[52] = 24322 > [0]mat row indices[53] = 24323 > [0]mat row indices[54] = 5 > [0]mat row indices[55] = 6 > [0]mat row indices[56] = 7 > [0]mat row indices[57] = 1632 > [0]mat row indices[58] = 1633 > [0]mat row indices[59] = 1634 > [0] 1.29801 0.0998428 -0.275225 1.18171 -0.0093323 0.055045 1.18146 0.00525527 -0.11009 -0.588378 0.264666 -0.0275225 -2.39586 -0.210511 0.22018 -0.621071 0.0500786 0.137613 -0.180869 -0.0974804 0.0344031 -0.0302673 -0.09 0. -0.145175 -0.00383346 -0.00688063 0.300442 -0.00868618 -0.0275225 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. > [0] 0.0998428 0.590663 -0.0320009 0.0270594 0.360043 0.0297282 -0.0965489 0.270936 -0.0652389 0.32647 -0.171351 -0.00845384 -0.206902 -0.657189 0.0279137 0.0500786 -0.197561 -0.0160508 -0.12748 -0.138996 0.0408591 -0.06 -0.105935 0.0192308 0.00161757 -0.0361182 0.0042968 -0.0141372 0.0855084 -0.000284088 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 
4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. > [0] -0.275225 -0.0320009 0.527521 -0.055045 0.0285918 0.234796 -0.165135 -0.0658071 0.322754 0.0275225 -0.0207062 -0.114921 0.33027 0.0418706 -0.678455 0.137613 -0.0160508 -0.235826 0.0344031 0.0312437 -0.0845583 0. 0.0288462 -0.0302673 0.00688063 0.00443884 -0.0268103 -0.0412838 -0.000426133 0.0857668 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 > [0] 1.18171 0.0270594 -0.055045 1.29651 -0.0821157 0.275225 1.1468 -0.0675282 0.11009 -0.637271 0.141058 -0.137613 -2.37741 -0.0649437 -0.22018 -0.588378 0.24647 0.0275225 -0.140937 -0.00838243 0.00688063 -0.0175533 -0.09 0. -0.16373 -0.0747355 -0.0344031 0.300255 -0.026882 0.0275225 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. > [0] -0.0093323 0.360043 0.0285918 -0.0821157 0.585404 -0.0263191 -0.205724 0.149643 0.0652389 0.141058 -0.254263 0.0452109 0.011448 -0.592598 -0.0279137 0.344666 -0.171351 -0.0207062 0.00616654 -0.0212853 -0.0115868 -0.06 -0.0614365 -0.0192308 -0.104736 -0.0790071 -0.0335691 -0.041431 0.084851 0.000284088 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. > [0] 0.055045 0.0297282 0.234796 0.275225 -0.0263191 0.526019 0.165135 0.0658071 0.288099 -0.137613 0.0452109 -0.252027 -0.33027 -0.0418706 -0.660001 -0.0275225 -0.00845384 -0.114921 -0.00688063 -0.0117288 -0.0225723 0. -0.0288462 -0.0175533 -0.0344031 -0.0239537 -0.0674185 0.0412838 0.000426133 0.085579 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 > [0] 1.18146 -0.0965489 -0.165135 1.1468 -0.205724 0.165135 3.70665 0.626591 3.1198e-14 -2.37741 0.0332522 -0.275225 -2.44501 -0.417727 4.66207e-17 -2.39586 -0.148706 0.275225 0.283843 0.0476669 0.0137613 0.00972927 0.06 0. 0.288268 0.0567649 -0.0137613 0.601523 0.0444318 -1.2387e-17 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. > [0] 0.00525527 0.270936 -0.0658071 -0.0675282 0.149643 0.0658071 0.626591 1.29259 -0.02916 -0.0867478 -0.592598 -0.0413024 -0.417727 -0.829208 6.46318e-18 -0.268706 -0.657189 0.0413024 0.03402 0.0715157 0.0179272 0.04 0.0340524 1.77708e-18 0.0704117 0.0870061 0.0112328 0.0644318 0.17325 -1.41666e-19 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. > [0] -0.11009 -0.0652389 0.322754 0.11009 0.0652389 0.288099 3.12405e-14 -0.02916 1.21876 -0.275225 -0.0284819 -0.660001 9.50032e-17 9.55728e-18 -0.727604 0.275225 0.0284819 -0.678455 0.055045 0.0279687 0.0250605 0. 1.71863e-18 0.00972927 -0.055045 0.00119132 0.0294863 -1.47451e-17 -1.90582e-19 0.172172 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 
-1.56483e-11 > [0] -0.588378 0.32647 0.0275225 -0.637271 0.141058 -0.137613 -2.37741 -0.0867478 -0.275225 3.68138 0.0265907 3.13395e-14 1.1468 -0.107528 0.11009 1.18171 0.00886356 -3.13222e-14 -1.06248 -0.175069 0.158254 0.00657075 -0.03 0. 0.152747 -0.00526446 0.0344031 -1.50368 -0.0983732 0.0825675 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. > [0] 0.264666 -0.171351 -0.0207062 0.141058 -0.254263 0.0452109 0.0332522 -0.592598 -0.0284819 0.0265907 1.20415 -0.0932626 -0.165724 0.149643 0.0524184 0.00886356 0.360043 0.0419805 -0.131422 -0.326529 0.0132913 -0.02 0.0229976 -0.00641026 -0.0152645 0.0405681 -0.00489246 -0.14202 -0.432665 0.000852265 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. > [0] -0.0275225 -0.00845384 -0.114921 -0.137613 0.0452109 -0.252027 -0.275225 -0.0413024 -0.660001 3.13785e-14 -0.0932626 1.19349 0.165135 0.0786276 0.288099 -3.12866e-14 0.0163395 0.234796 0.116971 0.0128652 -0.322147 0. -0.00961538 0.00657075 0.0344031 -0.00168733 0.0564359 0.123851 0.0012784 -0.430298 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 > [0] -2.39586 -0.206902 0.33027 -2.37741 0.011448 -0.33027 -2.44501 -0.417727 6.38053e-17 1.1468 -0.165724 0.165135 4.88671 0.435454 0. 1.18146 -0.0565489 -0.165135 0.307839 0.0658628 -0.0412838 -0.00807774 0.18 0. 0.303413 0.038569 0.0412838 -0.599871 0.115568 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. > [0] -0.210511 -0.657189 0.0418706 -0.0649437 -0.592598 -0.0418706 -0.417727 -0.829208 6.30468e-18 -0.107528 0.149643 0.0786276 0.435454 1.64686 0. -0.0347447 0.270936 -0.0786276 0.0613138 0.111396 -0.0100415 0.12 -0.0282721 0. 0.043118 0.0959058 0.0100415 0.175568 -0.167469 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. > [0] 0.22018 0.0279137 -0.678455 -0.22018 -0.0279137 -0.660001 4.70408e-17 7.53383e-18 -0.727604 0.11009 0.0524184 0.288099 0. 0. 1.4519 -0.11009 -0.0524184 0.322754 -0.0275225 -0.00669434 0.0931634 0. 0. -0.00807774 0.0275225 0.00669434 0.0887375 0. 0. -0.17052 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 > [0] -0.621071 0.0500786 0.137613 -0.588378 0.344666 -0.0275225 -2.39586 -0.268706 0.275225 1.18171 0.00886356 -3.12954e-14 1.18146 -0.0347447 -0.11009 3.64748 0.0265907 3.12693e-14 0.152935 0.0174804 -0.0344031 0.00233276 -0.03 0. -1.0575 -0.0704425 -0.158254 -1.50311 -0.0437857 -0.0825675 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 
> [0] 0.0500786 -0.197561 -0.0160508 0.24647 -0.171351 -0.00845384 -0.148706 -0.657189 0.0284819 0.00886356 0.360043 0.0163395 -0.0565489 0.270936 -0.0524184 0.0265907 1.08549 0.0349425 0.00748035 0.0412255 -0.00239755 -0.02 0.00816465 0.00641026 -0.0540894 -0.309066 -0.00600133 -0.0601388 -0.430693 -0.000852265 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. > [0] 0.137613 -0.0160508 -0.235826 0.0275225 -0.0207062 -0.114921 0.275225 0.0413024 -0.678455 -3.13299e-14 0.0419805 0.234796 -0.165135 -0.0786276 0.322754 3.12753e-14 0.0349425 1.15959 -0.0344031 -0.00560268 0.0566238 0. 0.00961538 0.00233276 -0.116971 -0.00557519 -0.317157 -0.123851 -0.0012784 -0.429734 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 > [0] -0.180869 -0.12748 0.0344031 -0.140937 0.00616654 -0.00688063 0.283843 0.03402 0.055045 -1.06248 -0.131422 0.116971 0.307839 0.0613138 -0.0275225 0.152935 0.00748035 -0.0344031 0.479756 0.112441 -0.103209 0.00698363 0.03 0. -0.14792 -0.0238335 -0.00688063 0.300855 0.0313138 -0.0275225 -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. > [0] -0.0974804 -0.138996 0.0312437 -0.00838243 -0.0212853 -0.0117288 0.0476669 0.0715157 0.0279687 -0.175069 -0.326529 0.0128652 0.0658628 0.111396 -0.00669434 0.0174804 0.0412255 -0.00560268 0.112441 0.197005 -0.0360388 0.02 0.0244427 -0.00641026 -0.0283824 -0.045728 -0.00531859 0.0458628 0.0869535 -0.000284088 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. > [0] 0.0344031 0.0408591 -0.0845583 0.00688063 -0.0115868 -0.0225723 0.0137613 0.0179272 0.0250605 0.158254 0.0132913 -0.322147 -0.0412838 -0.0100415 0.0931634 -0.0344031 -0.00239755 0.0566238 -0.103209 -0.0360388 0.190822 0. -0.00961538 0.00698363 0.00688063 -0.00197142 -0.029556 -0.0412838 -0.000426133 0.0861797 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 > [0] -0.0302673 -0.06 0. -0.0175533 -0.06 0. 0.00972927 0.04 0. 0.00657075 -0.02 0. -0.00807774 0.12 0. 0.00233276 -0.02 0. 0.00698363 0.02 0. 0.0279492 0. 0. 0.00274564 0.02 0. -0.000412882 -0.04 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. > [0] -0.09 -0.105935 0.0288462 -0.09 -0.0614365 -0.0288462 0.06 0.0340524 3.0201e-18 -0.03 0.0229976 -0.00961538 0.18 -0.0282721 0. -0.03 0.00816465 0.00961538 0.03 0.0244427 -0.00961538 0. 0.097822 0. 0.03 0.00960973 0.00961538 -0.06 -0.00144509 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. > [0] 0. 0.0192308 -0.0302673 0. -0.0192308 -0.0175533 0. 1.8315e-18 0.00972927 0. -0.00641026 0.00657075 0. 0. -0.00807774 0. 0.00641026 0.00233276 0. -0.00641026 0.00698363 0. 0. 0.0279492 0. 0.00641026 0.00274564 0. 0. -0.000412882 0. 0. -1.04322e-11 0. 
0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 > [0] -0.145175 0.00161757 0.00688063 -0.16373 -0.104736 -0.0344031 0.288268 0.0704117 -0.055045 0.152747 -0.0152645 0.0344031 0.303413 0.043118 0.0275225 -1.0575 -0.0540894 -0.116971 -0.14792 -0.0283824 0.00688063 0.00274564 0.03 0. 0.466478 0.0442066 0.103209 0.300667 0.013118 0.0275225 -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. > [0] -0.00383346 -0.0361182 0.00443884 -0.0747355 -0.0790071 -0.0239537 0.0567649 0.0870061 0.00119132 -0.00526446 0.0405681 -0.00168733 0.038569 0.0959058 0.00669434 -0.0704425 -0.309066 -0.00557519 -0.0238335 -0.045728 -0.00197142 0.02 0.00960973 0.00641026 0.0442066 0.150534 0.0141688 0.018569 0.0862961 0.000284088 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. > [0] -0.00688063 0.0042968 -0.0268103 -0.0344031 -0.0335691 -0.0674185 -0.0137613 0.0112328 0.0294863 0.0344031 -0.00489246 0.0564359 0.0412838 0.0100415 0.0887375 -0.158254 -0.00600133 -0.317157 -0.00688063 -0.00531859 -0.029556 0. 0.00961538 0.00274564 0.103209 0.0141688 0.177545 0.0412838 0.000426133 0.0859919 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 > [0] 0.300442 -0.0141372 -0.0412838 0.300255 -0.041431 0.0412838 0.601523 0.0644318 -1.72388e-17 -1.50368 -0.14202 0.123851 -0.599871 0.175568 0. -1.50311 -0.0601388 -0.123851 0.300855 0.0458628 -0.0412838 -0.000412882 -0.06 0. 0.300667 0.018569 0.0412838 1.80333 0.0132953 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. > [0] -0.00868618 0.0855084 -0.000426133 -0.026882 0.084851 0.000426133 0.0444318 0.17325 -1.17738e-19 -0.0983732 -0.432665 0.0012784 0.115568 -0.167469 0. -0.0437857 -0.430693 -0.0012784 0.0313138 0.0869535 -0.000426133 -0.04 -0.00144509 0. 0.013118 0.0862961 0.000426133 0.0132953 0.515413 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. > [0] -0.0275225 -0.000284088 0.0857668 0.0275225 0.000284088 0.085579 -1.41488e-17 -8.91502e-20 0.172172 0.0825675 0.000852265 -0.430298 0. 0. -0.17052 -0.0825675 -0.000852265 -0.429734 -0.0275225 -0.000284088 0.0861797 0. 0. -0.000412882 0.0275225 0.000284088 0.0859919 0. 0. 0.515276 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 > [0] -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. > [0] 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 
-2.65789e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. > [0] 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 > [0] -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. > [0] 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. > [0] 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 > [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. > [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. > [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 > [0] -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. > [0] 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 
2.65789e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. > [0] 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 > [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. > [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. > [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 > [0] -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. > [0] 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. > [0] 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 > [0] 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. > [0] 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. > [0] 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 
1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 > [0] 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. > [0] 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. > [0] 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 > [0] 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. > [0] 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. > [0] 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 > [0] 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. > [0] 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. > [0] 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. 
-9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 > [0]PETSC ERROR: #3 DMPlexMatSetClosure() line 5480 in /home/hartig/petsc/src/dm/impls/plex/plex.c > [0]PETSC ERROR: #4 DMPlexComputeJacobian_Internal() line 2301 in /home/hartig/petsc/src/snes/utils/dmplexsnes.c > [0]PETSC ERROR: #5 DMPlexTSComputeIJacobianFEM() line 233 in /home/hartig/petsc/src/ts/utils/dmplexts.c > [0]PETSC ERROR: #6 TSComputeIJacobian_DMLocal() line 131 in /home/hartig/petsc/src/ts/utils/dmlocalts.c > [0]PETSC ERROR: #7 TSComputeIJacobian() line 882 in /home/hartig/petsc/src/ts/interface/ts.c > [0]PETSC ERROR: #8 SNESTSFormJacobian_Theta() line 515 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c > [0]PETSC ERROR: #9 SNESTSFormJacobian() line 5044 in /home/hartig/petsc/src/ts/interface/ts.c > [0]PETSC ERROR: #10 SNESComputeJacobian() line 2276 in /home/hartig/petsc/src/snes/interface/snes.c > [0]PETSC ERROR: #11 SNESSolve_NEWTONLS() line 222 in /home/hartig/petsc/src/snes/impls/ls/ls.c > [0]PETSC ERROR: #12 SNESSolve() line 3967 in /home/hartig/petsc/src/snes/interface/snes.c > [0]PETSC ERROR: #13 TS_SNESSolve() line 171 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c > [0]PETSC ERROR: #14 TSStep_Theta() line 211 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c > [0]PETSC ERROR: #15 TSStep() line 3809 in /home/hartig/petsc/src/ts/interface/ts.c > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From fangbowa at buffalo.edu Fri Mar 3 16:12:09 2017 From: fangbowa at buffalo.edu (Fangbo Wang) Date: Fri, 3 Mar 2017 17:12:09 -0500 Subject: [petsc-users] How can I do matrix addition with different nonzeros patterns correctly? In-Reply-To: References: Message-ID: I am doing analysis on wave propagation through a linear solid media using finite element method. The PDE from the system can be discretized to a system of linear equations. Newmark method is used to solve this problem with changing waves along time. Here, the A, B, C, D mean stiffness matrix, mass matrix, damping matrix, effective stiffness matrix of the system, respectively. The scalars are just some random numbers I put. On Fri, Mar 3, 2017 at 4:55 PM, Barry Smith wrote: > > > On Mar 3, 2017, at 3:31 PM, Fangbo Wang wrote: > > > > Hi, > > > > I am a little bit confused on how to appropriately do matrix addition > with different nonzeros patterns. > > > > Suppose I want to do D=2*A+3*B+4*C, A, B and C all have different > nonzero patterns. > > I know I can use MatDuplicate, MatCopy, MatConvert to create a matrix D, > which way is the right way? > > There is no particular "right way". You could use a MatDuplicate() then > a MatScale and then two MatAXPY() > > D=2*A+3*B+4*C looks like a MATLAB thing, not something you would need > to do when solving PDEs, where do you get this need? Perhaps there is an > alternative way to get what you want. > > > > > What's the difference between MatDuplicate and MatCopy? > > MatDuplicate() CREATES a new matrix while MatCopy() copies values from > an already existing matrix to another already existing matrix. > > > > > > Thank you very much! 
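For reference, the MatDuplicate / MatScale / MatAXPY sequence described
above can be written out as a minimal sketch. It assumes A, B and C are
already assembled AIJ matrices; the names and error-checking style are
placeholders for illustration only, not code taken from this thread:

    PetscErrorCode ierr;
    Mat            D;

    /* create D with A's nonzero pattern and values */
    ierr = MatDuplicate(A, MAT_COPY_VALUES, &D);CHKERRQ(ierr);
    /* D = 2*A */
    ierr = MatScale(D, 2.0);CHKERRQ(ierr);
    /* D = D + 3*B, allowing B's pattern to differ from D's */
    ierr = MatAXPY(D, 3.0, B, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
    /* D = D + 4*C */
    ierr = MatAXPY(D, 4.0, C, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
    /* ... use D, then MatDestroy(&D) when finished */

With DIFFERENT_NONZERO_PATTERN, MatAXPY should merge the sparsity
patterns itself, so D does not need to be preallocated by hand; MatCopy,
by contrast, only fills a matrix that already exists.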
> > > > Best regards, > > > > Fangbo > > > > > > -- > > Fangbo Wang, PhD student > > Stochastic Geomechanics Research Group > > Department of Civil, Structural and Environmental Engineering > > University at Buffalo > > Email: fangbowa at buffalo.edu > > -- Fangbo Wang, PhD student Stochastic Geomechanics Research Group Department of Civil, Structural and Environmental Engineering University at Buffalo Email: *fangbowa at buffalo.edu * -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sander.Arens at ugent.be Sat Mar 4 04:34:24 2017 From: Sander.Arens at ugent.be (Sander Arens) Date: Sat, 4 Mar 2017 11:34:24 +0100 Subject: [petsc-users] Problems imposing boundary conditions In-Reply-To: References: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com> Message-ID: Hmm, strange you also get the error in serial. Can you maybe send a minimal working which demonstrates the error? Thanks, Sander On 3 March 2017 at 23:07, Maximilian Hartig wrote: > Yes Sander, your assessment is correct. I use DMPlex and specify the BC > using DMLabel. I do however get this error also when running in serial. > > Thanks, > Max > > On 3 Mar 2017, at 22:14, Matthew Knepley wrote: > > On Fri, Mar 3, 2017 at 12:56 PM, Sander Arens > wrote: > >> Max, >> >> I'm assuming you use DMPlex for your mesh? If so, did you only specify >> the faces in the DMLabel (and not vertices or edges). Do you get this error >> only in parallel? >> >> If so, I can confirm this bug. I submitted a pull request for this >> yesterday. >> > > Yep, I saw Sander's pull request. I will get in merged in tomorrow when I > get home to Houston. > > Thanks, > > Matt > > >> On 3 March 2017 at 18:43, Lukas van de Wiel >> wrote: >> >>> You have apparently preallocated the non-zeroes of you matrix, and the >>> room was insufficient to accommodate all your equations. >>> >>> What happened after you tried: >>> >>> MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) >>> >>> >>> Cheers >>> Lukas >>> >>> On Fri, Mar 3, 2017 at 6:37 PM, Maximilian Hartig >> com> wrote: >>> >>>> Hello, >>>> >>>> I am working on a transient structural FEM code with PETSc. I managed >>>> to create a slow but functioning program with the use of petscFE and a TS >>>> solver. The code runs fine until I try to restrict movement in all three >>>> spatial directions for one face. I then get the error which is attached >>>> below. >>>> So apparently DMPlexMatSetClosure tries to write/read beyond what was >>>> priorly allocated. I do however not call MatSeqAIJSetPreallocation myself >>>> in the code. So I?m unsure where to start looking for the bug. In my >>>> understanding, PETSc should know from the DM how much space to allocate. >>>> Could you kindly give me a hint? >>>> >>>> Thanks, >>>> >>>> Max >>>> >>>> 0 SNES Function norm 2.508668036663e-06 >>>> [0]PETSC ERROR: --------------------- Error Message >>>> -------------------------------------------------------------- >>>> [0]PETSC ERROR: Argument out of range >>>> [0]PETSC ERROR: New nonzero at (41754,5) caused a malloc >>>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to >>>> turn off this check >>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >>>> for trouble shooting. 
>>>> [ PETSc error output, matrix row indices and matrix values omitted here -- identical to the error output quoted earlier in this thread ]
>>>> 9.96708e-10 >>>> [0]PETSC ERROR: #3 DMPlexMatSetClosure() line 5480 in >>>> /home/hartig/petsc/src/dm/impls/plex/plex.c >>>> [0]PETSC ERROR: #4 DMPlexComputeJacobian_Internal() line 2301 in >>>> /home/hartig/petsc/src/snes/utils/dmplexsnes.c >>>> [0]PETSC ERROR: #5 DMPlexTSComputeIJacobianFEM() line 233 in >>>> /home/hartig/petsc/src/ts/utils/dmplexts.c >>>> [0]PETSC ERROR: #6 TSComputeIJacobian_DMLocal() line 131 in >>>> /home/hartig/petsc/src/ts/utils/dmlocalts.c >>>> [0]PETSC ERROR: #7 TSComputeIJacobian() line 882 in >>>> /home/hartig/petsc/src/ts/interface/ts.c >>>> [0]PETSC ERROR: #8 SNESTSFormJacobian_Theta() line 515 in >>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>> [0]PETSC ERROR: #9 SNESTSFormJacobian() line 5044 in >>>> /home/hartig/petsc/src/ts/interface/ts.c >>>> [0]PETSC ERROR: #10 SNESComputeJacobian() line 2276 in >>>> /home/hartig/petsc/src/snes/interface/snes.c >>>> [0]PETSC ERROR: #11 SNESSolve_NEWTONLS() line 222 in >>>> /home/hartig/petsc/src/snes/impls/ls/ls.c >>>> [0]PETSC ERROR: #12 SNESSolve() line 3967 in >>>> /home/hartig/petsc/src/snes/interface/snes.c >>>> [0]PETSC ERROR: #13 TS_SNESSolve() line 171 in >>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>> [0]PETSC ERROR: #14 TSStep_Theta() line 211 in >>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>> [0]PETSC ERROR: #15 TSStep() line 3809 in /home/hartig/petsc/src/ts/inte >>>> rface/ts.c >>>> >>>> >>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Mar 4 09:49:20 2017 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 4 Mar 2017 10:49:20 -0500 Subject: [petsc-users] How can I do matrix addition with different nonzeros patterns correctly? In-Reply-To: References: Message-ID: On Fri, Mar 3, 2017 at 5:12 PM, Fangbo Wang wrote: > I am doing analysis on wave propagation through a linear solid media using > finite element method. The PDE from the system can be discretized to a > system of linear equations. > Newmark method is used to solve this problem with changing waves along > time. > > Here, the A, B, C, D mean stiffness matrix, mass matrix, damping matrix, > effective stiffness matrix of the system, respectively. The scalars are > just some random numbers I put. > Since this is explicit, you should just be assembling the entire system directly in to one matrix, rather than making several matrices and doing algebra. Matt > On Fri, Mar 3, 2017 at 4:55 PM, Barry Smith wrote: > >> >> > On Mar 3, 2017, at 3:31 PM, Fangbo Wang wrote: >> > >> > Hi, >> > >> > I am a little bit confused on how to appropriately do matrix addition >> with different nonzeros patterns. >> > >> > Suppose I want to do D=2*A+3*B+4*C, A, B and C all have different >> nonzero patterns. >> > I know I can use MatDuplicate, MatCopy, MatConvert to create a matrix >> D, which way is the right way? >> >> There is no particular "right way". You could use a MatDuplicate() >> then a MatScale and then two MatAXPY() >> >> D=2*A+3*B+4*C looks like a MATLAB thing, not something you would need >> to do when solving PDEs, where do you get this need? Perhaps there is an >> alternative way to get what you want. >> >> > >> > What's the difference between MatDuplicate and MatCopy? 
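For reference, a minimal sketch of the MatDuplicate/MatScale/MatAXPY sequence suggested above for forming D = 2*A + 3*B + 4*C. This is not the poster's code: it assumes A, B and C are already-assembled AIJ matrices of the same dimensions on the same communicator, and the error handling follows the usual PETSc ierr/CHKERRQ pattern.

    Mat            D;
    PetscErrorCode ierr;

    ierr = MatDuplicate(A, MAT_COPY_VALUES, &D);CHKERRQ(ierr);          /* D  = A   */
    ierr = MatScale(D, 2.0);CHKERRQ(ierr);                              /* D  = 2*A */
    ierr = MatAXPY(D, 3.0, B, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); /* D += 3*B */
    ierr = MatAXPY(D, 4.0, C, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); /* D += 4*C */
    /* ... use D (e.g. as the operator of a KSP) ... */
    ierr = MatDestroy(&D);CHKERRQ(ierr);

With DIFFERENT_NONZERO_PATTERN each MatAXPY merges the nonzero structures, which costs extra allocation; if one pattern is known to contain the others, SUBSET_NONZERO_PATTERN or SAME_NONZERO_PATTERN avoids that work.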
>> >> MatDuplicate() CREATES a new matrix while MatCopy() copies values from >> an already existing matrix to another already existing matrix. >> >> >> > >> > Thank you very much! >> > >> > Best regards, >> > >> > Fangbo >> > >> > >> > -- >> > Fangbo Wang, PhD student >> > Stochastic Geomechanics Research Group >> > Department of Civil, Structural and Environmental Engineering >> > University at Buffalo >> > Email: fangbowa at buffalo.edu >> >> > > > -- > Fangbo Wang, PhD student > Stochastic Geomechanics Research Group > Department of Civil, Structural and Environmental Engineering > University at Buffalo > Email: *fangbowa at buffalo.edu * > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Mar 6 07:43:49 2017 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 6 Mar 2017 08:43:49 -0500 Subject: [petsc-users] Problems imposing boundary conditions In-Reply-To: References: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com> Message-ID: On Mon, Mar 6, 2017 at 8:38 AM, Maximilian Hartig wrote: > Of course, please find the source as well as the mesh attached below. I > run with: > > -def_petscspace_order 2 -vel_petscspace_order 2 -snes_monitor > -snes_converged_reason -ksp_converged_reason -ksp_monitor _true_residual > -ksp_type fgmres -pc_type sor > This sounds like over-constraining a point to me. I will try and run it soon, but I have a full schedule this week. The easiest way to see if this is happening should be to print out the Section that gets made -dm_petscsection_view Thanks, Matt > Thanks, > Max > > > > > On 4 Mar 2017, at 11:34, Sander Arens wrote: > > Hmm, strange you also get the error in serial. Can you maybe send a > minimal working which demonstrates the error? > > Thanks, > Sander > > On 3 March 2017 at 23:07, Maximilian Hartig > wrote: > >> Yes Sander, your assessment is correct. I use DMPlex and specify the BC >> using DMLabel. I do however get this error also when running in serial. >> >> Thanks, >> Max >> >> On 3 Mar 2017, at 22:14, Matthew Knepley wrote: >> >> On Fri, Mar 3, 2017 at 12:56 PM, Sander Arens >> wrote: >> >>> Max, >>> >>> I'm assuming you use DMPlex for your mesh? If so, did you only specify >>> the faces in the DMLabel (and not vertices or edges). Do you get this error >>> only in parallel? >>> >>> If so, I can confirm this bug. I submitted a pull request for this >>> yesterday. >>> >> >> Yep, I saw Sander's pull request. I will get in merged in tomorrow when I >> get home to Houston. >> >> Thanks, >> >> Matt >> >> >>> On 3 March 2017 at 18:43, Lukas van de Wiel >> > wrote: >>> >>>> You have apparently preallocated the non-zeroes of you matrix, and the >>>> room was insufficient to accommodate all your equations. >>>> >>>> What happened after you tried: >>>> >>>> MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) >>>> >>>> >>>> Cheers >>>> Lukas >>>> >>>> On Fri, Mar 3, 2017 at 6:37 PM, Maximilian Hartig < >>>> imilian.hartig at gmail.com> wrote: >>>> >>>>> Hello, >>>>> >>>>> I am working on a transient structural FEM code with PETSc. I managed >>>>> to create a slow but functioning program with the use of petscFE and a TS >>>>> solver. The code runs fine until I try to restrict movement in all three >>>>> spatial directions for one face. I then get the error which is attached >>>>> below. 
>>>>> So apparently DMPlexMatSetClosure tries to write/read beyond what was >>>>> priorly allocated. I do however not call MatSeqAIJSetPreallocation myself >>>>> in the code. So I?m unsure where to start looking for the bug. In my >>>>> understanding, PETSc should know from the DM how much space to allocate. >>>>> Could you kindly give me a hint? >>>>> >>>>> Thanks, >>>>> >>>>> Max >>>>> >>>>> 0 SNES Function norm 2.508668036663e-06 >>>>> [0]PETSC ERROR: --------------------- Error Message >>>>> -------------------------------------------------------------- >>>>> [0]PETSC ERROR: Argument out of range >>>>> [0]PETSC ERROR: New nonzero at (41754,5) caused a malloc >>>>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to >>>>> turn off this check >>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/pet >>>>> sc/documentation/faq.html for trouble shooting. >>>>> [0]PETSC ERROR: Petsc Development GIT revision: v3.7.5-3223-g99077fc >>>>> GIT Date: 2017-02-28 13:41:43 -0600 >>>>> [0]PETSC ERROR: ./S3 on a arch-linux-gnu-intel named XXXXXXX by hartig >>>>> Fri Mar 3 17:55:57 2017 >>>>> [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-gnu-intel >>>>> --with-cc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicc >>>>> --with-cxx=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicpc >>>>> --with-fc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiifort >>>>> --download-ml >>>>> [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 455 in >>>>> /home/hartig/petsc/src/mat/impls/aij/seq/aij.c >>>>> [0]PETSC ERROR: #2 MatSetValues() line 1270 in >>>>> /home/hartig/petsc/src/mat/interface/matrix.c >>>>> [0]PETSC ERROR: [0]ERROR in DMPlexMatSetClosure >>>>> [0]mat for sieve point 60 >>>>> [0]mat row indices[0] = 41754 >>>>> [0]mat row indices[1] = 41755 >>>>> [0]mat row indices[2] = 41756 >>>>> [0]mat row indices[3] = 41760 >>>>> [0]mat row indices[4] = 41761 >>>>> [0]mat row indices[5] = 41762 >>>>> [0]mat row indices[6] = 41766 >>>>> [0]mat row indices[7] = -41768 >>>>> [0]mat row indices[8] = 41767 >>>>> [0]mat row indices[9] = 41771 >>>>> [0]mat row indices[10] = -41773 >>>>> [0]mat row indices[11] = 41772 >>>>> [0]mat row indices[12] = 41776 >>>>> [0]mat row indices[13] = 41777 >>>>> [0]mat row indices[14] = 41778 >>>>> [0]mat row indices[15] = 41782 >>>>> [0]mat row indices[16] = -41784 >>>>> [0]mat row indices[17] = 41783 >>>>> [0]mat row indices[18] = 261 >>>>> [0]mat row indices[19] = -263 >>>>> [0]mat row indices[20] = 262 >>>>> [0]mat row indices[21] = 24318 >>>>> [0]mat row indices[22] = 24319 >>>>> [0]mat row indices[23] = 24320 >>>>> [0]mat row indices[24] = -7 >>>>> [0]mat row indices[25] = -8 >>>>> [0]mat row indices[26] = 6 >>>>> [0]mat row indices[27] = 1630 >>>>> [0]mat row indices[28] = -1632 >>>>> [0]mat row indices[29] = 1631 >>>>> [0]mat row indices[30] = 41757 >>>>> [0]mat row indices[31] = 41758 >>>>> [0]mat row indices[32] = 41759 >>>>> [0]mat row indices[33] = 41763 >>>>> [0]mat row indices[34] = 41764 >>>>> [0]mat row indices[35] = 41765 >>>>> [0]mat row indices[36] = 41768 >>>>> [0]mat row indices[37] = 41769 >>>>> [0]mat row indices[38] = 41770 >>>>> [0]mat row indices[39] = 41773 >>>>> [0]mat row indices[40] = 41774 >>>>> [0]mat row indices[41] = 41775 >>>>> [0]mat row indices[42] = 41779 >>>>> [0]mat row indices[43] = 41780 >>>>> [0]mat row indices[44] = 41781 >>>>> [0]mat row indices[45] = 41784 >>>>> [0]mat row indices[46] = 41785 >>>>> [0]mat row indices[47] = 41786 >>>>> [0]mat row indices[48] = 263 >>>>> [0]mat row 
indices[49] = 264 >>>>> [0]mat row indices[50] = 265 >>>>> [0]mat row indices[51] = 24321 >>>>> [0]mat row indices[52] = 24322 >>>>> [0]mat row indices[53] = 24323 >>>>> [0]mat row indices[54] = 5 >>>>> [0]mat row indices[55] = 6 >>>>> [0]mat row indices[56] = 7 >>>>> [0]mat row indices[57] = 1632 >>>>> [0]mat row indices[58] = 1633 >>>>> [0]mat row indices[59] = 1634 >>>>> [0] 1.29801 0.0998428 -0.275225 1.18171 -0.0093323 0.055045 1.18146 >>>>> 0.00525527 -0.11009 -0.588378 0.264666 -0.0275225 -2.39586 -0.210511 >>>>> 0.22018 -0.621071 0.0500786 0.137613 -0.180869 -0.0974804 0.0344031 >>>>> -0.0302673 -0.09 0. -0.145175 -0.00383346 -0.00688063 0.300442 -0.00868618 >>>>> -0.0275225 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. >>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >>>>> [0] 0.0998428 0.590663 -0.0320009 0.0270594 0.360043 0.0297282 >>>>> -0.0965489 0.270936 -0.0652389 0.32647 -0.171351 -0.00845384 -0.206902 >>>>> -0.657189 0.0279137 0.0500786 -0.197561 -0.0160508 -0.12748 -0.138996 >>>>> 0.0408591 -0.06 -0.105935 0.0192308 0.00161757 -0.0361182 0.0042968 >>>>> -0.0141372 0.0855084 -0.000284088 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. >>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. >>>>> [0] -0.275225 -0.0320009 0.527521 -0.055045 0.0285918 0.234796 >>>>> -0.165135 -0.0658071 0.322754 0.0275225 -0.0207062 -0.114921 0.33027 >>>>> 0.0418706 -0.678455 0.137613 -0.0160508 -0.235826 0.0344031 0.0312437 >>>>> -0.0845583 0. 0.0288462 -0.0302673 0.00688063 0.00443884 -0.0268103 >>>>> -0.0412838 -0.000426133 0.0857668 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. >>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 >>>>> [0] 1.18171 0.0270594 -0.055045 1.29651 -0.0821157 0.275225 1.1468 >>>>> -0.0675282 0.11009 -0.637271 0.141058 -0.137613 -2.37741 -0.0649437 >>>>> -0.22018 -0.588378 0.24647 0.0275225 -0.140937 -0.00838243 0.00688063 >>>>> -0.0175533 -0.09 0. -0.16373 -0.0747355 -0.0344031 0.300255 -0.026882 >>>>> 0.0275225 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 >>>>> 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>> 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>> [0] -0.0093323 0.360043 0.0285918 -0.0821157 0.585404 -0.0263191 >>>>> -0.205724 0.149643 0.0652389 0.141058 -0.254263 0.0452109 0.011448 >>>>> -0.592598 -0.0279137 0.344666 -0.171351 -0.0207062 0.00616654 -0.0212853 >>>>> -0.0115868 -0.06 -0.0614365 -0.0192308 -0.104736 -0.0790071 -0.0335691 >>>>> -0.041431 0.084851 0.000284088 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>> [0] 0.055045 0.0297282 0.234796 0.275225 -0.0263191 0.526019 0.165135 >>>>> 0.0658071 0.288099 -0.137613 0.0452109 -0.252027 -0.33027 -0.0418706 >>>>> -0.660001 -0.0275225 -0.00845384 -0.114921 -0.00688063 -0.0117288 >>>>> -0.0225723 0. -0.0288462 -0.0175533 -0.0344031 -0.0239537 -0.0674185 >>>>> 0.0412838 0.000426133 0.085579 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 
-1.56483e-11 >>>>> [0] 1.18146 -0.0965489 -0.165135 1.1468 -0.205724 0.165135 3.70665 >>>>> 0.626591 3.1198e-14 -2.37741 0.0332522 -0.275225 -2.44501 -0.417727 >>>>> 4.66207e-17 -2.39586 -0.148706 0.275225 0.283843 0.0476669 0.0137613 >>>>> 0.00972927 0.06 0. 0.288268 0.0567649 -0.0137613 0.601523 0.0444318 >>>>> -1.2387e-17 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. >>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>> [0] 0.00525527 0.270936 -0.0658071 -0.0675282 0.149643 0.0658071 >>>>> 0.626591 1.29259 -0.02916 -0.0867478 -0.592598 -0.0413024 -0.417727 >>>>> -0.829208 6.46318e-18 -0.268706 -0.657189 0.0413024 0.03402 0.0715157 >>>>> 0.0179272 0.04 0.0340524 1.77708e-18 0.0704117 0.0870061 0.0112328 >>>>> 0.0644318 0.17325 -1.41666e-19 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>> [0] -0.11009 -0.0652389 0.322754 0.11009 0.0652389 0.288099 >>>>> 3.12405e-14 -0.02916 1.21876 -0.275225 -0.0284819 -0.660001 9.50032e-17 >>>>> 9.55728e-18 -0.727604 0.275225 0.0284819 -0.678455 0.055045 0.0279687 >>>>> 0.0250605 0. 1.71863e-18 0.00972927 -0.055045 0.00119132 0.0294863 >>>>> -1.47451e-17 -1.90582e-19 0.172172 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. >>>>> 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 >>>>> [0] -0.588378 0.32647 0.0275225 -0.637271 0.141058 -0.137613 -2.37741 >>>>> -0.0867478 -0.275225 3.68138 0.0265907 3.13395e-14 1.1468 -0.107528 0.11009 >>>>> 1.18171 0.00886356 -3.13222e-14 -1.06248 -0.175069 0.158254 0.00657075 >>>>> -0.03 0. 0.152747 -0.00526446 0.0344031 -1.50368 -0.0983732 0.0825675 >>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>> [0] 0.264666 -0.171351 -0.0207062 0.141058 -0.254263 0.0452109 >>>>> 0.0332522 -0.592598 -0.0284819 0.0265907 1.20415 -0.0932626 -0.165724 >>>>> 0.149643 0.0524184 0.00886356 0.360043 0.0419805 -0.131422 -0.326529 >>>>> 0.0132913 -0.02 0.0229976 -0.00641026 -0.0152645 0.0405681 -0.00489246 >>>>> -0.14202 -0.432665 0.000852265 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>> 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >>>>> [0] -0.0275225 -0.00845384 -0.114921 -0.137613 0.0452109 -0.252027 >>>>> -0.275225 -0.0413024 -0.660001 3.13785e-14 -0.0932626 1.19349 0.165135 >>>>> 0.0786276 0.288099 -3.12866e-14 0.0163395 0.234796 0.116971 0.0128652 >>>>> -0.322147 0. -0.00961538 0.00657075 0.0344031 -0.00168733 0.0564359 >>>>> 0.123851 0.0012784 -0.430298 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>> 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>> [0] -2.39586 -0.206902 0.33027 -2.37741 0.011448 -0.33027 -2.44501 >>>>> -0.417727 6.38053e-17 1.1468 -0.165724 0.165135 4.88671 0.435454 0. 1.18146 >>>>> -0.0565489 -0.165135 0.307839 0.0658628 -0.0412838 -0.00807774 0.18 0. >>>>> 0.303413 0.038569 0.0412838 -0.599871 0.115568 0. 4.17288e-11 0. 0. >>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 
>>>>> 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>> -1.04322e-11 0. 0. >>>>> [0] -0.210511 -0.657189 0.0418706 -0.0649437 -0.592598 -0.0418706 >>>>> -0.417727 -0.829208 6.30468e-18 -0.107528 0.149643 0.0786276 0.435454 >>>>> 1.64686 0. -0.0347447 0.270936 -0.0786276 0.0613138 0.111396 -0.0100415 >>>>> 0.12 -0.0282721 0. 0.043118 0.0959058 0.0100415 0.175568 -0.167469 0. 0. >>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>> -1.56483e-11 0. 0. -1.04322e-11 0. >>>>> [0] 0.22018 0.0279137 -0.678455 -0.22018 -0.0279137 -0.660001 >>>>> 4.70408e-17 7.53383e-18 -0.727604 0.11009 0.0524184 0.288099 0. 0. 1.4519 >>>>> -0.11009 -0.0524184 0.322754 -0.0275225 -0.00669434 0.0931634 0. 0. >>>>> -0.00807774 0.0275225 0.00669434 0.0887375 0. 0. -0.17052 0. 0. 4.17288e-11 >>>>> 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. >>>>> 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>> 0. -1.04322e-11 >>>>> [0] -0.621071 0.0500786 0.137613 -0.588378 0.344666 -0.0275225 >>>>> -2.39586 -0.268706 0.275225 1.18171 0.00886356 -3.12954e-14 1.18146 >>>>> -0.0347447 -0.11009 3.64748 0.0265907 3.12693e-14 0.152935 0.0174804 >>>>> -0.0344031 0.00233276 -0.03 0. -1.0575 -0.0704425 -0.158254 -1.50311 >>>>> -0.0437857 -0.0825675 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. >>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>> [0] 0.0500786 -0.197561 -0.0160508 0.24647 -0.171351 -0.00845384 >>>>> -0.148706 -0.657189 0.0284819 0.00886356 0.360043 0.0163395 -0.0565489 >>>>> 0.270936 -0.0524184 0.0265907 1.08549 0.0349425 0.00748035 0.0412255 >>>>> -0.00239755 -0.02 0.00816465 0.00641026 -0.0540894 -0.309066 -0.00600133 >>>>> -0.0601388 -0.430693 -0.000852265 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. >>>>> [0] 0.137613 -0.0160508 -0.235826 0.0275225 -0.0207062 -0.114921 >>>>> 0.275225 0.0413024 -0.678455 -3.13299e-14 0.0419805 0.234796 -0.165135 >>>>> -0.0786276 0.322754 3.12753e-14 0.0349425 1.15959 -0.0344031 -0.00560268 >>>>> 0.0566238 0. 0.00961538 0.00233276 -0.116971 -0.00557519 -0.317157 >>>>> -0.123851 -0.0012784 -0.429734 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 >>>>> [0] -0.180869 -0.12748 0.0344031 -0.140937 0.00616654 -0.00688063 >>>>> 0.283843 0.03402 0.055045 -1.06248 -0.131422 0.116971 0.307839 0.0613138 >>>>> -0.0275225 0.152935 0.00748035 -0.0344031 0.479756 0.112441 -0.103209 >>>>> 0.00698363 0.03 0. -0.14792 -0.0238335 -0.00688063 0.300855 0.0313138 >>>>> -0.0275225 -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 1.56483e-11 0. 0. >>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>> [0] -0.0974804 -0.138996 0.0312437 -0.00838243 -0.0212853 -0.0117288 >>>>> 0.0476669 0.0715157 0.0279687 -0.175069 -0.326529 0.0128652 0.0658628 >>>>> 0.111396 -0.00669434 0.0174804 0.0412255 -0.00560268 0.112441 0.197005 >>>>> -0.0360388 0.02 0.0244427 -0.00641026 -0.0283824 -0.045728 -0.00531859 >>>>> 0.0458628 0.0869535 -0.000284088 0. 
-1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >>>>> 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. >>>>> [0] 0.0344031 0.0408591 -0.0845583 0.00688063 -0.0115868 -0.0225723 >>>>> 0.0137613 0.0179272 0.0250605 0.158254 0.0132913 -0.322147 -0.0412838 >>>>> -0.0100415 0.0931634 -0.0344031 -0.00239755 0.0566238 -0.103209 -0.0360388 >>>>> 0.190822 0. -0.00961538 0.00698363 0.00688063 -0.00197142 -0.029556 >>>>> -0.0412838 -0.000426133 0.0861797 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>> 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. >>>>> 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>>> [0] -0.0302673 -0.06 0. -0.0175533 -0.06 0. 0.00972927 0.04 0. >>>>> 0.00657075 -0.02 0. -0.00807774 0.12 0. 0.00233276 -0.02 0. 0.00698363 0.02 >>>>> 0. 0.0279492 0. 0. 0.00274564 0.02 0. -0.000412882 -0.04 0. -1.04322e-11 0. >>>>> 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >>>>> 0. -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>> 2.60805e-12 0. 0. >>>>> [0] -0.09 -0.105935 0.0288462 -0.09 -0.0614365 -0.0288462 0.06 >>>>> 0.0340524 3.0201e-18 -0.03 0.0229976 -0.00961538 0.18 -0.0282721 0. -0.03 >>>>> 0.00816465 0.00961538 0.03 0.0244427 -0.00961538 0. 0.097822 0. 0.03 >>>>> 0.00960973 0.00961538 -0.06 -0.00144509 0. 0. -1.04322e-11 0. 0. >>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>> -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>> 2.60805e-12 0. >>>>> [0] 0. 0.0192308 -0.0302673 0. -0.0192308 -0.0175533 0. 1.8315e-18 >>>>> 0.00972927 0. -0.00641026 0.00657075 0. 0. -0.00807774 0. 0.00641026 >>>>> 0.00233276 0. -0.00641026 0.00698363 0. 0. 0.0279492 0. 0.00641026 >>>>> 0.00274564 0. 0. -0.000412882 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>> 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>>> [0] -0.145175 0.00161757 0.00688063 -0.16373 -0.104736 -0.0344031 >>>>> 0.288268 0.0704117 -0.055045 0.152747 -0.0152645 0.0344031 0.303413 >>>>> 0.043118 0.0275225 -1.0575 -0.0540894 -0.116971 -0.14792 -0.0283824 >>>>> 0.00688063 0.00274564 0.03 0. 0.466478 0.0442066 0.103209 0.300667 0.013118 >>>>> 0.0275225 -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. >>>>> 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>> [0] -0.00383346 -0.0361182 0.00443884 -0.0747355 -0.0790071 -0.0239537 >>>>> 0.0567649 0.0870061 0.00119132 -0.00526446 0.0405681 -0.00168733 0.038569 >>>>> 0.0959058 0.00669434 -0.0704425 -0.309066 -0.00557519 -0.0238335 -0.045728 >>>>> -0.00197142 0.02 0.00960973 0.00641026 0.0442066 0.150534 0.0141688 >>>>> 0.018569 0.0862961 0.000284088 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. >>>>> [0] -0.00688063 0.0042968 -0.0268103 -0.0344031 -0.0335691 -0.0674185 >>>>> -0.0137613 0.0112328 0.0294863 0.0344031 -0.00489246 0.0564359 0.0412838 >>>>> 0.0100415 0.0887375 -0.158254 -0.00600133 -0.317157 -0.00688063 -0.00531859 >>>>> -0.029556 0. 0.00961538 0.00274564 0.103209 0.0141688 0.177545 0.0412838 >>>>> 0.000426133 0.0859919 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 
>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 >>>>> [0] 0.300442 -0.0141372 -0.0412838 0.300255 -0.041431 0.0412838 >>>>> 0.601523 0.0644318 -1.72388e-17 -1.50368 -0.14202 0.123851 -0.599871 >>>>> 0.175568 0. -1.50311 -0.0601388 -0.123851 0.300855 0.0458628 -0.0412838 >>>>> -0.000412882 -0.06 0. 0.300667 0.018569 0.0412838 1.80333 0.0132953 0. >>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>> 2.60805e-12 0. 0. 1.56483e-11 0. 0. >>>>> [0] -0.00868618 0.0855084 -0.000426133 -0.026882 0.084851 0.000426133 >>>>> 0.0444318 0.17325 -1.17738e-19 -0.0983732 -0.432665 0.0012784 0.115568 >>>>> -0.167469 0. -0.0437857 -0.430693 -0.0012784 0.0313138 0.0869535 >>>>> -0.000426133 -0.04 -0.00144509 0. 0.013118 0.0862961 0.000426133 0.0132953 >>>>> 0.515413 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. >>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. >>>>> [0] -0.0275225 -0.000284088 0.0857668 0.0275225 0.000284088 0.085579 >>>>> -1.41488e-17 -8.91502e-20 0.172172 0.0825675 0.000852265 -0.430298 0. 0. >>>>> -0.17052 -0.0825675 -0.000852265 -0.429734 -0.0275225 -0.000284088 >>>>> 0.0861797 0. 0. -0.000412882 0.0275225 0.000284088 0.0859919 0. 0. 0.515276 >>>>> 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>> 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>>> 0. 0. 2.60805e-12 0. 0. 1.56483e-11 >>>>> [0] -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. >>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. >>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>> 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>> -9.96708e-10 0. 0. >>>>> [0] 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. >>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. >>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>> 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>> -9.96708e-10 0. >>>>> [0] 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. >>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. >>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>> 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>> -9.96708e-10 >>>>> [0] -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. >>>>> -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>> 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. >>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>> -9.96708e-10 0. 0. >>>>> [0] 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. >>>>> -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 
2.65789e-09 0. 0. >>>>> 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. >>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>> -9.96708e-10 0. >>>>> [0] 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. >>>>> -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>> 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. >>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>> -9.96708e-10 >>>>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. >>>>> -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>> 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. >>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>> -9.96708e-10 0. 0. >>>>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. >>>>> -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>> 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. >>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>> -9.96708e-10 0. >>>>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. >>>>> -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>> 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. >>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>> -9.96708e-10 >>>>> [0] -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. >>>>> -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>> 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. >>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>> -6.64472e-10 0. 0. >>>>> [0] 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. >>>>> -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>> 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. >>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>> -6.64472e-10 0. >>>>> [0] 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. >>>>> -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>> 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. >>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>> -6.64472e-10 >>>>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. >>>>> -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>> 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. >>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>> -6.64472e-10 0. 0. >>>>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 
>>>>> -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>> 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. >>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>> -6.64472e-10 0. >>>>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. >>>>> -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>> 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. >>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>> -6.64472e-10 >>>>> [0] -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. >>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. >>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>> 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>> -6.64472e-10 0. 0. >>>>> [0] 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. >>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. >>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>> 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>> -6.64472e-10 0. >>>>> [0] 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. >>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. >>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>> 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>> -6.64472e-10 >>>>> [0] 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 >>>>> 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. -3.32236e-07 >>>>> 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -9.96708e-10 >>>>> 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 >>>>> 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. >>>>> 0. >>>>> [0] 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. >>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. >>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>> -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>> -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>> 1.66118e-10 0. >>>>> [0] 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. >>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. >>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>> -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>> -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>> 1.66118e-10 >>>>> [0] 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 >>>>> 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. -1.99342e-06 >>>>> 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -6.64472e-10 >>>>> 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 >>>>> 0. 0. 
1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. >>>>> 0. >>>>> [0] 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. >>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. >>>>> -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>> -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>> -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>>>> 1.66118e-10 0. >>>>> [0] 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. >>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. >>>>> -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>> -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>> -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>>>> 1.66118e-10 >>>>> [0] 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 >>>>> 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 >>>>> 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. -6.64472e-10 >>>>> 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 >>>>> 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. >>>>> 0. >>>>> [0] 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. >>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>> -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. >>>>> -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. >>>>> 1.66118e-10 0. >>>>> [0] 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. >>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>> -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. >>>>> -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. >>>>> 1.66118e-10 >>>>> [0] 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 >>>>> 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 >>>>> 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. -9.96708e-10 >>>>> 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 >>>>> 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. >>>>> 0. >>>>> [0] 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. >>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. >>>>> -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>> 9.96708e-10 0. >>>>> [0] 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. >>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. >>>>> -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 
>>>>> 9.96708e-10 >>>>> [0]PETSC ERROR: #3 DMPlexMatSetClosure() line 5480 in >>>>> /home/hartig/petsc/src/dm/impls/plex/plex.c >>>>> [0]PETSC ERROR: #4 DMPlexComputeJacobian_Internal() line 2301 in >>>>> /home/hartig/petsc/src/snes/utils/dmplexsnes.c >>>>> [0]PETSC ERROR: #5 DMPlexTSComputeIJacobianFEM() line 233 in >>>>> /home/hartig/petsc/src/ts/utils/dmplexts.c >>>>> [0]PETSC ERROR: #6 TSComputeIJacobian_DMLocal() line 131 in >>>>> /home/hartig/petsc/src/ts/utils/dmlocalts.c >>>>> [0]PETSC ERROR: #7 TSComputeIJacobian() line 882 in >>>>> /home/hartig/petsc/src/ts/interface/ts.c >>>>> [0]PETSC ERROR: #8 SNESTSFormJacobian_Theta() line 515 in >>>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>> [0]PETSC ERROR: #9 SNESTSFormJacobian() line 5044 in >>>>> /home/hartig/petsc/src/ts/interface/ts.c >>>>> [0]PETSC ERROR: #10 SNESComputeJacobian() line 2276 in >>>>> /home/hartig/petsc/src/snes/interface/snes.c >>>>> [0]PETSC ERROR: #11 SNESSolve_NEWTONLS() line 222 in >>>>> /home/hartig/petsc/src/snes/impls/ls/ls.c >>>>> [0]PETSC ERROR: #12 SNESSolve() line 3967 in >>>>> /home/hartig/petsc/src/snes/interface/snes.c >>>>> [0]PETSC ERROR: #13 TS_SNESSolve() line 171 in >>>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>> [0]PETSC ERROR: #14 TSStep_Theta() line 211 in >>>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>> [0]PETSC ERROR: #15 TSStep() line 3809 in >>>>> /home/hartig/petsc/src/ts/interface/ts.c >>>>> >>>>> >>>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> >> > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From imilian.hartig at gmail.com Mon Mar 6 07:38:29 2017 From: imilian.hartig at gmail.com (Maximilian Hartig) Date: Mon, 6 Mar 2017 14:38:29 +0100 Subject: [petsc-users] Problems imposing boundary conditions In-Reply-To: References: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com> Message-ID: Of course, please find the source as well as the mesh attached below. I run with: -def_petscspace_order 2 -vel_petscspace_order 2 -snes_monitor -snes_converged_reason -ksp_converged_reason -ksp_monitor _true_residual -ksp_type fgmres -pc_type sor Thanks, Max > On 4 Mar 2017, at 11:34, Sander Arens wrote: > > Hmm, strange you also get the error in serial. Can you maybe send a minimal working which demonstrates the error? > > Thanks, > Sander > > On 3 March 2017 at 23:07, Maximilian Hartig > wrote: > Yes Sander, your assessment is correct. I use DMPlex and specify the BC using DMLabel. I do however get this error also when running in serial. > > Thanks, > Max > >> On 3 Mar 2017, at 22:14, Matthew Knepley > wrote: >> >> On Fri, Mar 3, 2017 at 12:56 PM, Sander Arens > wrote: >> Max, >> >> I'm assuming you use DMPlex for your mesh? If so, did you only specify the faces in the DMLabel (and not vertices or edges). Do you get this error only in parallel? >> >> If so, I can confirm this bug. I submitted a pull request for this yesterday. >> >> Yep, I saw Sander's pull request. I will get in merged in tomorrow when I get home to Houston. 
>> >> Thanks, >> >> Matt >> >> On 3 March 2017 at 18:43, Lukas van de Wiel > wrote: >> You have apparently preallocated the non-zeroes of you matrix, and the room was insufficient to accommodate all your equations. >> >> What happened after you tried: >> >> MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) >> >> >> Cheers >> Lukas >> >> On Fri, Mar 3, 2017 at 6:37 PM, Maximilian Hartig > wrote: >> Hello, >> >> I am working on a transient structural FEM code with PETSc. I managed to create a slow but functioning program with the use of petscFE and a TS solver. The code runs fine until I try to restrict movement in all three spatial directions for one face. I then get the error which is attached below. >> So apparently DMPlexMatSetClosure tries to write/read beyond what was priorly allocated. I do however not call MatSeqAIJSetPreallocation myself in the code. So I?m unsure where to start looking for the bug. In my understanding, PETSc should know from the DM how much space to allocate. >> Could you kindly give me a hint? >> >> Thanks, >> >> Max >> >> 0 SNES Function norm 2.508668036663e-06 >> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >> [0]PETSC ERROR: Argument out of range >> [0]PETSC ERROR: New nonzero at (41754,5) caused a malloc >> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn off this check >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. >> [0]PETSC ERROR: Petsc Development GIT revision: v3.7.5-3223-g99077fc GIT Date: 2017-02-28 13:41:43 -0600 >> [0]PETSC ERROR: ./S3 on a arch-linux-gnu-intel named XXXXXXX by hartig Fri Mar 3 17:55:57 2017 >> [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-gnu-intel --with-cc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicc --with-cxx=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicpc --with-fc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiifort --download-ml >> [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 455 in /home/hartig/petsc/src/mat/impls/aij/seq/aij.c >> [0]PETSC ERROR: #2 MatSetValues() line 1270 in /home/hartig/petsc/src/mat/interface/matrix.c >> [0]PETSC ERROR: [0]ERROR in DMPlexMatSetClosure >> [0]mat for sieve point 60 >> [0]mat row indices[0] = 41754 >> [0]mat row indices[1] = 41755 >> [0]mat row indices[2] = 41756 >> [0]mat row indices[3] = 41760 >> [0]mat row indices[4] = 41761 >> [0]mat row indices[5] = 41762 >> [0]mat row indices[6] = 41766 >> [0]mat row indices[7] = -41768 >> [0]mat row indices[8] = 41767 >> [0]mat row indices[9] = 41771 >> [0]mat row indices[10] = -41773 >> [0]mat row indices[11] = 41772 >> [0]mat row indices[12] = 41776 >> [0]mat row indices[13] = 41777 >> [0]mat row indices[14] = 41778 >> [0]mat row indices[15] = 41782 >> [0]mat row indices[16] = -41784 >> [0]mat row indices[17] = 41783 >> [0]mat row indices[18] = 261 >> [0]mat row indices[19] = -263 >> [0]mat row indices[20] = 262 >> [0]mat row indices[21] = 24318 >> [0]mat row indices[22] = 24319 >> [0]mat row indices[23] = 24320 >> [0]mat row indices[24] = -7 >> [0]mat row indices[25] = -8 >> [0]mat row indices[26] = 6 >> [0]mat row indices[27] = 1630 >> [0]mat row indices[28] = -1632 >> [0]mat row indices[29] = 1631 >> [0]mat row indices[30] = 41757 >> [0]mat row indices[31] = 41758 >> [0]mat row indices[32] = 41759 >> [0]mat row indices[33] = 41763 >> [0]mat row indices[34] = 41764 >> [0]mat row indices[35] = 41765 >> [0]mat row indices[36] = 41768 >> 
[0]mat row indices[37] = 41769 >> [0]mat row indices[38] = 41770 >> [0]mat row indices[39] = 41773 >> [0]mat row indices[40] = 41774 >> [0]mat row indices[41] = 41775 >> [0]mat row indices[42] = 41779 >> [0]mat row indices[43] = 41780 >> [0]mat row indices[44] = 41781 >> [0]mat row indices[45] = 41784 >> [0]mat row indices[46] = 41785 >> [0]mat row indices[47] = 41786 >> [0]mat row indices[48] = 263 >> [0]mat row indices[49] = 264 >> [0]mat row indices[50] = 265 >> [0]mat row indices[51] = 24321 >> [0]mat row indices[52] = 24322 >> [0]mat row indices[53] = 24323 >> [0]mat row indices[54] = 5 >> [0]mat row indices[55] = 6 >> [0]mat row indices[56] = 7 >> [0]mat row indices[57] = 1632 >> [0]mat row indices[58] = 1633 >> [0]mat row indices[59] = 1634 >> [0] 1.29801 0.0998428 -0.275225 1.18171 -0.0093323 0.055045 1.18146 0.00525527 -0.11009 -0.588378 0.264666 -0.0275225 -2.39586 -0.210511 0.22018 -0.621071 0.0500786 0.137613 -0.180869 -0.0974804 0.0344031 -0.0302673 -0.09 0. -0.145175 -0.00383346 -0.00688063 0.300442 -0.00868618 -0.0275225 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >> [0] 0.0998428 0.590663 -0.0320009 0.0270594 0.360043 0.0297282 -0.0965489 0.270936 -0.0652389 0.32647 -0.171351 -0.00845384 -0.206902 -0.657189 0.0279137 0.0500786 -0.197561 -0.0160508 -0.12748 -0.138996 0.0408591 -0.06 -0.105935 0.0192308 0.00161757 -0.0361182 0.0042968 -0.0141372 0.0855084 -0.000284088 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. >> [0] -0.275225 -0.0320009 0.527521 -0.055045 0.0285918 0.234796 -0.165135 -0.0658071 0.322754 0.0275225 -0.0207062 -0.114921 0.33027 0.0418706 -0.678455 0.137613 -0.0160508 -0.235826 0.0344031 0.0312437 -0.0845583 0. 0.0288462 -0.0302673 0.00688063 0.00443884 -0.0268103 -0.0412838 -0.000426133 0.0857668 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 >> [0] 1.18171 0.0270594 -0.055045 1.29651 -0.0821157 0.275225 1.1468 -0.0675282 0.11009 -0.637271 0.141058 -0.137613 -2.37741 -0.0649437 -0.22018 -0.588378 0.24647 0.0275225 -0.140937 -0.00838243 0.00688063 -0.0175533 -0.09 0. -0.16373 -0.0747355 -0.0344031 0.300255 -0.026882 0.0275225 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >> [0] -0.0093323 0.360043 0.0285918 -0.0821157 0.585404 -0.0263191 -0.205724 0.149643 0.0652389 0.141058 -0.254263 0.0452109 0.011448 -0.592598 -0.0279137 0.344666 -0.171351 -0.0207062 0.00616654 -0.0212853 -0.0115868 -0.06 -0.0614365 -0.0192308 -0.104736 -0.0790071 -0.0335691 -0.041431 0.084851 0.000284088 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >> [0] 0.055045 0.0297282 0.234796 0.275225 -0.0263191 0.526019 0.165135 0.0658071 0.288099 -0.137613 0.0452109 -0.252027 -0.33027 -0.0418706 -0.660001 -0.0275225 -0.00845384 -0.114921 -0.00688063 -0.0117288 -0.0225723 0. -0.0288462 -0.0175533 -0.0344031 -0.0239537 -0.0674185 0.0412838 0.000426133 0.085579 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 
2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 >> [0] 1.18146 -0.0965489 -0.165135 1.1468 -0.205724 0.165135 3.70665 0.626591 3.1198e-14 -2.37741 0.0332522 -0.275225 -2.44501 -0.417727 4.66207e-17 -2.39586 -0.148706 0.275225 0.283843 0.0476669 0.0137613 0.00972927 0.06 0. 0.288268 0.0567649 -0.0137613 0.601523 0.0444318 -1.2387e-17 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >> [0] 0.00525527 0.270936 -0.0658071 -0.0675282 0.149643 0.0658071 0.626591 1.29259 -0.02916 -0.0867478 -0.592598 -0.0413024 -0.417727 -0.829208 6.46318e-18 -0.268706 -0.657189 0.0413024 0.03402 0.0715157 0.0179272 0.04 0.0340524 1.77708e-18 0.0704117 0.0870061 0.0112328 0.0644318 0.17325 -1.41666e-19 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >> [0] -0.11009 -0.0652389 0.322754 0.11009 0.0652389 0.288099 3.12405e-14 -0.02916 1.21876 -0.275225 -0.0284819 -0.660001 9.50032e-17 9.55728e-18 -0.727604 0.275225 0.0284819 -0.678455 0.055045 0.0279687 0.0250605 0. 1.71863e-18 0.00972927 -0.055045 0.00119132 0.0294863 -1.47451e-17 -1.90582e-19 0.172172 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 >> [0] -0.588378 0.32647 0.0275225 -0.637271 0.141058 -0.137613 -2.37741 -0.0867478 -0.275225 3.68138 0.0265907 3.13395e-14 1.1468 -0.107528 0.11009 1.18171 0.00886356 -3.13222e-14 -1.06248 -0.175069 0.158254 0.00657075 -0.03 0. 0.152747 -0.00526446 0.0344031 -1.50368 -0.0983732 0.0825675 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >> [0] 0.264666 -0.171351 -0.0207062 0.141058 -0.254263 0.0452109 0.0332522 -0.592598 -0.0284819 0.0265907 1.20415 -0.0932626 -0.165724 0.149643 0.0524184 0.00886356 0.360043 0.0419805 -0.131422 -0.326529 0.0132913 -0.02 0.0229976 -0.00641026 -0.0152645 0.0405681 -0.00489246 -0.14202 -0.432665 0.000852265 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >> [0] -0.0275225 -0.00845384 -0.114921 -0.137613 0.0452109 -0.252027 -0.275225 -0.0413024 -0.660001 3.13785e-14 -0.0932626 1.19349 0.165135 0.0786276 0.288099 -3.12866e-14 0.0163395 0.234796 0.116971 0.0128652 -0.322147 0. -0.00961538 0.00657075 0.0344031 -0.00168733 0.0564359 0.123851 0.0012784 -0.430298 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >> [0] -2.39586 -0.206902 0.33027 -2.37741 0.011448 -0.33027 -2.44501 -0.417727 6.38053e-17 1.1468 -0.165724 0.165135 4.88671 0.435454 0. 1.18146 -0.0565489 -0.165135 0.307839 0.0658628 -0.0412838 -0.00807774 0.18 0. 0.303413 0.038569 0.0412838 -0.599871 0.115568 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 
>> [0] -0.210511 -0.657189 0.0418706 -0.0649437 -0.592598 -0.0418706 -0.417727 -0.829208 6.30468e-18 -0.107528 0.149643 0.0786276 0.435454 1.64686 0. -0.0347447 0.270936 -0.0786276 0.0613138 0.111396 -0.0100415 0.12 -0.0282721 0. 0.043118 0.0959058 0.0100415 0.175568 -0.167469 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >> [0] 0.22018 0.0279137 -0.678455 -0.22018 -0.0279137 -0.660001 4.70408e-17 7.53383e-18 -0.727604 0.11009 0.0524184 0.288099 0. 0. 1.4519 -0.11009 -0.0524184 0.322754 -0.0275225 -0.00669434 0.0931634 0. 0. -0.00807774 0.0275225 0.00669434 0.0887375 0. 0. -0.17052 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >> [0] -0.621071 0.0500786 0.137613 -0.588378 0.344666 -0.0275225 -2.39586 -0.268706 0.275225 1.18171 0.00886356 -3.12954e-14 1.18146 -0.0347447 -0.11009 3.64748 0.0265907 3.12693e-14 0.152935 0.0174804 -0.0344031 0.00233276 -0.03 0. -1.0575 -0.0704425 -0.158254 -1.50311 -0.0437857 -0.0825675 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >> [0] 0.0500786 -0.197561 -0.0160508 0.24647 -0.171351 -0.00845384 -0.148706 -0.657189 0.0284819 0.00886356 0.360043 0.0163395 -0.0565489 0.270936 -0.0524184 0.0265907 1.08549 0.0349425 0.00748035 0.0412255 -0.00239755 -0.02 0.00816465 0.00641026 -0.0540894 -0.309066 -0.00600133 -0.0601388 -0.430693 -0.000852265 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. >> [0] 0.137613 -0.0160508 -0.235826 0.0275225 -0.0207062 -0.114921 0.275225 0.0413024 -0.678455 -3.13299e-14 0.0419805 0.234796 -0.165135 -0.0786276 0.322754 3.12753e-14 0.0349425 1.15959 -0.0344031 -0.00560268 0.0566238 0. 0.00961538 0.00233276 -0.116971 -0.00557519 -0.317157 -0.123851 -0.0012784 -0.429734 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 >> [0] -0.180869 -0.12748 0.0344031 -0.140937 0.00616654 -0.00688063 0.283843 0.03402 0.055045 -1.06248 -0.131422 0.116971 0.307839 0.0613138 -0.0275225 0.152935 0.00748035 -0.0344031 0.479756 0.112441 -0.103209 0.00698363 0.03 0. -0.14792 -0.0238335 -0.00688063 0.300855 0.0313138 -0.0275225 -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >> [0] -0.0974804 -0.138996 0.0312437 -0.00838243 -0.0212853 -0.0117288 0.0476669 0.0715157 0.0279687 -0.175069 -0.326529 0.0128652 0.0658628 0.111396 -0.00669434 0.0174804 0.0412255 -0.00560268 0.112441 0.197005 -0.0360388 0.02 0.0244427 -0.00641026 -0.0283824 -0.045728 -0.00531859 0.0458628 0.0869535 -0.000284088 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 
>> [remaining rows of the dumped element matrix omitted]
>> [0]PETSC ERROR: #3 DMPlexMatSetClosure() line 5480 in /home/hartig/petsc/src/dm/impls/plex/plex.c
>> [0]PETSC ERROR: #4 DMPlexComputeJacobian_Internal() line 2301 in /home/hartig/petsc/src/snes/utils/dmplexsnes.c
>> [0]PETSC ERROR: #5 DMPlexTSComputeIJacobianFEM() line 233 in /home/hartig/petsc/src/ts/utils/dmplexts.c
>> [0]PETSC ERROR: #6 TSComputeIJacobian_DMLocal() line 131 in /home/hartig/petsc/src/ts/utils/dmlocalts.c
>> [0]PETSC ERROR: #7 TSComputeIJacobian() line 882 in /home/hartig/petsc/src/ts/interface/ts.c
>> [0]PETSC ERROR: #8 SNESTSFormJacobian_Theta() line 515 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c
>> [0]PETSC ERROR: #9 SNESTSFormJacobian() line 5044 in /home/hartig/petsc/src/ts/interface/ts.c
>> [0]PETSC ERROR: #10 SNESComputeJacobian() line 2276 in /home/hartig/petsc/src/snes/interface/snes.c
>> [0]PETSC ERROR: #11 SNESSolve_NEWTONLS() line 222 in /home/hartig/petsc/src/snes/impls/ls/ls.c
>> [0]PETSC ERROR: #12 SNESSolve() line 3967 in /home/hartig/petsc/src/snes/interface/snes.c
>> [0]PETSC ERROR: #13 TS_SNESSolve() line 171 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c
>> [0]PETSC ERROR: #14 TSStep_Theta() line 211 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c
>> [0]PETSC ERROR: #15 TSStep() line 3809 in /home/hartig/petsc/src/ts/interface/ts.c
>>
>>
>> --
>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>> -- Norbert Wiener
>
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed... 
Name: miniFEM.c Type: application/octet-stream Size: 20178 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nakamura.msh Type: application/octet-stream Size: 1305081 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From fande.kong at inl.gov Mon Mar 6 15:48:00 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Mon, 6 Mar 2017 14:48:00 -0700 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? Message-ID: Hi All, I am solving a nonlinear system whose Jacobian matrix has a block structure. More precisely, there is a mesh, and for each vertex there are 11 variables associated with it. I am using BAIJ. I thought block ILU(k) should be more efficient than the point-wise ILU(k). After some numerical experiments, I found that the block ILU(K) is much slower than the point-wise version. Any thoughts? Fande, -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.sanan at gmail.com Mon Mar 6 16:27:21 2017 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Mon, 6 Mar 2017 14:27:21 -0800 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: Message-ID: On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande wrote: > Hi All, > > I am solving a nonlinear system whose Jacobian matrix has a block structure. > More precisely, there is a mesh, and for each vertex there are 11 variables > associated with it. I am using BAIJ. > > I thought block ILU(k) should be more efficient than the point-wise ILU(k). > After some numerical experiments, I found that the block ILU(K) is much > slower than the point-wise version. Do you mean that it takes more iterations to converge, or that the time per iteration is greater, or both? > > Any thoughts? > > Fande, From fande.kong at inl.gov Mon Mar 6 16:32:05 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Mon, 6 Mar 2017 15:32:05 -0700 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: Message-ID: On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan wrote: > On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande wrote: > > Hi All, > > > > I am solving a nonlinear system whose Jacobian matrix has a block > structure. > > More precisely, there is a mesh, and for each vertex there are 11 > variables > > associated with it. I am using BAIJ. > > > > I thought block ILU(k) should be more efficient than the point-wise > ILU(k). > > After some numerical experiments, I found that the block ILU(K) is much > > slower than the point-wise version. > Do you mean that it takes more iterations to converge, or that the > time per iteration is greater, or both? > The number of iterations is very similar, but the timer per iteration is greater. > > > > Any thoughts? > > > > Fande, > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lvella at gmail.com Mon Mar 6 17:05:17 2017 From: lvella at gmail.com (Lucas Clemente Vella) Date: Mon, 6 Mar 2017 20:05:17 -0300 Subject: [petsc-users] How to define blocks for PCFIELDSPLIT? 
In-Reply-To: <683306C3-ED39-4B94-8150-ECB7AF8DC06C@mcs.anl.gov> References: <683306C3-ED39-4B94-8150-ECB7AF8DC06C@mcs.anl.gov> Message-ID: I tried to split my matrix (attached drawing of the non-null elements) by calling PCFieldSplitSetIS() twice: PCFieldSplitSetIS(pc, "p", zero_to_n); PCFieldSplitSetIS(pc, "u", n_to_m); But upon running with options: -ksp_type preonly -pc_type fieldsplit I get the following error: [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Petsc has generated inconsistent data [0]PETSC ERROR: Unhandled case, must have at least two fields, not 1 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.7.3, Jul, 24, 2016 [0]PETSC ERROR: /home/lvella/src/higtree/examples/read-amr-solve-ns-cfl-example-2d on a x86_64-linux-gnu-real named lvella-workstation by lvella Mon Mar 6 20:01:47 2017 [0]PETSC ERROR: Configure options --build=x86_64-linux-gnu --prefix=/usr --includedir=${prefix}/include --mandir=${prefix}/share/man --infodir=${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --with-silent-rules=0 --libdir=${prefix}/lib/x86_64-linux-gnu --libexecdir=${prefix}/lib/x86_64-linux-gnu --with-maintainer-mode=0 --with-dependency-tracking=0 --with-debugging=0 --shared-library-extension=_real --with-hypre=1 --with-hypre-dir=/usr --with-clanguage=C++ --with-shared-libraries --with-pic=1 --useThreads=0 --with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi --with-blas-lib=-lblas --with-lapack-lib=-llapack --with-blacs=1 --with-blacs-lib="-lblacsCinit-openmpi -lblacs-openmpi" --with-scalapack=1 --with-scalapack-lib=-lscalapack-openmpi --with-mumps=1 --with-mumps-include="[]" --with-mumps-lib="-ldmumps -lzmumps -lsmumps -lcmumps -lmumps_common -lpord" --with-suitesparse=1 --with-suitesparse-include=/usr/include/suitesparse --with-suitesparse-lib="-lumfpack -lamd -lcholmod -lklu" --with-spooles=1 --with-spooles-include=/usr/include/spooles --with-spooles-lib=-lspooles --with-ptscotch=1 --with-ptscotch-include=/usr/include/scotch --with-ptscotch-lib="-lptesmumps -lptscotch -lptscotcherr" --with-fftw=1 --with-fftw-include="[]" --with-fftw-lib="-lfftw3 -lfftw3_mpi" --with-superlu=0 --CXX_LINKER_FLAGS=-Wl,--no-as-needed --prefix=/usr/lib/petscdir/3.7.3/x86_64-linux-gnu-real PETSC_DIR=/build/petsc-fA70UI/petsc-3.7.3.dfsg1 --PETSC_ARCH=x86_64-linux-gnu-real CFLAGS="-g -O2 -fdebug-prefix-map=/build/petsc-fA70UI/petsc-3.7.3.dfsg1=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC" CXXFLAGS="-g -O2 -fdebug-prefix-map=/build/petsc-fA70UI/petsc-3.7.3.dfsg1=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC" FCFLAGS="-g -O2 -fdebug-prefix-map=/build/petsc-fA70UI/petsc-3.7.3.dfsg1=. -fstack-protector-strong -fPIC" FFLAGS="-g -O2 -fdebug-prefix-map=/build/petsc-fA70UI/petsc-3.7.3.dfsg1=. 
-fstack-protector-strong -fPIC" CPPFLAGS="-Wdate-time -D_FORTIFY_SOURCE=2" LDFLAGS="-Wl,-Bsymbolic-functions -Wl,-z,relro -fPIC" MAKEFLAGS=w [0]PETSC ERROR: #1 PCFieldSplitSetDefaults() line 474 in /build/petsc-fA70UI/petsc-3.7.3.dfsg1/src/ksp/pc/impls/fieldsplit/fieldsplit.c [0]PETSC ERROR: #2 PCSetUp_FieldSplit() line 491 in /build/petsc-fA70UI/petsc-3.7.3.dfsg1/src/ksp/pc/impls/fieldsplit/fieldsplit.c [0]PETSC ERROR: #3 PCSetUp() line 968 in /build/petsc-fA70UI/petsc-3.7.3.dfsg1/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #4 KSPSetUp() line 390 in /build/petsc-fA70UI/petsc-3.7.3.dfsg1/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #5 KSPSolve() line 599 in /build/petsc-fA70UI/petsc-3.7.3.dfsg1/src/ksp/ksp/interface/itfunc.c What is the proper way to setup PCFIELDSPLIT in this case? 2017-03-01 16:26 GMT-03:00 Barry Smith : > > > On Mar 1, 2017, at 12:59 PM, Lucas Clemente Vella > wrote: > > > > I have a parallel AIJ matrix and I know exactly which element belongs to > each one of the 4 submatrices blocks I want to use to solve the linear > system. The blocks are not strided, because they have different number of > elements. > > > > I understand that I must use PCFieldSplitSetIS(), since > PCFieldSplitSetFields() is only for strided blocks. What I don't understand > is how to create the IS structure I must pass to it. > > > > Each matrix coefficient is identified by a pair (i, j), but on IS > creation functions, like ISCreateGeneral() and ISCreateBlock(), I am > supposed to provide a one dimension set of indices. How does these indices > relates to the matrix coefficients? > > PCFieldSplitSetIS() always indicates SQUARE blocks along the diagonal > of the original matrix. Hence you need only one IS to define a block, you > don't need one for the columns and one for the rows. The IS is telling > what rows AND columns you want in the block. > > > > Also, ISCreateGeneral() seems to create a single block, and > ISCreateBlock() seems to create multiple blocks of the same size. > > ISCreateBlock() does not create multi blocks, it creates a single IS > that has "block structure", for example 0,1, 3, 4, 6, 7, 9,10, .... > > How to create multiple blocks with different sizes? > > ISCreateGeneral(). > > > > > > Thanks. > > > > -- > > Lucas Clemente Vella > > lvella at gmail.com > > -- Lucas Clemente Vella lvella at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: matrix.png Type: image/png Size: 24869 bytes Desc: not available URL: From bsmith at mcs.anl.gov Mon Mar 6 17:10:01 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 6 Mar 2017 17:10:01 -0600 Subject: [petsc-users] How to define blocks for PCFIELDSPLIT? In-Reply-To: References: <683306C3-ED39-4B94-8150-ECB7AF8DC06C@mcs.anl.gov> Message-ID: <50E29FF8-E668-4BE5-A444-7128293926E4@mcs.anl.gov> The error is in PCFieldSplitSetDefaults() so it is seemingly ignoring your calls to PCFieldSplitSetIS() make sure those calls come AFTER your call to KSPSetFromOptions() in your code. 
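For concreteness, a minimal sketch of that ordering (illustrative only: the Mat A, the Vecs b and x, and the index arrays pIdx/uIdx of lengths nP/nU stand in for whatever the application actually builds; error checking is abbreviated):

  /* assumes PetscInitialize() has been called and A, b, x are already assembled */
  KSP            ksp;
  PC             pc;
  IS             isp, isu;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);          /* -ksp_type preonly -pc_type fieldsplit read here */
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = ISCreateGeneral(PETSC_COMM_WORLD,nP,pIdx,PETSC_COPY_VALUES,&isp);CHKERRQ(ierr);
  ierr = ISCreateGeneral(PETSC_COMM_WORLD,nU,uIdx,PETSC_COPY_VALUES,&isu);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc,"p",isp);CHKERRQ(ierr);   /* registered only after KSPSetFromOptions() */
  ierr = PCFieldSplitSetIS(pc,"u",isu);CHKERRQ(ierr);
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = ISDestroy(&isp);CHKERRQ(ierr);
  ierr = ISDestroy(&isu);CHKERRQ(ierr);

The point is only the ordering: read the options first, then register the two index sets on the PC, then solve.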
Barry > On Mar 6, 2017, at 5:05 PM, Lucas Clemente Vella wrote: > > I tried to split my matrix (attached drawing of the non-null elements) by calling PCFieldSplitSetIS() twice: > > PCFieldSplitSetIS(pc, "p", zero_to_n); > PCFieldSplitSetIS(pc, "u", n_to_m); > > But upon running with options: > > -ksp_type preonly -pc_type fieldsplit > > I get the following error: > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Petsc has generated inconsistent data > [0]PETSC ERROR: Unhandled case, must have at least two fields, not 1 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.7.3, Jul, 24, 2016 > [0]PETSC ERROR: /home/lvella/src/higtree/examples/read-amr-solve-ns-cfl-example-2d on a x86_64-linux-gnu-real named lvella-workstation by lvella Mon Mar 6 20:01:47 2017 > [0]PETSC ERROR: Configure options --build=x86_64-linux-gnu --prefix=/usr --includedir=${prefix}/include --mandir=${prefix}/share/man --infodir=${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --with-silent-rules=0 --libdir=${prefix}/lib/x86_64-linux-gnu --libexecdir=${prefix}/lib/x86_64-linux-gnu --with-maintainer-mode=0 --with-dependency-tracking=0 --with-debugging=0 --shared-library-extension=_real --with-hypre=1 --with-hypre-dir=/usr --with-clanguage=C++ --with-shared-libraries --with-pic=1 --useThreads=0 --with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi --with-blas-lib=-lblas --with-lapack-lib=-llapack --with-blacs=1 --with-blacs-lib="-lblacsCinit-openmpi -lblacs-openmpi" --with-scalapack=1 --with-scalapack-lib=-lscalapack-openmpi --with-mumps=1 --with-mumps-include="[]" --with-mumps-lib="-ldmumps -lzmumps -lsmumps -lcmumps -lmumps_common -lpord" --with-suitesparse=1 --with-suitesparse-include=/usr/include/suitesparse --with-suitesparse-lib="-lumfpack -lamd -lcholmod -lklu" --with-spooles=1 --with-spooles-include=/usr/include/spooles --with-spooles-lib=-lspooles --with-ptscotch=1 --with-ptscotch-include=/usr/include/scotch --with-ptscotch-lib="-lptesmumps -lptscotch -lptscotcherr" --with-fftw=1 --with-fftw-include="[]" --with-fftw-lib="-lfftw3 -lfftw3_mpi" --with-superlu=0 --CXX_LINKER_FLAGS=-Wl,--no-as-needed --prefix=/usr/lib/petscdir/3.7.3/x86_64-linux-gnu-real PETSC_DIR=/build/petsc-fA70UI/petsc-3.7.3.dfsg1 --PETSC_ARCH=x86_64-linux-gnu-real CFLAGS="-g -O2 -fdebug-prefix-map=/build/petsc-fA70UI/petsc-3.7.3.dfsg1=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC" CXXFLAGS="-g -O2 -fdebug-prefix-map=/build/petsc-fA70UI/petsc-3.7.3.dfsg1=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC" FCFLAGS="-g -O2 -fdebug-prefix-map=/build/petsc-fA70UI/petsc-3.7.3.dfsg1=. -fstack-protector-strong -fPIC" FFLAGS="-g -O2 -fdebug-prefix-map=/build/petsc-fA70UI/petsc-3.7.3.dfsg1=. 
-fstack-protector-strong -fPIC" CPPFLAGS="-Wdate-time -D_FORTIFY_SOURCE=2" LDFLAGS="-Wl,-Bsymbolic-functions -Wl,-z,relro -fPIC" MAKEFLAGS=w > [0]PETSC ERROR: #1 PCFieldSplitSetDefaults() line 474 in /build/petsc-fA70UI/petsc-3.7.3.dfsg1/src/ksp/pc/impls/fieldsplit/fieldsplit.c > [0]PETSC ERROR: #2 PCSetUp_FieldSplit() line 491 in /build/petsc-fA70UI/petsc-3.7.3.dfsg1/src/ksp/pc/impls/fieldsplit/fieldsplit.c > [0]PETSC ERROR: #3 PCSetUp() line 968 in /build/petsc-fA70UI/petsc-3.7.3.dfsg1/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #4 KSPSetUp() line 390 in /build/petsc-fA70UI/petsc-3.7.3.dfsg1/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #5 KSPSolve() line 599 in /build/petsc-fA70UI/petsc-3.7.3.dfsg1/src/ksp/ksp/interface/itfunc.c > > What is the proper way to setup PCFIELDSPLIT in this case? > > > 2017-03-01 16:26 GMT-03:00 Barry Smith : > > > On Mar 1, 2017, at 12:59 PM, Lucas Clemente Vella wrote: > > > > I have a parallel AIJ matrix and I know exactly which element belongs to each one of the 4 submatrices blocks I want to use to solve the linear system. The blocks are not strided, because they have different number of elements. > > > > I understand that I must use PCFieldSplitSetIS(), since PCFieldSplitSetFields() is only for strided blocks. What I don't understand is how to create the IS structure I must pass to it. > > > > Each matrix coefficient is identified by a pair (i, j), but on IS creation functions, like ISCreateGeneral() and ISCreateBlock(), I am supposed to provide a one dimension set of indices. How does these indices relates to the matrix coefficients? > > PCFieldSplitSetIS() always indicates SQUARE blocks along the diagonal of the original matrix. Hence you need only one IS to define a block, you don't need one for the columns and one for the rows. The IS is telling what rows AND columns you want in the block. > > > > Also, ISCreateGeneral() seems to create a single block, and ISCreateBlock() seems to create multiple blocks of the same size. > > ISCreateBlock() does not create multi blocks, it creates a single IS that has "block structure", for example 0,1, 3, 4, 6, 7, 9,10, .... > > How to create multiple blocks with different sizes? > > ISCreateGeneral(). > > > > > > Thanks. > > > > -- > > Lucas Clemente Vella > > lvella at gmail.com > > > > > -- > Lucas Clemente Vella > lvella at gmail.com > From bsmith at mcs.anl.gov Mon Mar 6 17:10:04 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 6 Mar 2017 17:10:04 -0600 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: Message-ID: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> This is because for block size 11 it is using calls to LAPACK/BLAS for the block operations instead of custom routines for that block size. Here is what you need to do. For a good sized case run both with -log_view and check the time spent in MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and BAIJ. If they have a different number of function calls then divide by the function call count to determine the time per function call. This will tell you which routine needs to be optimized first either MatLUFactorNumeric or MatSolve. My guess is MatSolve. So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the block size of 11. Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is 11 it uses the new routine something like. 
if (both_identity) { if (b->bs == 11) C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; } else { C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; } Rerun and look at the new -log_view. Send all three -log_view to use at this point. If this optimization helps and now MatLUFactorNumeric is the time sink you can do the process to MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block custom version. Barry > On Mar 6, 2017, at 4:32 PM, Kong, Fande wrote: > > > > On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan wrote: > On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande wrote: > > Hi All, > > > > I am solving a nonlinear system whose Jacobian matrix has a block structure. > > More precisely, there is a mesh, and for each vertex there are 11 variables > > associated with it. I am using BAIJ. > > > > I thought block ILU(k) should be more efficient than the point-wise ILU(k). > > After some numerical experiments, I found that the block ILU(K) is much > > slower than the point-wise version. > Do you mean that it takes more iterations to converge, or that the > time per iteration is greater, or both? > > The number of iterations is very similar, but the timer per iteration is greater. > > > > > > Any thoughts? > > > > Fande, > From bsmith at mcs.anl.gov Mon Mar 6 17:14:07 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 6 Mar 2017 17:14:07 -0600 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> Message-ID: Note also that if the 11 by 11 blocks are actually sparse (and you don't store all the zeros in the blocks in the AIJ format) then then AIJ non-block factorization involves less floating point operations and less memory access so can be faster than the BAIJ format, depending on "how sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with AIJ (with zeros maybe in certain locations) then the above is not true. > On Mar 6, 2017, at 5:10 PM, Barry Smith wrote: > > > This is because for block size 11 it is using calls to LAPACK/BLAS for the block operations instead of custom routines for that block size. > > Here is what you need to do. For a good sized case run both with -log_view and check the time spent in > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and BAIJ. If they have a different number of function calls then divide by the function call count to determine the time per function call. > > This will tell you which routine needs to be optimized first either MatLUFactorNumeric or MatSolve. My guess is MatSolve. > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the block size of 11. > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is 11 it uses the new routine something like. > > if (both_identity) { > if (b->bs == 11) > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; > } else { > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; > } > > Rerun and look at the new -log_view. Send all three -log_view to use at this point. If this optimization helps and now > MatLUFactorNumeric is the time sink you can do the process to MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block custom version. 
> > Barry > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande wrote: >> >> >> >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan wrote: >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande wrote: >>> Hi All, >>> >>> I am solving a nonlinear system whose Jacobian matrix has a block structure. >>> More precisely, there is a mesh, and for each vertex there are 11 variables >>> associated with it. I am using BAIJ. >>> >>> I thought block ILU(k) should be more efficient than the point-wise ILU(k). >>> After some numerical experiments, I found that the block ILU(K) is much >>> slower than the point-wise version. >> Do you mean that it takes more iterations to converge, or that the >> time per iteration is greater, or both? >> >> The number of iterations is very similar, but the timer per iteration is greater. >> >> >>> >>> Any thoughts? >>> >>> Fande, >> > From fande.kong at inl.gov Mon Mar 6 17:44:00 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Mon, 6 Mar 2017 16:44:00 -0700 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> Message-ID: Thanks, Barry, Log info: AIJ: MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 BAIJ: MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 It looks like both MatSolve and MatLUFactorNum are slower. I will try your suggestions. Fande On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith wrote: > > Note also that if the 11 by 11 blocks are actually sparse (and you don't > store all the zeros in the blocks in the AIJ format) then then AIJ > non-block factorization involves less floating point operations and less > memory access so can be faster than the BAIJ format, depending on "how > sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with > AIJ (with zeros maybe in certain locations) then the above is not true. > > > > On Mar 6, 2017, at 5:10 PM, Barry Smith wrote: > > > > > > This is because for block size 11 it is using calls to LAPACK/BLAS for > the block operations instead of custom routines for that block size. > > > > Here is what you need to do. For a good sized case run both with > -log_view and check the time spent in > > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and > BAIJ. If they have a different number of function calls then divide by the > function call count to determine the time per function call. > > > > This will tell you which routine needs to be optimized first either > MatLUFactorNumeric or MatSolve. My guess is MatSolve. > > > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the function > MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function > MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the > block size of 11. > > > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is 11 it > uses the new routine something like. 
> > > > if (both_identity) { > > if (b->bs == 11) > > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; > > } else { > > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; > > } > > > > Rerun and look at the new -log_view. Send all three -log_view to use > at this point. If this optimization helps and now > > MatLUFactorNumeric is the time sink you can do the process to > MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block > custom version. > > > > Barry > > > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande wrote: > >> > >> > >> > >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan > wrote: > >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande wrote: > >>> Hi All, > >>> > >>> I am solving a nonlinear system whose Jacobian matrix has a block > structure. > >>> More precisely, there is a mesh, and for each vertex there are 11 > variables > >>> associated with it. I am using BAIJ. > >>> > >>> I thought block ILU(k) should be more efficient than the point-wise > ILU(k). > >>> After some numerical experiments, I found that the block ILU(K) is much > >>> slower than the point-wise version. > >> Do you mean that it takes more iterations to converge, or that the > >> time per iteration is greater, or both? > >> > >> The number of iterations is very similar, but the timer per iteration > is greater. > >> > >> > >>> > >>> Any thoughts? > >>> > >>> Fande, > >> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Mar 6 21:08:08 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 6 Mar 2017 21:08:08 -0600 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> Message-ID: <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> Thanks. Even the symbolic is slower for BAIJ. I don't like that, it definitely should not be since it is (at least should be) doing a symbolic factorization on a symbolic matrix 1/11th the size! Keep us informed. > On Mar 6, 2017, at 5:44 PM, Kong, Fande wrote: > > Thanks, Barry, > > Log info: > > AIJ: > > MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 > MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 > MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > BAIJ: > > MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 > MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 > MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > It looks like both MatSolve and MatLUFactorNum are slower. > > I will try your suggestions. > > Fande > > On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith wrote: > > Note also that if the 11 by 11 blocks are actually sparse (and you don't store all the zeros in the blocks in the AIJ format) then then AIJ non-block factorization involves less floating point operations and less memory access so can be faster than the BAIJ format, depending on "how sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with AIJ (with zeros maybe in certain locations) then the above is not true. > > > > On Mar 6, 2017, at 5:10 PM, Barry Smith wrote: > > > > > > This is because for block size 11 it is using calls to LAPACK/BLAS for the block operations instead of custom routines for that block size. 
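Working the -log_view numbers quoted above through the suggested per-call division (approximate, since these are max-over-ranks timings with sizable load-imbalance ratios):

  AIJ : MatSolve 8.6543 s / 850 calls ~ 10 ms/call,  MatLUFactorNum 1.7622 s / 25 ~ 70 ms/call,  MatILUFactorSym 0.2800 s / 13 ~ 22 ms/call
  BAIJ: MatSolve 13.016 s / 826 calls ~ 16 ms/call,  MatLUFactorNum 15.503 s / 25 ~ 620 ms/call, MatILUFactorSym 0.5756 s / 13 ~ 44 ms/call

So per call the BAIJ triangular solve is roughly 1.5x slower and the numeric factorization roughly 9x slower, consistent with the generic LAPACK/BLAS block kernels dominating for block size 11.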
> > > > Here is what you need to do. For a good sized case run both with -log_view and check the time spent in > > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and BAIJ. If they have a different number of function calls then divide by the function call count to determine the time per function call. > > > > This will tell you which routine needs to be optimized first either MatLUFactorNumeric or MatSolve. My guess is MatSolve. > > > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the block size of 11. > > > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is 11 it uses the new routine something like. > > > > if (both_identity) { > > if (b->bs == 11) > > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; > > } else { > > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; > > } > > > > Rerun and look at the new -log_view. Send all three -log_view to use at this point. If this optimization helps and now > > MatLUFactorNumeric is the time sink you can do the process to MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block custom version. > > > > Barry > > > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande wrote: > >> > >> > >> > >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan wrote: > >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande wrote: > >>> Hi All, > >>> > >>> I am solving a nonlinear system whose Jacobian matrix has a block structure. > >>> More precisely, there is a mesh, and for each vertex there are 11 variables > >>> associated with it. I am using BAIJ. > >>> > >>> I thought block ILU(k) should be more efficient than the point-wise ILU(k). > >>> After some numerical experiments, I found that the block ILU(K) is much > >>> slower than the point-wise version. > >> Do you mean that it takes more iterations to converge, or that the > >> time per iteration is greater, or both? > >> > >> The number of iterations is very similar, but the timer per iteration is greater. > >> > >> > >>> > >>> Any thoughts? > >>> > >>> Fande, > >> > > > > From imilian.hartig at gmail.com Tue Mar 7 03:28:12 2017 From: imilian.hartig at gmail.com (Maximilian Hartig) Date: Tue, 7 Mar 2017 10:28:12 +0100 Subject: [petsc-users] Problems imposing boundary conditions In-Reply-To: References: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com> Message-ID: <69E95D27-72CD-4BFA-AC62-907727998DCB@gmail.com> It seems you are correct. In theory, the problem should not be over constrained. It is 1/4 of a simple hollow cylinder geometry with rotational symmetry around the z-axis. I restrict movement completely on the upper and lower (z) end as well as movement in x- and y- direction respectively on the symmetry planes. I am not completely sure what I am looking at with the output of -dm_petscsection_view. But these lines struck me as odd: (5167) dim 3 offset 0 constrained 0 1 1 2 (5168) dim 3 offset 6 constrained 0 1 1 2 . . . (5262) dim 3 offset 0 constrained 0 0 1 2 (5263) dim 3 offset 6 constrained 0 0 1 2 It seems that vertices that are part of the closures of both Face Sets get restricted twice in their respective degree of freedom. This does however also happen when restricting movement in x- direction only for upper and lower faces. 
In that case without the solver producing an error: (20770) dim 3 offset 24 constrained 0 0 (20771) dim 3 offset 30 constrained 0 0 (20772) dim 3 offset 36 constrained 0 0 (20773) dim 3 offset 42 constrained 0 0 Thanks, Max > On 6 Mar 2017, at 14:43, Matthew Knepley wrote: > > On Mon, Mar 6, 2017 at 8:38 AM, Maximilian Hartig > wrote: > Of course, please find the source as well as the mesh attached below. I run with: > > -def_petscspace_order 2 -vel_petscspace_order 2 -snes_monitor -snes_converged_reason -ksp_converged_reason -ksp_monitor _true_residual -ksp_type fgmres -pc_type sor > > This sounds like over-constraining a point to me. I will try and run it soon, but I have a full schedule this week. The easiest > way to see if this is happening should be to print out the Section that gets made > > -dm_petscsection_view > > Thanks, > > Matt > > Thanks, > Max > > > > >> On 4 Mar 2017, at 11:34, Sander Arens > wrote: >> >> Hmm, strange you also get the error in serial. Can you maybe send a minimal working which demonstrates the error? >> >> Thanks, >> Sander >> >> On 3 March 2017 at 23:07, Maximilian Hartig > wrote: >> Yes Sander, your assessment is correct. I use DMPlex and specify the BC using DMLabel. I do however get this error also when running in serial. >> >> Thanks, >> Max >> >>> On 3 Mar 2017, at 22:14, Matthew Knepley > wrote: >>> >>> On Fri, Mar 3, 2017 at 12:56 PM, Sander Arens > wrote: >>> Max, >>> >>> I'm assuming you use DMPlex for your mesh? If so, did you only specify the faces in the DMLabel (and not vertices or edges). Do you get this error only in parallel? >>> >>> If so, I can confirm this bug. I submitted a pull request for this yesterday. >>> >>> Yep, I saw Sander's pull request. I will get in merged in tomorrow when I get home to Houston. >>> >>> Thanks, >>> >>> Matt >>> >>> On 3 March 2017 at 18:43, Lukas van de Wiel > wrote: >>> You have apparently preallocated the non-zeroes of you matrix, and the room was insufficient to accommodate all your equations. >>> >>> What happened after you tried: >>> >>> MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) >>> >>> >>> Cheers >>> Lukas >>> >>> On Fri, Mar 3, 2017 at 6:37 PM, Maximilian Hartig > wrote: >>> Hello, >>> >>> I am working on a transient structural FEM code with PETSc. I managed to create a slow but functioning program with the use of petscFE and a TS solver. The code runs fine until I try to restrict movement in all three spatial directions for one face. I then get the error which is attached below. >>> So apparently DMPlexMatSetClosure tries to write/read beyond what was priorly allocated. I do however not call MatSeqAIJSetPreallocation myself in the code. So I?m unsure where to start looking for the bug. In my understanding, PETSc should know from the DM how much space to allocate. >>> Could you kindly give me a hint? >>> >>> Thanks, >>> >>> Max >>> >>> 0 SNES Function norm 2.508668036663e-06 >>> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >>> [0]PETSC ERROR: Argument out of range >>> [0]PETSC ERROR: New nonzero at (41754,5) caused a malloc >>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn off this check >>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
>>> [0]PETSC ERROR: Petsc Development GIT revision: v3.7.5-3223-g99077fc GIT Date: 2017-02-28 13:41:43 -0600 >>> [0]PETSC ERROR: ./S3 on a arch-linux-gnu-intel named XXXXXXX by hartig Fri Mar 3 17:55:57 2017 >>> [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-gnu-intel --with-cc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicc --with-cxx=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicpc --with-fc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiifort --download-ml >>> [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 455 in /home/hartig/petsc/src/mat/impls/aij/seq/aij.c >>> [0]PETSC ERROR: #2 MatSetValues() line 1270 in /home/hartig/petsc/src/mat/interface/matrix.c >>> [0]PETSC ERROR: [0]ERROR in DMPlexMatSetClosure >>> [0]mat for sieve point 60 >>> [0]mat row indices[0] = 41754 >>> [0]mat row indices[1] = 41755 >>> [0]mat row indices[2] = 41756 >>> [0]mat row indices[3] = 41760 >>> [0]mat row indices[4] = 41761 >>> [0]mat row indices[5] = 41762 >>> [0]mat row indices[6] = 41766 >>> [0]mat row indices[7] = -41768 >>> [0]mat row indices[8] = 41767 >>> [0]mat row indices[9] = 41771 >>> [0]mat row indices[10] = -41773 >>> [0]mat row indices[11] = 41772 >>> [0]mat row indices[12] = 41776 >>> [0]mat row indices[13] = 41777 >>> [0]mat row indices[14] = 41778 >>> [0]mat row indices[15] = 41782 >>> [0]mat row indices[16] = -41784 >>> [0]mat row indices[17] = 41783 >>> [0]mat row indices[18] = 261 >>> [0]mat row indices[19] = -263 >>> [0]mat row indices[20] = 262 >>> [0]mat row indices[21] = 24318 >>> [0]mat row indices[22] = 24319 >>> [0]mat row indices[23] = 24320 >>> [0]mat row indices[24] = -7 >>> [0]mat row indices[25] = -8 >>> [0]mat row indices[26] = 6 >>> [0]mat row indices[27] = 1630 >>> [0]mat row indices[28] = -1632 >>> [0]mat row indices[29] = 1631 >>> [0]mat row indices[30] = 41757 >>> [0]mat row indices[31] = 41758 >>> [0]mat row indices[32] = 41759 >>> [0]mat row indices[33] = 41763 >>> [0]mat row indices[34] = 41764 >>> [0]mat row indices[35] = 41765 >>> [0]mat row indices[36] = 41768 >>> [0]mat row indices[37] = 41769 >>> [0]mat row indices[38] = 41770 >>> [0]mat row indices[39] = 41773 >>> [0]mat row indices[40] = 41774 >>> [0]mat row indices[41] = 41775 >>> [0]mat row indices[42] = 41779 >>> [0]mat row indices[43] = 41780 >>> [0]mat row indices[44] = 41781 >>> [0]mat row indices[45] = 41784 >>> [0]mat row indices[46] = 41785 >>> [0]mat row indices[47] = 41786 >>> [0]mat row indices[48] = 263 >>> [0]mat row indices[49] = 264 >>> [0]mat row indices[50] = 265 >>> [0]mat row indices[51] = 24321 >>> [0]mat row indices[52] = 24322 >>> [0]mat row indices[53] = 24323 >>> [0]mat row indices[54] = 5 >>> [0]mat row indices[55] = 6 >>> [0]mat row indices[56] = 7 >>> [0]mat row indices[57] = 1632 >>> [0]mat row indices[58] = 1633 >>> [0]mat row indices[59] = 1634 >>> [0] 1.29801 0.0998428 -0.275225 1.18171 -0.0093323 0.055045 1.18146 0.00525527 -0.11009 -0.588378 0.264666 -0.0275225 -2.39586 -0.210511 0.22018 -0.621071 0.0500786 0.137613 -0.180869 -0.0974804 0.0344031 -0.0302673 -0.09 0. -0.145175 -0.00383346 -0.00688063 0.300442 -0.00868618 -0.0275225 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 
>>> [remaining rows of the dumped element matrix omitted; they are the same values quoted earlier in this thread]
5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>> [0] 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. >>> [0] 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 >>> [0] -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>> [0] 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. >>> [0] 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 >>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. >>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 >>> [0] -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. 
-9.96708e-10 0. 0. -6.64472e-10 0. 0. >>> [0] 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. >>> [0] 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 >>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. >>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 >>> [0] -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>> [0] 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. >>> [0] 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 >>> [0] 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>> [0] 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 
1.99342e-06 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. >>> [0] 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 >>> [0] 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>> [0] 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. >>> [0] 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 >>> [0] 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>> [0] 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. >>> [0] 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 >>> [0] 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. >>> [0] 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. 
-6.64472e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. >>> [0] 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 >>> [0]PETSC ERROR: #3 DMPlexMatSetClosure() line 5480 in /home/hartig/petsc/src/dm/impls/plex/plex.c >>> [0]PETSC ERROR: #4 DMPlexComputeJacobian_Internal() line 2301 in /home/hartig/petsc/src/snes/utils/dmplexsnes.c >>> [0]PETSC ERROR: #5 DMPlexTSComputeIJacobianFEM() line 233 in /home/hartig/petsc/src/ts/utils/dmplexts.c >>> [0]PETSC ERROR: #6 TSComputeIJacobian_DMLocal() line 131 in /home/hartig/petsc/src/ts/utils/dmlocalts.c >>> [0]PETSC ERROR: #7 TSComputeIJacobian() line 882 in /home/hartig/petsc/src/ts/interface/ts.c >>> [0]PETSC ERROR: #8 SNESTSFormJacobian_Theta() line 515 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>> [0]PETSC ERROR: #9 SNESTSFormJacobian() line 5044 in /home/hartig/petsc/src/ts/interface/ts.c >>> [0]PETSC ERROR: #10 SNESComputeJacobian() line 2276 in /home/hartig/petsc/src/snes/interface/snes.c >>> [0]PETSC ERROR: #11 SNESSolve_NEWTONLS() line 222 in /home/hartig/petsc/src/snes/impls/ls/ls.c >>> [0]PETSC ERROR: #12 SNESSolve() line 3967 in /home/hartig/petsc/src/snes/interface/snes.c >>> [0]PETSC ERROR: #13 TS_SNESSolve() line 171 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>> [0]PETSC ERROR: #14 TSStep_Theta() line 211 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>> [0]PETSC ERROR: #15 TSStep() line 3809 in /home/hartig/petsc/src/ts/interface/ts.c >>> >>> >>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> -- Norbert Wiener >> >> > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Mar 7 09:29:35 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 7 Mar 2017 09:29:35 -0600 Subject: [petsc-users] Problems imposing boundary conditions In-Reply-To: <69E95D27-72CD-4BFA-AC62-907727998DCB@gmail.com> References: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com> <69E95D27-72CD-4BFA-AC62-907727998DCB@gmail.com> Message-ID: On Tue, Mar 7, 2017 at 3:28 AM, Maximilian Hartig wrote: > It seems you are correct. In theory, the problem should not be over > constrained. It is 1/4 of a simple hollow cylinder geometry with rotational > symmetry around the z-axis. I restrict movement completely on the upper and > lower (z) end as well as movement in x- and y- direction respectively on > the symmetry planes. > I am not completely sure what I am looking at with the output of > -dm_petscsection_view. But these lines struck me as odd: > > > (5167) dim 3 offset 0 constrained 0 1 1 2 > (5168) dim 3 offset 6 constrained 0 1 1 2 > . > . > . 
> (5262) dim 3 offset 0 constrained 0 0 1 2 > (5263) dim 3 offset 6 constrained 0 0 1 2 > > > It seems that vertices that are part of the closures of both Face Sets get > restricted twice in their respective degree of freedom. > Yes, that is exactly what happens. > This does however also happen when restricting movement in x- direction > only for upper and lower faces. In that case without the solver producing > an error: > (20770) dim 3 offset 24 constrained 0 0 > (20771) dim 3 offset 30 constrained 0 0 > (20772) dim 3 offset 36 constrained 0 0 > (20773) dim 3 offset 42 constrained 0 0 > The fact that this does not SEGV is just luck. Now, I did not put in any guard against this because I was not sure what should happen. We could error if a local index is repeated, or we could ignore it. This seems unsafe if you try to constrain it with two different values, but there is no way for me to tell if the values are compatible. Thus I just believed whatever the user told me. What is the intention here? It would be straightforward to ignore duplicates I guess. Thanks, Matt > Thanks, > Max > > On 6 Mar 2017, at 14:43, Matthew Knepley wrote: > > On Mon, Mar 6, 2017 at 8:38 AM, Maximilian Hartig com> wrote: > >> Of course, please find the source as well as the mesh attached below. I >> run with: >> >> -def_petscspace_order 2 -vel_petscspace_order 2 -snes_monitor >> -snes_converged_reason -ksp_converged_reason -ksp_monitor _true_residual >> -ksp_type fgmres -pc_type sor >> > > This sounds like over-constraining a point to me. I will try and run it > soon, but I have a full schedule this week. The easiest > way to see if this is happening should be to print out the Section that > gets made > > -dm_petscsection_view > > Thanks, > > Matt > > >> Thanks, >> Max >> >> >> >> >> On 4 Mar 2017, at 11:34, Sander Arens wrote: >> >> Hmm, strange you also get the error in serial. Can you maybe send a >> minimal working which demonstrates the error? >> >> Thanks, >> Sander >> >> On 3 March 2017 at 23:07, Maximilian Hartig >> wrote: >> >>> Yes Sander, your assessment is correct. I use DMPlex and specify the BC >>> using DMLabel. I do however get this error also when running in serial. >>> >>> Thanks, >>> Max >>> >>> On 3 Mar 2017, at 22:14, Matthew Knepley wrote: >>> >>> On Fri, Mar 3, 2017 at 12:56 PM, Sander Arens >>> wrote: >>> >>>> Max, >>>> >>>> I'm assuming you use DMPlex for your mesh? If so, did you only specify >>>> the faces in the DMLabel (and not vertices or edges). Do you get this error >>>> only in parallel? >>>> >>>> If so, I can confirm this bug. I submitted a pull request for this >>>> yesterday. >>>> >>> >>> Yep, I saw Sander's pull request. I will get in merged in tomorrow when >>> I get home to Houston. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> On 3 March 2017 at 18:43, Lukas van de Wiel >>> com> wrote: >>>> >>>>> You have apparently preallocated the non-zeroes of you matrix, and the >>>>> room was insufficient to accommodate all your equations. >>>>> >>>>> What happened after you tried: >>>>> >>>>> MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) >>>>> >>>>> >>>>> Cheers >>>>> Lukas >>>>> >>>>> On Fri, Mar 3, 2017 at 6:37 PM, Maximilian Hartig < >>>>> imilian.hartig at gmail.com> wrote: >>>>> >>>>>> Hello, >>>>>> >>>>>> I am working on a transient structural FEM code with PETSc. I managed >>>>>> to create a slow but functioning program with the use of petscFE and a TS >>>>>> solver. 
The code runs fine until I try to restrict movement in all three >>>>>> spatial directions for one face. I then get the error which is attached >>>>>> below. >>>>>> So apparently DMPlexMatSetClosure tries to write/read beyond what was >>>>>> priorly allocated. I do however not call MatSeqAIJSetPreallocation myself >>>>>> in the code. So I?m unsure where to start looking for the bug. In my >>>>>> understanding, PETSc should know from the DM how much space to allocate. >>>>>> Could you kindly give me a hint? >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Max >>>>>> >>>>>> 0 SNES Function norm 2.508668036663e-06 >>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>> -------------------------------------------------------------- >>>>>> [0]PETSC ERROR: Argument out of range >>>>>> [0]PETSC ERROR: New nonzero at (41754,5) caused a malloc >>>>>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to >>>>>> turn off this check >>>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/pet >>>>>> sc/documentation/faq.html for trouble shooting. >>>>>> [0]PETSC ERROR: Petsc Development GIT revision: v3.7.5-3223-g99077fc >>>>>> GIT Date: 2017-02-28 13:41:43 -0600 >>>>>> [0]PETSC ERROR: ./S3 on a arch-linux-gnu-intel named XXXXXXX by >>>>>> hartig Fri Mar 3 17:55:57 2017 >>>>>> [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-gnu-intel >>>>>> --with-cc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicc >>>>>> --with-cxx=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicpc >>>>>> --with-fc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiifort >>>>>> --download-ml >>>>>> [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 455 in >>>>>> /home/hartig/petsc/src/mat/impls/aij/seq/aij.c >>>>>> [0]PETSC ERROR: #2 MatSetValues() line 1270 in >>>>>> /home/hartig/petsc/src/mat/interface/matrix.c >>>>>> [0]PETSC ERROR: [0]ERROR in DMPlexMatSetClosure >>>>>> [0]mat for sieve point 60 >>>>>> [0]mat row indices[0] = 41754 >>>>>> [0]mat row indices[1] = 41755 >>>>>> [0]mat row indices[2] = 41756 >>>>>> [0]mat row indices[3] = 41760 >>>>>> [0]mat row indices[4] = 41761 >>>>>> [0]mat row indices[5] = 41762 >>>>>> [0]mat row indices[6] = 41766 >>>>>> [0]mat row indices[7] = -41768 >>>>>> [0]mat row indices[8] = 41767 >>>>>> [0]mat row indices[9] = 41771 >>>>>> [0]mat row indices[10] = -41773 >>>>>> [0]mat row indices[11] = 41772 >>>>>> [0]mat row indices[12] = 41776 >>>>>> [0]mat row indices[13] = 41777 >>>>>> [0]mat row indices[14] = 41778 >>>>>> [0]mat row indices[15] = 41782 >>>>>> [0]mat row indices[16] = -41784 >>>>>> [0]mat row indices[17] = 41783 >>>>>> [0]mat row indices[18] = 261 >>>>>> [0]mat row indices[19] = -263 >>>>>> [0]mat row indices[20] = 262 >>>>>> [0]mat row indices[21] = 24318 >>>>>> [0]mat row indices[22] = 24319 >>>>>> [0]mat row indices[23] = 24320 >>>>>> [0]mat row indices[24] = -7 >>>>>> [0]mat row indices[25] = -8 >>>>>> [0]mat row indices[26] = 6 >>>>>> [0]mat row indices[27] = 1630 >>>>>> [0]mat row indices[28] = -1632 >>>>>> [0]mat row indices[29] = 1631 >>>>>> [0]mat row indices[30] = 41757 >>>>>> [0]mat row indices[31] = 41758 >>>>>> [0]mat row indices[32] = 41759 >>>>>> [0]mat row indices[33] = 41763 >>>>>> [0]mat row indices[34] = 41764 >>>>>> [0]mat row indices[35] = 41765 >>>>>> [0]mat row indices[36] = 41768 >>>>>> [0]mat row indices[37] = 41769 >>>>>> [0]mat row indices[38] = 41770 >>>>>> [0]mat row indices[39] = 41773 >>>>>> [0]mat row indices[40] = 41774 >>>>>> [0]mat row indices[41] = 41775 >>>>>> [0]mat row indices[42] = 41779 >>>>>> 
[0]mat row indices[43] = 41780 >>>>>> [0]mat row indices[44] = 41781 >>>>>> [0]mat row indices[45] = 41784 >>>>>> [0]mat row indices[46] = 41785 >>>>>> [0]mat row indices[47] = 41786 >>>>>> [0]mat row indices[48] = 263 >>>>>> [0]mat row indices[49] = 264 >>>>>> [0]mat row indices[50] = 265 >>>>>> [0]mat row indices[51] = 24321 >>>>>> [0]mat row indices[52] = 24322 >>>>>> [0]mat row indices[53] = 24323 >>>>>> [0]mat row indices[54] = 5 >>>>>> [0]mat row indices[55] = 6 >>>>>> [0]mat row indices[56] = 7 >>>>>> [0]mat row indices[57] = 1632 >>>>>> [0]mat row indices[58] = 1633 >>>>>> [0]mat row indices[59] = 1634 >>>>>> [0] 1.29801 0.0998428 -0.275225 1.18171 -0.0093323 0.055045 1.18146 >>>>>> 0.00525527 -0.11009 -0.588378 0.264666 -0.0275225 -2.39586 -0.210511 >>>>>> 0.22018 -0.621071 0.0500786 0.137613 -0.180869 -0.0974804 0.0344031 >>>>>> -0.0302673 -0.09 0. -0.145175 -0.00383346 -0.00688063 0.300442 -0.00868618 >>>>>> -0.0275225 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. >>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >>>>>> [0] 0.0998428 0.590663 -0.0320009 0.0270594 0.360043 0.0297282 >>>>>> -0.0965489 0.270936 -0.0652389 0.32647 -0.171351 -0.00845384 -0.206902 >>>>>> -0.657189 0.0279137 0.0500786 -0.197561 -0.0160508 -0.12748 -0.138996 >>>>>> 0.0408591 -0.06 -0.105935 0.0192308 0.00161757 -0.0361182 0.0042968 >>>>>> -0.0141372 0.0855084 -0.000284088 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. >>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. >>>>>> [0] -0.275225 -0.0320009 0.527521 -0.055045 0.0285918 0.234796 >>>>>> -0.165135 -0.0658071 0.322754 0.0275225 -0.0207062 -0.114921 0.33027 >>>>>> 0.0418706 -0.678455 0.137613 -0.0160508 -0.235826 0.0344031 0.0312437 >>>>>> -0.0845583 0. 0.0288462 -0.0302673 0.00688063 0.00443884 -0.0268103 >>>>>> -0.0412838 -0.000426133 0.0857668 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. >>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 >>>>>> [0] 1.18171 0.0270594 -0.055045 1.29651 -0.0821157 0.275225 1.1468 >>>>>> -0.0675282 0.11009 -0.637271 0.141058 -0.137613 -2.37741 -0.0649437 >>>>>> -0.22018 -0.588378 0.24647 0.0275225 -0.140937 -0.00838243 0.00688063 >>>>>> -0.0175533 -0.09 0. -0.16373 -0.0747355 -0.0344031 0.300255 -0.026882 >>>>>> 0.0275225 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 >>>>>> 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>> 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>> [0] -0.0093323 0.360043 0.0285918 -0.0821157 0.585404 -0.0263191 >>>>>> -0.205724 0.149643 0.0652389 0.141058 -0.254263 0.0452109 0.011448 >>>>>> -0.592598 -0.0279137 0.344666 -0.171351 -0.0207062 0.00616654 -0.0212853 >>>>>> -0.0115868 -0.06 -0.0614365 -0.0192308 -0.104736 -0.0790071 -0.0335691 >>>>>> -0.041431 0.084851 0.000284088 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>>> [0] 0.055045 0.0297282 0.234796 0.275225 -0.0263191 0.526019 0.165135 >>>>>> 0.0658071 0.288099 -0.137613 0.0452109 -0.252027 -0.33027 -0.0418706 >>>>>> -0.660001 -0.0275225 -0.00845384 -0.114921 -0.00688063 -0.0117288 >>>>>> -0.0225723 0. 
-0.0288462 -0.0175533 -0.0344031 -0.0239537 -0.0674185 >>>>>> 0.0412838 0.000426133 0.085579 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 >>>>>> [0] 1.18146 -0.0965489 -0.165135 1.1468 -0.205724 0.165135 3.70665 >>>>>> 0.626591 3.1198e-14 -2.37741 0.0332522 -0.275225 -2.44501 -0.417727 >>>>>> 4.66207e-17 -2.39586 -0.148706 0.275225 0.283843 0.0476669 0.0137613 >>>>>> 0.00972927 0.06 0. 0.288268 0.0567649 -0.0137613 0.601523 0.0444318 >>>>>> -1.2387e-17 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. >>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>> [0] 0.00525527 0.270936 -0.0658071 -0.0675282 0.149643 0.0658071 >>>>>> 0.626591 1.29259 -0.02916 -0.0867478 -0.592598 -0.0413024 -0.417727 >>>>>> -0.829208 6.46318e-18 -0.268706 -0.657189 0.0413024 0.03402 0.0715157 >>>>>> 0.0179272 0.04 0.0340524 1.77708e-18 0.0704117 0.0870061 0.0112328 >>>>>> 0.0644318 0.17325 -1.41666e-19 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>>> [0] -0.11009 -0.0652389 0.322754 0.11009 0.0652389 0.288099 >>>>>> 3.12405e-14 -0.02916 1.21876 -0.275225 -0.0284819 -0.660001 9.50032e-17 >>>>>> 9.55728e-18 -0.727604 0.275225 0.0284819 -0.678455 0.055045 0.0279687 >>>>>> 0.0250605 0. 1.71863e-18 0.00972927 -0.055045 0.00119132 0.0294863 >>>>>> -1.47451e-17 -1.90582e-19 0.172172 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. >>>>>> 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 >>>>>> [0] -0.588378 0.32647 0.0275225 -0.637271 0.141058 -0.137613 -2.37741 >>>>>> -0.0867478 -0.275225 3.68138 0.0265907 3.13395e-14 1.1468 -0.107528 0.11009 >>>>>> 1.18171 0.00886356 -3.13222e-14 -1.06248 -0.175069 0.158254 0.00657075 >>>>>> -0.03 0. 0.152747 -0.00526446 0.0344031 -1.50368 -0.0983732 0.0825675 >>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>> [0] 0.264666 -0.171351 -0.0207062 0.141058 -0.254263 0.0452109 >>>>>> 0.0332522 -0.592598 -0.0284819 0.0265907 1.20415 -0.0932626 -0.165724 >>>>>> 0.149643 0.0524184 0.00886356 0.360043 0.0419805 -0.131422 -0.326529 >>>>>> 0.0132913 -0.02 0.0229976 -0.00641026 -0.0152645 0.0405681 -0.00489246 >>>>>> -0.14202 -0.432665 0.000852265 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>> 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >>>>>> [0] -0.0275225 -0.00845384 -0.114921 -0.137613 0.0452109 -0.252027 >>>>>> -0.275225 -0.0413024 -0.660001 3.13785e-14 -0.0932626 1.19349 0.165135 >>>>>> 0.0786276 0.288099 -3.12866e-14 0.0163395 0.234796 0.116971 0.0128652 >>>>>> -0.322147 0. -0.00961538 0.00657075 0.0344031 -0.00168733 0.0564359 >>>>>> 0.123851 0.0012784 -0.430298 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>> 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 
-1.04322e-11 >>>>>> [0] -2.39586 -0.206902 0.33027 -2.37741 0.011448 -0.33027 -2.44501 >>>>>> -0.417727 6.38053e-17 1.1468 -0.165724 0.165135 4.88671 0.435454 0. 1.18146 >>>>>> -0.0565489 -0.165135 0.307839 0.0658628 -0.0412838 -0.00807774 0.18 0. >>>>>> 0.303413 0.038569 0.0412838 -0.599871 0.115568 0. 4.17288e-11 0. 0. >>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>> 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>> -1.04322e-11 0. 0. >>>>>> [0] -0.210511 -0.657189 0.0418706 -0.0649437 -0.592598 -0.0418706 >>>>>> -0.417727 -0.829208 6.30468e-18 -0.107528 0.149643 0.0786276 0.435454 >>>>>> 1.64686 0. -0.0347447 0.270936 -0.0786276 0.0613138 0.111396 -0.0100415 >>>>>> 0.12 -0.0282721 0. 0.043118 0.0959058 0.0100415 0.175568 -0.167469 0. 0. >>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. >>>>>> [0] 0.22018 0.0279137 -0.678455 -0.22018 -0.0279137 -0.660001 >>>>>> 4.70408e-17 7.53383e-18 -0.727604 0.11009 0.0524184 0.288099 0. 0. 1.4519 >>>>>> -0.11009 -0.0524184 0.322754 -0.0275225 -0.00669434 0.0931634 0. 0. >>>>>> -0.00807774 0.0275225 0.00669434 0.0887375 0. 0. -0.17052 0. 0. 4.17288e-11 >>>>>> 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. >>>>>> 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>>> 0. -1.04322e-11 >>>>>> [0] -0.621071 0.0500786 0.137613 -0.588378 0.344666 -0.0275225 >>>>>> -2.39586 -0.268706 0.275225 1.18171 0.00886356 -3.12954e-14 1.18146 >>>>>> -0.0347447 -0.11009 3.64748 0.0265907 3.12693e-14 0.152935 0.0174804 >>>>>> -0.0344031 0.00233276 -0.03 0. -1.0575 -0.0704425 -0.158254 -1.50311 >>>>>> -0.0437857 -0.0825675 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. >>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>>> [0] 0.0500786 -0.197561 -0.0160508 0.24647 -0.171351 -0.00845384 >>>>>> -0.148706 -0.657189 0.0284819 0.00886356 0.360043 0.0163395 -0.0565489 >>>>>> 0.270936 -0.0524184 0.0265907 1.08549 0.0349425 0.00748035 0.0412255 >>>>>> -0.00239755 -0.02 0.00816465 0.00641026 -0.0540894 -0.309066 -0.00600133 >>>>>> -0.0601388 -0.430693 -0.000852265 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. >>>>>> [0] 0.137613 -0.0160508 -0.235826 0.0275225 -0.0207062 -0.114921 >>>>>> 0.275225 0.0413024 -0.678455 -3.13299e-14 0.0419805 0.234796 -0.165135 >>>>>> -0.0786276 0.322754 3.12753e-14 0.0349425 1.15959 -0.0344031 -0.00560268 >>>>>> 0.0566238 0. 0.00961538 0.00233276 -0.116971 -0.00557519 -0.317157 >>>>>> -0.123851 -0.0012784 -0.429734 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 >>>>>> [0] -0.180869 -0.12748 0.0344031 -0.140937 0.00616654 -0.00688063 >>>>>> 0.283843 0.03402 0.055045 -1.06248 -0.131422 0.116971 0.307839 0.0613138 >>>>>> -0.0275225 0.152935 0.00748035 -0.0344031 0.479756 0.112441 -0.103209 >>>>>> 0.00698363 0.03 0. -0.14792 -0.0238335 -0.00688063 0.300855 0.0313138 >>>>>> -0.0275225 -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. 
-1.56483e-11 0. 0. 1.56483e-11 0. 0. >>>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>>> [0] -0.0974804 -0.138996 0.0312437 -0.00838243 -0.0212853 -0.0117288 >>>>>> 0.0476669 0.0715157 0.0279687 -0.175069 -0.326529 0.0128652 0.0658628 >>>>>> 0.111396 -0.00669434 0.0174804 0.0412255 -0.00560268 0.112441 0.197005 >>>>>> -0.0360388 0.02 0.0244427 -0.00641026 -0.0283824 -0.045728 -0.00531859 >>>>>> 0.0458628 0.0869535 -0.000284088 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >>>>>> 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. >>>>>> [0] 0.0344031 0.0408591 -0.0845583 0.00688063 -0.0115868 -0.0225723 >>>>>> 0.0137613 0.0179272 0.0250605 0.158254 0.0132913 -0.322147 -0.0412838 >>>>>> -0.0100415 0.0931634 -0.0344031 -0.00239755 0.0566238 -0.103209 -0.0360388 >>>>>> 0.190822 0. -0.00961538 0.00698363 0.00688063 -0.00197142 -0.029556 >>>>>> -0.0412838 -0.000426133 0.0861797 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>>> 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. >>>>>> 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>>>> [0] -0.0302673 -0.06 0. -0.0175533 -0.06 0. 0.00972927 0.04 0. >>>>>> 0.00657075 -0.02 0. -0.00807774 0.12 0. 0.00233276 -0.02 0. 0.00698363 0.02 >>>>>> 0. 0.0279492 0. 0. 0.00274564 0.02 0. -0.000412882 -0.04 0. -1.04322e-11 0. >>>>>> 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >>>>>> 0. -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>>> 2.60805e-12 0. 0. >>>>>> [0] -0.09 -0.105935 0.0288462 -0.09 -0.0614365 -0.0288462 0.06 >>>>>> 0.0340524 3.0201e-18 -0.03 0.0229976 -0.00961538 0.18 -0.0282721 0. -0.03 >>>>>> 0.00816465 0.00961538 0.03 0.0244427 -0.00961538 0. 0.097822 0. 0.03 >>>>>> 0.00960973 0.00961538 -0.06 -0.00144509 0. 0. -1.04322e-11 0. 0. >>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>> -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>>> 2.60805e-12 0. >>>>>> [0] 0. 0.0192308 -0.0302673 0. -0.0192308 -0.0175533 0. 1.8315e-18 >>>>>> 0.00972927 0. -0.00641026 0.00657075 0. 0. -0.00807774 0. 0.00641026 >>>>>> 0.00233276 0. -0.00641026 0.00698363 0. 0. 0.0279492 0. 0.00641026 >>>>>> 0.00274564 0. 0. -0.000412882 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>> 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>>>> [0] -0.145175 0.00161757 0.00688063 -0.16373 -0.104736 -0.0344031 >>>>>> 0.288268 0.0704117 -0.055045 0.152747 -0.0152645 0.0344031 0.303413 >>>>>> 0.043118 0.0275225 -1.0575 -0.0540894 -0.116971 -0.14792 -0.0283824 >>>>>> 0.00688063 0.00274564 0.03 0. 0.466478 0.0442066 0.103209 0.300667 0.013118 >>>>>> 0.0275225 -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. >>>>>> 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>>> [0] -0.00383346 -0.0361182 0.00443884 -0.0747355 -0.0790071 >>>>>> -0.0239537 0.0567649 0.0870061 0.00119132 -0.00526446 0.0405681 -0.00168733 >>>>>> 0.038569 0.0959058 0.00669434 -0.0704425 -0.309066 -0.00557519 -0.0238335 >>>>>> -0.045728 -0.00197142 0.02 0.00960973 0.00641026 0.0442066 0.150534 >>>>>> 0.0141688 0.018569 0.0862961 0.000284088 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>> 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 
-1.04322e-11 >>>>>> 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. >>>>>> [0] -0.00688063 0.0042968 -0.0268103 -0.0344031 -0.0335691 -0.0674185 >>>>>> -0.0137613 0.0112328 0.0294863 0.0344031 -0.00489246 0.0564359 0.0412838 >>>>>> 0.0100415 0.0887375 -0.158254 -0.00600133 -0.317157 -0.00688063 -0.00531859 >>>>>> -0.029556 0. 0.00961538 0.00274564 0.103209 0.0141688 0.177545 0.0412838 >>>>>> 0.000426133 0.0859919 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 >>>>>> [0] 0.300442 -0.0141372 -0.0412838 0.300255 -0.041431 0.0412838 >>>>>> 0.601523 0.0644318 -1.72388e-17 -1.50368 -0.14202 0.123851 -0.599871 >>>>>> 0.175568 0. -1.50311 -0.0601388 -0.123851 0.300855 0.0458628 -0.0412838 >>>>>> -0.000412882 -0.06 0. 0.300667 0.018569 0.0412838 1.80333 0.0132953 0. >>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>>> 2.60805e-12 0. 0. 1.56483e-11 0. 0. >>>>>> [0] -0.00868618 0.0855084 -0.000426133 -0.026882 0.084851 0.000426133 >>>>>> 0.0444318 0.17325 -1.17738e-19 -0.0983732 -0.432665 0.0012784 0.115568 >>>>>> -0.167469 0. -0.0437857 -0.430693 -0.0012784 0.0313138 0.0869535 >>>>>> -0.000426133 -0.04 -0.00144509 0. 0.013118 0.0862961 0.000426133 0.0132953 >>>>>> 0.515413 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. >>>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. >>>>>> [0] -0.0275225 -0.000284088 0.0857668 0.0275225 0.000284088 0.085579 >>>>>> -1.41488e-17 -8.91502e-20 0.172172 0.0825675 0.000852265 -0.430298 0. 0. >>>>>> -0.17052 -0.0825675 -0.000852265 -0.429734 -0.0275225 -0.000284088 >>>>>> 0.0861797 0. 0. -0.000412882 0.0275225 0.000284088 0.0859919 0. 0. 0.515276 >>>>>> 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>> 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>>>> 0. 0. 2.60805e-12 0. 0. 1.56483e-11 >>>>>> [0] -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. >>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. >>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>> 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>> -9.96708e-10 0. 0. >>>>>> [0] 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. >>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. >>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>> 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>> -9.96708e-10 0. >>>>>> [0] 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. >>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. >>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>> 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>> -9.96708e-10 >>>>>> [0] -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. >>>>>> -2.65789e-06 0. 0. 
-5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>> 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. >>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>> -9.96708e-10 0. 0. >>>>>> [0] 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. >>>>>> -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>> 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. >>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>> -9.96708e-10 0. >>>>>> [0] 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. >>>>>> -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>> 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. >>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>> -9.96708e-10 >>>>>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. >>>>>> -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>> 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. >>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>> -9.96708e-10 0. 0. >>>>>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. >>>>>> -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>> 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. >>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>> -9.96708e-10 0. >>>>>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. >>>>>> -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>> 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. >>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>> -9.96708e-10 >>>>>> [0] -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. >>>>>> -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>> 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. >>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>> -6.64472e-10 0. 0. >>>>>> [0] 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. >>>>>> -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>> 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. >>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>> -6.64472e-10 0. >>>>>> [0] 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. >>>>>> -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>> 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 
>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>> -6.64472e-10 >>>>>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. >>>>>> -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>> 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. >>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>> -6.64472e-10 0. 0. >>>>>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. >>>>>> -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>> 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. >>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>> -6.64472e-10 0. >>>>>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. >>>>>> -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>> 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. >>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>> -6.64472e-10 >>>>>> [0] -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. >>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. >>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>> 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>> -6.64472e-10 0. 0. >>>>>> [0] 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. >>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. >>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>> 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>> -6.64472e-10 0. >>>>>> [0] 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. >>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. >>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>> 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>> -6.64472e-10 >>>>>> [0] 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 >>>>>> 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. -3.32236e-07 >>>>>> 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -9.96708e-10 >>>>>> 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 >>>>>> 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. >>>>>> 0. >>>>>> [0] 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. >>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. >>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>> -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>> -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>> 1.66118e-10 0. >>>>>> [0] 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. >>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 
1.99342e-06 0. 0. -1.99342e-06 0. 0. >>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>> -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>> -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>> 1.66118e-10 >>>>>> [0] 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 >>>>>> 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. -1.99342e-06 >>>>>> 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -6.64472e-10 >>>>>> 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 >>>>>> 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. >>>>>> 0. >>>>>> [0] 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. >>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. >>>>>> -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>> -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>> -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>>>>> 1.66118e-10 0. >>>>>> [0] 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. >>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. >>>>>> -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>> -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>> -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>>>>> 1.66118e-10 >>>>>> [0] 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 >>>>>> 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 >>>>>> 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. -6.64472e-10 >>>>>> 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 >>>>>> 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. >>>>>> 0. >>>>>> [0] 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. >>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>> -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. >>>>>> -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. >>>>>> 1.66118e-10 0. >>>>>> [0] 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. >>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>> -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. >>>>>> -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. >>>>>> 1.66118e-10 >>>>>> [0] 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 >>>>>> 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 >>>>>> 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. -9.96708e-10 >>>>>> 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 >>>>>> 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. >>>>>> 0. >>>>>> [0] 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. >>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. >>>>>> -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>> -6.64472e-10 0. 0. 
1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>> 9.96708e-10 0. >>>>>> [0] 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. >>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. >>>>>> -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>> 9.96708e-10 >>>>>> [0]PETSC ERROR: #3 DMPlexMatSetClosure() line 5480 in >>>>>> /home/hartig/petsc/src/dm/impls/plex/plex.c >>>>>> [0]PETSC ERROR: #4 DMPlexComputeJacobian_Internal() line 2301 in >>>>>> /home/hartig/petsc/src/snes/utils/dmplexsnes.c >>>>>> [0]PETSC ERROR: #5 DMPlexTSComputeIJacobianFEM() line 233 in >>>>>> /home/hartig/petsc/src/ts/utils/dmplexts.c >>>>>> [0]PETSC ERROR: #6 TSComputeIJacobian_DMLocal() line 131 in >>>>>> /home/hartig/petsc/src/ts/utils/dmlocalts.c >>>>>> [0]PETSC ERROR: #7 TSComputeIJacobian() line 882 in >>>>>> /home/hartig/petsc/src/ts/interface/ts.c >>>>>> [0]PETSC ERROR: #8 SNESTSFormJacobian_Theta() line 515 in >>>>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>>> [0]PETSC ERROR: #9 SNESTSFormJacobian() line 5044 in >>>>>> /home/hartig/petsc/src/ts/interface/ts.c >>>>>> [0]PETSC ERROR: #10 SNESComputeJacobian() line 2276 in >>>>>> /home/hartig/petsc/src/snes/interface/snes.c >>>>>> [0]PETSC ERROR: #11 SNESSolve_NEWTONLS() line 222 in >>>>>> /home/hartig/petsc/src/snes/impls/ls/ls.c >>>>>> [0]PETSC ERROR: #12 SNESSolve() line 3967 in >>>>>> /home/hartig/petsc/src/snes/interface/snes.c >>>>>> [0]PETSC ERROR: #13 TS_SNESSolve() line 171 in >>>>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>>> [0]PETSC ERROR: #14 TSStep_Theta() line 211 in >>>>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>>> [0]PETSC ERROR: #15 TSStep() line 3809 in >>>>>> /home/hartig/petsc/src/ts/interface/ts.c >>>>>> >>>>>> >>>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> >>> >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From imilian.hartig at gmail.com Tue Mar 7 11:11:49 2017 From: imilian.hartig at gmail.com (Maximilian Hartig) Date: Tue, 7 Mar 2017 18:11:49 +0100 Subject: [petsc-users] Problems imposing boundary conditions In-Reply-To: References: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com> <69E95D27-72CD-4BFA-AC62-907727998DCB@gmail.com> Message-ID: <3383F285-34C3-48A2-BF7C-56ED80FDF236@gmail.com> > On 7 Mar 2017, at 16:29, Matthew Knepley wrote: > > On Tue, Mar 7, 2017 at 3:28 AM, Maximilian Hartig > wrote: > It seems you are correct. In theory, the problem should not be over constrained. It is 1/4 of a simple hollow cylinder geometry with rotational symmetry around the z-axis. I restrict movement completely on the upper and lower (z) end as well as movement in x- and y- direction respectively on the symmetry planes. 
> I am not completely sure what I am looking at with the output of
> -dm_petscsection_view. But these lines struck me as odd:
>
>
> (5167) dim 3 offset 0 constrained 0 1 1 2
> (5168) dim 3 offset 6 constrained 0 1 1 2
> .
> .
> .
> (5262) dim 3 offset 0 constrained 0 0 1 2
> (5263) dim 3 offset 6 constrained 0 0 1 2
>
>
> It seems that vertices that are part of the closures of both Face Sets get
> restricted twice in their respective degree of freedom.
>
> Yes, that is exactly what happens.
>
> This does however also happen when restricting movement in x-direction
> only for upper and lower faces. In that case without the solver producing
> an error:
> (20770) dim 3 offset 24 constrained 0 0
> (20771) dim 3 offset 30 constrained 0 0
> (20772) dim 3 offset 36 constrained 0 0
> (20773) dim 3 offset 42 constrained 0 0
>
> The fact that this does not SEGV is just luck.
>
> Now, I did not put in any guard against this because I was not sure what
> should happen. We could error if a local index is repeated, or
> we could ignore it. This seems unsafe if you try to constrain it with two
> different values, but there is no way for me to tell if the values are
> compatible. Thus I just believed whatever the user told me.
>
> What is the intention here? It would be straightforward to ignore
> duplicates I guess.

Yes, ignoring duplicates would solve the problem then. I can think of no
example where imposing two different Dirichlet BC on the same DOF of the
same vertex would make sense (I might be wrong of course). That means the
only issue is to determine whether the first or the second BC is the correct
one to be imposed.
I don't know how I could filter out the vertices in question from the
Label. I use GMSH to construct my meshes and could create a label for the
edges without too much effort. But I cannot see an easy way to exclude them
when imposing the BC.
I tried to figure out where PETSc actually imposes the BC but got lost a
bit in the source. Could you kindly point me towards the location?

Thanks,
Max

>> On 6 Mar 2017, at 14:43, Matthew Knepley wrote:
>>
>> On Mon, Mar 6, 2017 at 8:38 AM, Maximilian Hartig wrote:
>> Of course, please find the source as well as the mesh attached below. I run with:
>>
>> -def_petscspace_order 2 -vel_petscspace_order 2 -snes_monitor -snes_converged_reason -ksp_converged_reason -ksp_monitor_true_residual -ksp_type fgmres -pc_type sor
>>
>> This sounds like over-constraining a point to me. I will try and run it
>> soon, but I have a full schedule this week. The easiest
>> way to see if this is happening should be to print out the Section that
>> gets made
>>
>> -dm_petscsection_view
>>
>> Thanks,
>>
>> Matt
>>
>> Thanks,
>> Max
>>
>>
>>
>>
>>> On 4 Mar 2017, at 11:34, Sander Arens wrote:
>>>
>>> Hmm, strange you also get the error in serial. Can you maybe send a
>>> minimal working example which demonstrates the error?
>>>
>>> Thanks,
>>> Sander
>>>
>>> On 3 March 2017 at 23:07, Maximilian Hartig wrote:
>>> Yes Sander, your assessment is correct. I use DMPlex and specify the BC
>>> using DMLabel. I do however get this error also when running in serial.
>>>
>>> Thanks,
>>> Max
>>>
>>>> On 3 Mar 2017, at 22:14, Matthew Knepley wrote:
>>>>
>>>> On Fri, Mar 3, 2017 at 12:56 PM, Sander Arens wrote:
>>>> Max,
>>>>
>>>> I'm assuming you use DMPlex for your mesh? If so, did you only specify
>>>> the faces in the DMLabel (and not vertices or edges). Do you get this error
>>>> only in parallel?
>>>>
>>>> If so, I can confirm this bug.
I submitted a pull request for this yesterday. >>>> >>>> Yep, I saw Sander's pull request. I will get in merged in tomorrow when I get home to Houston. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> On 3 March 2017 at 18:43, Lukas van de Wiel > wrote: >>>> You have apparently preallocated the non-zeroes of you matrix, and the room was insufficient to accommodate all your equations. >>>> >>>> What happened after you tried: >>>> >>>> MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) >>>> >>>> >>>> Cheers >>>> Lukas >>>> >>>> On Fri, Mar 3, 2017 at 6:37 PM, Maximilian Hartig > wrote: >>>> Hello, >>>> >>>> I am working on a transient structural FEM code with PETSc. I managed to create a slow but functioning program with the use of petscFE and a TS solver. The code runs fine until I try to restrict movement in all three spatial directions for one face. I then get the error which is attached below. >>>> So apparently DMPlexMatSetClosure tries to write/read beyond what was priorly allocated. I do however not call MatSeqAIJSetPreallocation myself in the code. So I?m unsure where to start looking for the bug. In my understanding, PETSc should know from the DM how much space to allocate. >>>> Could you kindly give me a hint? >>>> >>>> Thanks, >>>> >>>> Max >>>> >>>> 0 SNES Function norm 2.508668036663e-06 >>>> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >>>> [0]PETSC ERROR: Argument out of range >>>> [0]PETSC ERROR: New nonzero at (41754,5) caused a malloc >>>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn off this check >>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. >>>> [0]PETSC ERROR: Petsc Development GIT revision: v3.7.5-3223-g99077fc GIT Date: 2017-02-28 13:41:43 -0600 >>>> [0]PETSC ERROR: ./S3 on a arch-linux-gnu-intel named XXXXXXX by hartig Fri Mar 3 17:55:57 2017 >>>> [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-gnu-intel --with-cc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicc --with-cxx=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicpc --with-fc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiifort --download-ml >>>> [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 455 in /home/hartig/petsc/src/mat/impls/aij/seq/aij.c >>>> [0]PETSC ERROR: #2 MatSetValues() line 1270 in /home/hartig/petsc/src/mat/interface/matrix.c >>>> [0]PETSC ERROR: [0]ERROR in DMPlexMatSetClosure >>>> [0]mat for sieve point 60 >>>> [0]mat row indices[0] = 41754 >>>> [0]mat row indices[1] = 41755 >>>> [0]mat row indices[2] = 41756 >>>> [0]mat row indices[3] = 41760 >>>> [0]mat row indices[4] = 41761 >>>> [0]mat row indices[5] = 41762 >>>> [0]mat row indices[6] = 41766 >>>> [0]mat row indices[7] = -41768 >>>> [0]mat row indices[8] = 41767 >>>> [0]mat row indices[9] = 41771 >>>> [0]mat row indices[10] = -41773 >>>> [0]mat row indices[11] = 41772 >>>> [0]mat row indices[12] = 41776 >>>> [0]mat row indices[13] = 41777 >>>> [0]mat row indices[14] = 41778 >>>> [0]mat row indices[15] = 41782 >>>> [0]mat row indices[16] = -41784 >>>> [0]mat row indices[17] = 41783 >>>> [0]mat row indices[18] = 261 >>>> [0]mat row indices[19] = -263 >>>> [0]mat row indices[20] = 262 >>>> [0]mat row indices[21] = 24318 >>>> [0]mat row indices[22] = 24319 >>>> [0]mat row indices[23] = 24320 >>>> [0]mat row indices[24] = -7 >>>> [0]mat row indices[25] = -8 >>>> [0]mat row indices[26] = 6 >>>> [0]mat row indices[27] = 1630 >>>> [0]mat row 
indices[28] = -1632 >>>> [0]mat row indices[29] = 1631 >>>> [0]mat row indices[30] = 41757 >>>> [0]mat row indices[31] = 41758 >>>> [0]mat row indices[32] = 41759 >>>> [0]mat row indices[33] = 41763 >>>> [0]mat row indices[34] = 41764 >>>> [0]mat row indices[35] = 41765 >>>> [0]mat row indices[36] = 41768 >>>> [0]mat row indices[37] = 41769 >>>> [0]mat row indices[38] = 41770 >>>> [0]mat row indices[39] = 41773 >>>> [0]mat row indices[40] = 41774 >>>> [0]mat row indices[41] = 41775 >>>> [0]mat row indices[42] = 41779 >>>> [0]mat row indices[43] = 41780 >>>> [0]mat row indices[44] = 41781 >>>> [0]mat row indices[45] = 41784 >>>> [0]mat row indices[46] = 41785 >>>> [0]mat row indices[47] = 41786 >>>> [0]mat row indices[48] = 263 >>>> [0]mat row indices[49] = 264 >>>> [0]mat row indices[50] = 265 >>>> [0]mat row indices[51] = 24321 >>>> [0]mat row indices[52] = 24322 >>>> [0]mat row indices[53] = 24323 >>>> [0]mat row indices[54] = 5 >>>> [0]mat row indices[55] = 6 >>>> [0]mat row indices[56] = 7 >>>> [0]mat row indices[57] = 1632 >>>> [0]mat row indices[58] = 1633 >>>> [0]mat row indices[59] = 1634 >>>> [0] 1.29801 0.0998428 -0.275225 1.18171 -0.0093323 0.055045 1.18146 0.00525527 -0.11009 -0.588378 0.264666 -0.0275225 -2.39586 -0.210511 0.22018 -0.621071 0.0500786 0.137613 -0.180869 -0.0974804 0.0344031 -0.0302673 -0.09 0. -0.145175 -0.00383346 -0.00688063 0.300442 -0.00868618 -0.0275225 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >>>> [0] 0.0998428 0.590663 -0.0320009 0.0270594 0.360043 0.0297282 -0.0965489 0.270936 -0.0652389 0.32647 -0.171351 -0.00845384 -0.206902 -0.657189 0.0279137 0.0500786 -0.197561 -0.0160508 -0.12748 -0.138996 0.0408591 -0.06 -0.105935 0.0192308 0.00161757 -0.0361182 0.0042968 -0.0141372 0.0855084 -0.000284088 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. >>>> [0] -0.275225 -0.0320009 0.527521 -0.055045 0.0285918 0.234796 -0.165135 -0.0658071 0.322754 0.0275225 -0.0207062 -0.114921 0.33027 0.0418706 -0.678455 0.137613 -0.0160508 -0.235826 0.0344031 0.0312437 -0.0845583 0. 0.0288462 -0.0302673 0.00688063 0.00443884 -0.0268103 -0.0412838 -0.000426133 0.0857668 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 >>>> [0] 1.18171 0.0270594 -0.055045 1.29651 -0.0821157 0.275225 1.1468 -0.0675282 0.11009 -0.637271 0.141058 -0.137613 -2.37741 -0.0649437 -0.22018 -0.588378 0.24647 0.0275225 -0.140937 -0.00838243 0.00688063 -0.0175533 -0.09 0. -0.16373 -0.0747355 -0.0344031 0.300255 -0.026882 0.0275225 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>> [0] -0.0093323 0.360043 0.0285918 -0.0821157 0.585404 -0.0263191 -0.205724 0.149643 0.0652389 0.141058 -0.254263 0.0452109 0.011448 -0.592598 -0.0279137 0.344666 -0.171351 -0.0207062 0.00616654 -0.0212853 -0.0115868 -0.06 -0.0614365 -0.0192308 -0.104736 -0.0790071 -0.0335691 -0.041431 0.084851 0.000284088 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 
>>>> [0] 0.055045 0.0297282 0.234796 0.275225 -0.0263191 0.526019 0.165135 0.0658071 0.288099 -0.137613 0.0452109 -0.252027 -0.33027 -0.0418706 -0.660001 -0.0275225 -0.00845384 -0.114921 -0.00688063 -0.0117288 -0.0225723 0. -0.0288462 -0.0175533 -0.0344031 -0.0239537 -0.0674185 0.0412838 0.000426133 0.085579 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 >>>> [0] 1.18146 -0.0965489 -0.165135 1.1468 -0.205724 0.165135 3.70665 0.626591 3.1198e-14 -2.37741 0.0332522 -0.275225 -2.44501 -0.417727 4.66207e-17 -2.39586 -0.148706 0.275225 0.283843 0.0476669 0.0137613 0.00972927 0.06 0. 0.288268 0.0567649 -0.0137613 0.601523 0.0444318 -1.2387e-17 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>> [0] 0.00525527 0.270936 -0.0658071 -0.0675282 0.149643 0.0658071 0.626591 1.29259 -0.02916 -0.0867478 -0.592598 -0.0413024 -0.417727 -0.829208 6.46318e-18 -0.268706 -0.657189 0.0413024 0.03402 0.0715157 0.0179272 0.04 0.0340524 1.77708e-18 0.0704117 0.0870061 0.0112328 0.0644318 0.17325 -1.41666e-19 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>> [0] -0.11009 -0.0652389 0.322754 0.11009 0.0652389 0.288099 3.12405e-14 -0.02916 1.21876 -0.275225 -0.0284819 -0.660001 9.50032e-17 9.55728e-18 -0.727604 0.275225 0.0284819 -0.678455 0.055045 0.0279687 0.0250605 0. 1.71863e-18 0.00972927 -0.055045 0.00119132 0.0294863 -1.47451e-17 -1.90582e-19 0.172172 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 >>>> [0] -0.588378 0.32647 0.0275225 -0.637271 0.141058 -0.137613 -2.37741 -0.0867478 -0.275225 3.68138 0.0265907 3.13395e-14 1.1468 -0.107528 0.11009 1.18171 0.00886356 -3.13222e-14 -1.06248 -0.175069 0.158254 0.00657075 -0.03 0. 0.152747 -0.00526446 0.0344031 -1.50368 -0.0983732 0.0825675 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>> [0] 0.264666 -0.171351 -0.0207062 0.141058 -0.254263 0.0452109 0.0332522 -0.592598 -0.0284819 0.0265907 1.20415 -0.0932626 -0.165724 0.149643 0.0524184 0.00886356 0.360043 0.0419805 -0.131422 -0.326529 0.0132913 -0.02 0.0229976 -0.00641026 -0.0152645 0.0405681 -0.00489246 -0.14202 -0.432665 0.000852265 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >>>> [0] -0.0275225 -0.00845384 -0.114921 -0.137613 0.0452109 -0.252027 -0.275225 -0.0413024 -0.660001 3.13785e-14 -0.0932626 1.19349 0.165135 0.0786276 0.288099 -3.12866e-14 0.0163395 0.234796 0.116971 0.0128652 -0.322147 0. -0.00961538 0.00657075 0.0344031 -0.00168733 0.0564359 0.123851 0.0012784 -0.430298 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>> [0] -2.39586 -0.206902 0.33027 -2.37741 0.011448 -0.33027 -2.44501 -0.417727 6.38053e-17 1.1468 -0.165724 0.165135 4.88671 0.435454 0. 
1.18146 -0.0565489 -0.165135 0.307839 0.0658628 -0.0412838 -0.00807774 0.18 0. 0.303413 0.038569 0.0412838 -0.599871 0.115568 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>> [0] -0.210511 -0.657189 0.0418706 -0.0649437 -0.592598 -0.0418706 -0.417727 -0.829208 6.30468e-18 -0.107528 0.149643 0.0786276 0.435454 1.64686 0. -0.0347447 0.270936 -0.0786276 0.0613138 0.111396 -0.0100415 0.12 -0.0282721 0. 0.043118 0.0959058 0.0100415 0.175568 -0.167469 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >>>> [0] 0.22018 0.0279137 -0.678455 -0.22018 -0.0279137 -0.660001 4.70408e-17 7.53383e-18 -0.727604 0.11009 0.0524184 0.288099 0. 0. 1.4519 -0.11009 -0.0524184 0.322754 -0.0275225 -0.00669434 0.0931634 0. 0. -0.00807774 0.0275225 0.00669434 0.0887375 0. 0. -0.17052 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>> [0] -0.621071 0.0500786 0.137613 -0.588378 0.344666 -0.0275225 -2.39586 -0.268706 0.275225 1.18171 0.00886356 -3.12954e-14 1.18146 -0.0347447 -0.11009 3.64748 0.0265907 3.12693e-14 0.152935 0.0174804 -0.0344031 0.00233276 -0.03 0. -1.0575 -0.0704425 -0.158254 -1.50311 -0.0437857 -0.0825675 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>> [0] 0.0500786 -0.197561 -0.0160508 0.24647 -0.171351 -0.00845384 -0.148706 -0.657189 0.0284819 0.00886356 0.360043 0.0163395 -0.0565489 0.270936 -0.0524184 0.0265907 1.08549 0.0349425 0.00748035 0.0412255 -0.00239755 -0.02 0.00816465 0.00641026 -0.0540894 -0.309066 -0.00600133 -0.0601388 -0.430693 -0.000852265 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. >>>> [0] 0.137613 -0.0160508 -0.235826 0.0275225 -0.0207062 -0.114921 0.275225 0.0413024 -0.678455 -3.13299e-14 0.0419805 0.234796 -0.165135 -0.0786276 0.322754 3.12753e-14 0.0349425 1.15959 -0.0344031 -0.00560268 0.0566238 0. 0.00961538 0.00233276 -0.116971 -0.00557519 -0.317157 -0.123851 -0.0012784 -0.429734 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 >>>> [0] -0.180869 -0.12748 0.0344031 -0.140937 0.00616654 -0.00688063 0.283843 0.03402 0.055045 -1.06248 -0.131422 0.116971 0.307839 0.0613138 -0.0275225 0.152935 0.00748035 -0.0344031 0.479756 0.112441 -0.103209 0.00698363 0.03 0. -0.14792 -0.0238335 -0.00688063 0.300855 0.0313138 -0.0275225 -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>> [0] -0.0974804 -0.138996 0.0312437 -0.00838243 -0.0212853 -0.0117288 0.0476669 0.0715157 0.0279687 -0.175069 -0.326529 0.0128652 0.0658628 0.111396 -0.00669434 0.0174804 0.0412255 -0.00560268 0.112441 0.197005 -0.0360388 0.02 0.0244427 -0.00641026 -0.0283824 -0.045728 -0.00531859 0.0458628 0.0869535 -0.000284088 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. 
-1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. >>>> [0] 0.0344031 0.0408591 -0.0845583 0.00688063 -0.0115868 -0.0225723 0.0137613 0.0179272 0.0250605 0.158254 0.0132913 -0.322147 -0.0412838 -0.0100415 0.0931634 -0.0344031 -0.00239755 0.0566238 -0.103209 -0.0360388 0.190822 0. -0.00961538 0.00698363 0.00688063 -0.00197142 -0.029556 -0.0412838 -0.000426133 0.0861797 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>> [0] -0.0302673 -0.06 0. -0.0175533 -0.06 0. 0.00972927 0.04 0. 0.00657075 -0.02 0. -0.00807774 0.12 0. 0.00233276 -0.02 0. 0.00698363 0.02 0. 0.0279492 0. 0. 0.00274564 0.02 0. -0.000412882 -0.04 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>> [0] -0.09 -0.105935 0.0288462 -0.09 -0.0614365 -0.0288462 0.06 0.0340524 3.0201e-18 -0.03 0.0229976 -0.00961538 0.18 -0.0282721 0. -0.03 0.00816465 0.00961538 0.03 0.0244427 -0.00961538 0. 0.097822 0. 0.03 0.00960973 0.00961538 -0.06 -0.00144509 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. >>>> [0] 0. 0.0192308 -0.0302673 0. -0.0192308 -0.0175533 0. 1.8315e-18 0.00972927 0. -0.00641026 0.00657075 0. 0. -0.00807774 0. 0.00641026 0.00233276 0. -0.00641026 0.00698363 0. 0. 0.0279492 0. 0.00641026 0.00274564 0. 0. -0.000412882 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>> [0] -0.145175 0.00161757 0.00688063 -0.16373 -0.104736 -0.0344031 0.288268 0.0704117 -0.055045 0.152747 -0.0152645 0.0344031 0.303413 0.043118 0.0275225 -1.0575 -0.0540894 -0.116971 -0.14792 -0.0283824 0.00688063 0.00274564 0.03 0. 0.466478 0.0442066 0.103209 0.300667 0.013118 0.0275225 -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>> [0] -0.00383346 -0.0361182 0.00443884 -0.0747355 -0.0790071 -0.0239537 0.0567649 0.0870061 0.00119132 -0.00526446 0.0405681 -0.00168733 0.038569 0.0959058 0.00669434 -0.0704425 -0.309066 -0.00557519 -0.0238335 -0.045728 -0.00197142 0.02 0.00960973 0.00641026 0.0442066 0.150534 0.0141688 0.018569 0.0862961 0.000284088 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. >>>> [0] -0.00688063 0.0042968 -0.0268103 -0.0344031 -0.0335691 -0.0674185 -0.0137613 0.0112328 0.0294863 0.0344031 -0.00489246 0.0564359 0.0412838 0.0100415 0.0887375 -0.158254 -0.00600133 -0.317157 -0.00688063 -0.00531859 -0.029556 0. 0.00961538 0.00274564 0.103209 0.0141688 0.177545 0.0412838 0.000426133 0.0859919 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 >>>> [0] 0.300442 -0.0141372 -0.0412838 0.300255 -0.041431 0.0412838 0.601523 0.0644318 -1.72388e-17 -1.50368 -0.14202 0.123851 -0.599871 0.175568 0. 
-1.50311 -0.0601388 -0.123851 0.300855 0.0458628 -0.0412838 -0.000412882 -0.06 0. 0.300667 0.018569 0.0412838 1.80333 0.0132953 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. >>>> [0] -0.00868618 0.0855084 -0.000426133 -0.026882 0.084851 0.000426133 0.0444318 0.17325 -1.17738e-19 -0.0983732 -0.432665 0.0012784 0.115568 -0.167469 0. -0.0437857 -0.430693 -0.0012784 0.0313138 0.0869535 -0.000426133 -0.04 -0.00144509 0. 0.013118 0.0862961 0.000426133 0.0132953 0.515413 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. >>>> [0] -0.0275225 -0.000284088 0.0857668 0.0275225 0.000284088 0.085579 -1.41488e-17 -8.91502e-20 0.172172 0.0825675 0.000852265 -0.430298 0. 0. -0.17052 -0.0825675 -0.000852265 -0.429734 -0.0275225 -0.000284088 0.0861797 0. 0. -0.000412882 0.0275225 0.000284088 0.0859919 0. 0. 0.515276 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 >>>> [0] -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>> [0] 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. >>>> [0] 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 >>>> [0] -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>> [0] 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. >>>> [0] 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. 
-9.96708e-10 >>>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. >>>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 >>>> [0] -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>> [0] 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. >>>> [0] 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 >>>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. >>>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 >>>> [0] -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 
1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>> [0] 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. >>>> [0] 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 >>>> [0] 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>> [0] 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. >>>> [0] 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 >>>> [0] 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>> [0] 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. >>>> [0] 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 >>>> [0] 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 
0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>>> [0] 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. >>>> [0] 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 >>>> [0] 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. >>>> [0] 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. >>>> [0] 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 
9.96708e-10 >>>> [0]PETSC ERROR: #3 DMPlexMatSetClosure() line 5480 in /home/hartig/petsc/src/dm/impls/plex/plex.c >>>> [0]PETSC ERROR: #4 DMPlexComputeJacobian_Internal() line 2301 in /home/hartig/petsc/src/snes/utils/dmplexsnes.c >>>> [0]PETSC ERROR: #5 DMPlexTSComputeIJacobianFEM() line 233 in /home/hartig/petsc/src/ts/utils/dmplexts.c >>>> [0]PETSC ERROR: #6 TSComputeIJacobian_DMLocal() line 131 in /home/hartig/petsc/src/ts/utils/dmlocalts.c >>>> [0]PETSC ERROR: #7 TSComputeIJacobian() line 882 in /home/hartig/petsc/src/ts/interface/ts.c >>>> [0]PETSC ERROR: #8 SNESTSFormJacobian_Theta() line 515 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>> [0]PETSC ERROR: #9 SNESTSFormJacobian() line 5044 in /home/hartig/petsc/src/ts/interface/ts.c >>>> [0]PETSC ERROR: #10 SNESComputeJacobian() line 2276 in /home/hartig/petsc/src/snes/interface/snes.c >>>> [0]PETSC ERROR: #11 SNESSolve_NEWTONLS() line 222 in /home/hartig/petsc/src/snes/impls/ls/ls.c >>>> [0]PETSC ERROR: #12 SNESSolve() line 3967 in /home/hartig/petsc/src/snes/interface/snes.c >>>> [0]PETSC ERROR: #13 TS_SNESSolve() line 171 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>> [0]PETSC ERROR: #14 TSStep_Theta() line 211 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>> [0]PETSC ERROR: #15 TSStep() line 3809 in /home/hartig/petsc/src/ts/interface/ts.c >>>> >>>> >>>> >>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>> -- Norbert Wiener >>> >>> >> >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Mar 7 11:21:18 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 7 Mar 2017 11:21:18 -0600 Subject: [petsc-users] Problems imposing boundary conditions In-Reply-To: <3383F285-34C3-48A2-BF7C-56ED80FDF236@gmail.com> References: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com> <69E95D27-72CD-4BFA-AC62-907727998DCB@gmail.com> <3383F285-34C3-48A2-BF7C-56ED80FDF236@gmail.com> Message-ID: On Tue, Mar 7, 2017 at 11:11 AM, Maximilian Hartig wrote: > > On 7 Mar 2017, at 16:29, Matthew Knepley wrote: > > On Tue, Mar 7, 2017 at 3:28 AM, Maximilian Hartig com> wrote: > >> It seems you are correct. In theory, the problem should not be over >> constrained. It is 1/4 of a simple hollow cylinder geometry with rotational >> symmetry around the z-axis. I restrict movement completely on the upper and >> lower (z) end as well as movement in x- and y- direction respectively on >> the symmetry planes. >> I am not completely sure what I am looking at with the output of >> -dm_petscsection_view. But these lines struck me as odd: >> >> >> (5167) dim 3 offset 0 constrained 0 1 1 2 >> (5168) dim 3 offset 6 constrained 0 1 1 2 >> . >> . >> . >> (5262) dim 3 offset 0 constrained 0 0 1 2 >> (5263) dim 3 offset 6 constrained 0 0 1 2 >> >> >> It seems that vertices that are part of the closures of both Face Sets >> get restricted twice in their respective degree of freedom. >> > > Yes, that is exactly what happens. 
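If we go the route of just ignoring the repeat, the filtering would amount to something like the sketch below. This is plain C on an ordinary int array with made-up names, not the actual PetscSection code, just to make the idea concrete:

#include <stdlib.h>

static int CompareInt(const void *a, const void *b)
{
  const int x = *(const int *)a, y = *(const int *)b;
  return (x > y) - (x < y);
}

/* Collapse repeated constrained components for one point,
   e.g. the "constrained 0 1 1 2" case above becomes {0, 1, 2}.
   Returns the deduplicated count. */
static int DedupConstrainedComponents(int *comp, int n)
{
  int i, m = 0;

  if (n <= 1) return n;
  qsort(comp, (size_t) n, sizeof(int), CompareInt);
  for (i = 1; i < n; ++i) {
    if (comp[i] != comp[m]) comp[++m] = comp[i];
  }
  return m + 1;
}

That is only safe as long as both boundary conditions would write the same value into that dof, which presumably is the case here since both faces just restrict the displacement; if the two functions disagreed, silently dropping one of them would hide a genuinely over-determined setup.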
>
>
>> This does however also happen when restricting movement in x-direction
>> only for upper and lower faces. In that case without the solver producing
>> an error:
>> (20770) dim 3 offset 24 constrained 0 0
>> (20771) dim 3 offset 30 constrained 0 0
>> (20772) dim 3 offset 36 constrained 0 0
>> (20773) dim 3 offset 42 constrained 0 0
>>
>
> The fact that this does not SEGV is just luck.
>
> Now, I did not put in any guard against this because I was not sure what
> should happen. We could error if a local index is repeated, or
> we could ignore it. This seems unsafe if you try to constrain it with two
> different values, but there is no way for me to tell if the values are
> compatible. Thus I just believed whatever the user told me.
>
>
> What is the intention here? It would be straightforward to ignore
> duplicates I guess.
>
>
> Yes, ignoring duplicates would solve the problem then. I can think of no
> example where imposing two different Dirichlet BC on the same DOF of the
> same vertex would make sense (I might be wrong of course). That means the
> only issue is to determine whether the first or the second BC is the correct
> one to be imposed.
> I don't know how I could filter out the vertices in question from the
> Label. I use GMSH to construct my meshes and could create a label for the
> edges without too much effort. But I cannot see an easy way to exclude them
> when imposing the BC.
> I tried to figure out where PETSc actually imposes the BC but got lost a
> bit in the source. Could you kindly point me towards the location?
>

It is in stages.

1) You make a structure with AddBoundary() that has a Label and function for boundary values

2) The PetscSection gets created and stores which points have constraints and which components they affect

3) When global Vecs are made, these constraints are left out

4) When local Vecs are made, they are left in

5) DMPlexInsertBoundaryValues() is called on local Vecs, and puts in the values from your functions. This usually happens when you copy the solution values from the global Vec to a local Vec to begin assembly.

  Thanks,

     Matt

> Thanks,
> Max
>
>
>
> Thanks,
>
> Matt
>
>> Thanks,
>> Max
>>
>> On 6 Mar 2017, at 14:43, Matthew Knepley wrote:
>>
>> On Mon, Mar 6, 2017 at 8:38 AM, Maximilian Hartig wrote:
>>
>>> Of course, please find the source as well as the mesh attached below. I
>>> run with:
>>>
>>> -def_petscspace_order 2 -vel_petscspace_order 2 -snes_monitor
>>> -snes_converged_reason -ksp_converged_reason -ksp_monitor_true_residual
>>> -ksp_type fgmres -pc_type sor
>>>
>>
>> This sounds like over-constraining a point to me. I will try and run it
>> soon, but I have a full schedule this week. The easiest
>> way to see if this is happening should be to print out the Section that
>> gets made
>>
>> -dm_petscsection_view
>>
>> Thanks,
>>
>> Matt
>>
>>
>>> Thanks,
>>> Max
>>>
>>>
>>>
>>>
>>> On 4 Mar 2017, at 11:34, Sander Arens wrote:
>>>
>>> Hmm, strange you also get the error in serial. Can you maybe send a
>>> minimal working example which demonstrates the error?
>>>
>>> Thanks,
>>> Sander
>>>
>>> On 3 March 2017 at 23:07, Maximilian Hartig wrote:
>>>
>>>> Yes Sander, your assessment is correct. I use DMPlex and specify the BC
>>>> using DMLabel. I do however get this error also when running in serial.
>>>>
>>>> Thanks,
>>>> Max
>>>>
>>>> On 3 Mar 2017, at 22:14, Matthew Knepley wrote:
>>>>
>>>> On Fri, Mar 3, 2017 at 12:56 PM, Sander Arens wrote:
>>>>
>>>>> Max,
>>>>>
>>>>> I'm assuming you use DMPlex for your mesh?
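For 3)-5) above, a minimal sketch of what that copy-plus-insert step looks like when you drive it yourself. The helper name is made up, and the argument list of DMPlexInsertBoundaryValues() is from memory, so please check the man page for your PETSc version:

#include <petscdmplex.h>

/* Sketch: build a local vector that contains the global solution plus
   the essential boundary values, e.g. before a hand-rolled assembly loop. */
static PetscErrorCode FillLocalWithBC(DM dm, Vec X, PetscReal time, Vec *locX)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = DMGetLocalVector(dm, locX);CHKERRQ(ierr);
  /* 3)/4): the global Vec has the constrained dofs stripped out,
     the local Vec still has slots for them */
  ierr = DMGlobalToLocalBegin(dm, X, INSERT_VALUES, *locX);CHKERRQ(ierr);
  ierr = DMGlobalToLocalEnd(dm, X, INSERT_VALUES, *locX);CHKERRQ(ierr);
  /* 5): evaluate the boundary functions and write them into the
     constrained slots of the local Vec */
  ierr = DMPlexInsertBoundaryValues(dm, PETSC_TRUE, *locX, time, NULL, NULL, NULL);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The caller gives the vector back with DMRestoreLocalVector() when it is done. If I remember correctly, the TSComputeIJacobian_DMLocal()/DMPlexComputeJacobian_Internal() path in your trace already does this for you, so you only need something like the above when you assemble things by hand.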
If so, did you only specify >>>>> the faces in the DMLabel (and not vertices or edges). Do you get this error >>>>> only in parallel? >>>>> >>>>> If so, I can confirm this bug. I submitted a pull request for this >>>>> yesterday. >>>>> >>>> >>>> Yep, I saw Sander's pull request. I will get in merged in tomorrow when >>>> I get home to Houston. >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> On 3 March 2017 at 18:43, Lukas van de Wiel >>>> com> wrote: >>>>> >>>>>> You have apparently preallocated the non-zeroes of you matrix, and >>>>>> the room was insufficient to accommodate all your equations. >>>>>> >>>>>> What happened after you tried: >>>>>> >>>>>> MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) >>>>>> >>>>>> >>>>>> Cheers >>>>>> Lukas >>>>>> >>>>>> On Fri, Mar 3, 2017 at 6:37 PM, Maximilian Hartig < >>>>>> imilian.hartig at gmail.com> wrote: >>>>>> >>>>>>> Hello, >>>>>>> >>>>>>> I am working on a transient structural FEM code with PETSc. I >>>>>>> managed to create a slow but functioning program with the use of petscFE >>>>>>> and a TS solver. The code runs fine until I try to restrict movement in all >>>>>>> three spatial directions for one face. I then get the error which is >>>>>>> attached below. >>>>>>> So apparently DMPlexMatSetClosure tries to write/read beyond what >>>>>>> was priorly allocated. I do however not call MatSeqAIJSetPreallocation >>>>>>> myself in the code. So I?m unsure where to start looking for the bug. In my >>>>>>> understanding, PETSc should know from the DM how much space to allocate. >>>>>>> Could you kindly give me a hint? >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Max >>>>>>> >>>>>>> 0 SNES Function norm 2.508668036663e-06 >>>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>>> -------------------------------------------------------------- >>>>>>> [0]PETSC ERROR: Argument out of range >>>>>>> [0]PETSC ERROR: New nonzero at (41754,5) caused a malloc >>>>>>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to >>>>>>> turn off this check >>>>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/pet >>>>>>> sc/documentation/faq.html for trouble shooting. 
>>>>>>> [0]PETSC ERROR: Petsc Development GIT revision: >>>>>>> v3.7.5-3223-g99077fc GIT Date: 2017-02-28 13:41:43 -0600 >>>>>>> [0]PETSC ERROR: ./S3 on a arch-linux-gnu-intel named XXXXXXX by >>>>>>> hartig Fri Mar 3 17:55:57 2017 >>>>>>> [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-gnu-intel >>>>>>> --with-cc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicc >>>>>>> --with-cxx=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicpc >>>>>>> --with-fc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiifort >>>>>>> --download-ml >>>>>>> [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 455 in >>>>>>> /home/hartig/petsc/src/mat/impls/aij/seq/aij.c >>>>>>> [0]PETSC ERROR: #2 MatSetValues() line 1270 in >>>>>>> /home/hartig/petsc/src/mat/interface/matrix.c >>>>>>> [0]PETSC ERROR: [0]ERROR in DMPlexMatSetClosure >>>>>>> [0]mat for sieve point 60 >>>>>>> [0]mat row indices[0] = 41754 >>>>>>> [0]mat row indices[1] = 41755 >>>>>>> [0]mat row indices[2] = 41756 >>>>>>> [0]mat row indices[3] = 41760 >>>>>>> [0]mat row indices[4] = 41761 >>>>>>> [0]mat row indices[5] = 41762 >>>>>>> [0]mat row indices[6] = 41766 >>>>>>> [0]mat row indices[7] = -41768 >>>>>>> [0]mat row indices[8] = 41767 >>>>>>> [0]mat row indices[9] = 41771 >>>>>>> [0]mat row indices[10] = -41773 >>>>>>> [0]mat row indices[11] = 41772 >>>>>>> [0]mat row indices[12] = 41776 >>>>>>> [0]mat row indices[13] = 41777 >>>>>>> [0]mat row indices[14] = 41778 >>>>>>> [0]mat row indices[15] = 41782 >>>>>>> [0]mat row indices[16] = -41784 >>>>>>> [0]mat row indices[17] = 41783 >>>>>>> [0]mat row indices[18] = 261 >>>>>>> [0]mat row indices[19] = -263 >>>>>>> [0]mat row indices[20] = 262 >>>>>>> [0]mat row indices[21] = 24318 >>>>>>> [0]mat row indices[22] = 24319 >>>>>>> [0]mat row indices[23] = 24320 >>>>>>> [0]mat row indices[24] = -7 >>>>>>> [0]mat row indices[25] = -8 >>>>>>> [0]mat row indices[26] = 6 >>>>>>> [0]mat row indices[27] = 1630 >>>>>>> [0]mat row indices[28] = -1632 >>>>>>> [0]mat row indices[29] = 1631 >>>>>>> [0]mat row indices[30] = 41757 >>>>>>> [0]mat row indices[31] = 41758 >>>>>>> [0]mat row indices[32] = 41759 >>>>>>> [0]mat row indices[33] = 41763 >>>>>>> [0]mat row indices[34] = 41764 >>>>>>> [0]mat row indices[35] = 41765 >>>>>>> [0]mat row indices[36] = 41768 >>>>>>> [0]mat row indices[37] = 41769 >>>>>>> [0]mat row indices[38] = 41770 >>>>>>> [0]mat row indices[39] = 41773 >>>>>>> [0]mat row indices[40] = 41774 >>>>>>> [0]mat row indices[41] = 41775 >>>>>>> [0]mat row indices[42] = 41779 >>>>>>> [0]mat row indices[43] = 41780 >>>>>>> [0]mat row indices[44] = 41781 >>>>>>> [0]mat row indices[45] = 41784 >>>>>>> [0]mat row indices[46] = 41785 >>>>>>> [0]mat row indices[47] = 41786 >>>>>>> [0]mat row indices[48] = 263 >>>>>>> [0]mat row indices[49] = 264 >>>>>>> [0]mat row indices[50] = 265 >>>>>>> [0]mat row indices[51] = 24321 >>>>>>> [0]mat row indices[52] = 24322 >>>>>>> [0]mat row indices[53] = 24323 >>>>>>> [0]mat row indices[54] = 5 >>>>>>> [0]mat row indices[55] = 6 >>>>>>> [0]mat row indices[56] = 7 >>>>>>> [0]mat row indices[57] = 1632 >>>>>>> [0]mat row indices[58] = 1633 >>>>>>> [0]mat row indices[59] = 1634 >>>>>>> [0] 1.29801 0.0998428 -0.275225 1.18171 -0.0093323 0.055045 1.18146 >>>>>>> 0.00525527 -0.11009 -0.588378 0.264666 -0.0275225 -2.39586 -0.210511 >>>>>>> 0.22018 -0.621071 0.0500786 0.137613 -0.180869 -0.0974804 0.0344031 >>>>>>> -0.0302673 -0.09 0. -0.145175 -0.00383346 -0.00688063 0.300442 -0.00868618 >>>>>>> -0.0275225 8.34577e-11 0. 0. 4.17288e-11 0. 0. 
4.17288e-11 0. 0. >>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. >>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >>>>>>> [0] 0.0998428 0.590663 -0.0320009 0.0270594 0.360043 0.0297282 >>>>>>> -0.0965489 0.270936 -0.0652389 0.32647 -0.171351 -0.00845384 -0.206902 >>>>>>> -0.657189 0.0279137 0.0500786 -0.197561 -0.0160508 -0.12748 -0.138996 >>>>>>> 0.0408591 -0.06 -0.105935 0.0192308 0.00161757 -0.0361182 0.0042968 >>>>>>> -0.0141372 0.0855084 -0.000284088 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. >>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. >>>>>>> [0] -0.275225 -0.0320009 0.527521 -0.055045 0.0285918 0.234796 >>>>>>> -0.165135 -0.0658071 0.322754 0.0275225 -0.0207062 -0.114921 0.33027 >>>>>>> 0.0418706 -0.678455 0.137613 -0.0160508 -0.235826 0.0344031 0.0312437 >>>>>>> -0.0845583 0. 0.0288462 -0.0302673 0.00688063 0.00443884 -0.0268103 >>>>>>> -0.0412838 -0.000426133 0.0857668 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. >>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 >>>>>>> [0] 1.18171 0.0270594 -0.055045 1.29651 -0.0821157 0.275225 1.1468 >>>>>>> -0.0675282 0.11009 -0.637271 0.141058 -0.137613 -2.37741 -0.0649437 >>>>>>> -0.22018 -0.588378 0.24647 0.0275225 -0.140937 -0.00838243 0.00688063 >>>>>>> -0.0175533 -0.09 0. -0.16373 -0.0747355 -0.0344031 0.300255 -0.026882 >>>>>>> 0.0275225 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 >>>>>>> 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>> 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>> [0] -0.0093323 0.360043 0.0285918 -0.0821157 0.585404 -0.0263191 >>>>>>> -0.205724 0.149643 0.0652389 0.141058 -0.254263 0.0452109 0.011448 >>>>>>> -0.592598 -0.0279137 0.344666 -0.171351 -0.0207062 0.00616654 -0.0212853 >>>>>>> -0.0115868 -0.06 -0.0614365 -0.0192308 -0.104736 -0.0790071 -0.0335691 >>>>>>> -0.041431 0.084851 0.000284088 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>>>> [0] 0.055045 0.0297282 0.234796 0.275225 -0.0263191 0.526019 >>>>>>> 0.165135 0.0658071 0.288099 -0.137613 0.0452109 -0.252027 -0.33027 >>>>>>> -0.0418706 -0.660001 -0.0275225 -0.00845384 -0.114921 -0.00688063 >>>>>>> -0.0117288 -0.0225723 0. -0.0288462 -0.0175533 -0.0344031 -0.0239537 >>>>>>> -0.0674185 0.0412838 0.000426133 0.085579 0. 0. 4.17288e-11 0. 0. >>>>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>> 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>>>> -1.56483e-11 >>>>>>> [0] 1.18146 -0.0965489 -0.165135 1.1468 -0.205724 0.165135 3.70665 >>>>>>> 0.626591 3.1198e-14 -2.37741 0.0332522 -0.275225 -2.44501 -0.417727 >>>>>>> 4.66207e-17 -2.39586 -0.148706 0.275225 0.283843 0.0476669 0.0137613 >>>>>>> 0.00972927 0.06 0. 0.288268 0.0567649 -0.0137613 0.601523 0.0444318 >>>>>>> -1.2387e-17 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. >>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. 
>>>>>>> [0] <several dozen rows of raw element-matrix and index values printed by PETSc ahead of the error below have been omitted for readability>
>>>>>>> [0]PETSC ERROR: #3 DMPlexMatSetClosure() line 5480 in /home/hartig/petsc/src/dm/impls/plex/plex.c
>>>>>>> [0]PETSC ERROR: #4 DMPlexComputeJacobian_Internal() line 2301 in /home/hartig/petsc/src/snes/utils/dmplexsnes.c
>>>>>>> [0]PETSC ERROR: #5 DMPlexTSComputeIJacobianFEM() line 233 in /home/hartig/petsc/src/ts/utils/dmplexts.c
>>>>>>> [0]PETSC ERROR: #6 TSComputeIJacobian_DMLocal() line 131 in /home/hartig/petsc/src/ts/utils/dmlocalts.c
>>>>>>> [0]PETSC ERROR: #7 TSComputeIJacobian() line 882 in /home/hartig/petsc/src/ts/interface/ts.c
>>>>>>> [0]PETSC ERROR: #8 SNESTSFormJacobian_Theta() line 515 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c
>>>>>>> [0]PETSC ERROR: #9 SNESTSFormJacobian() line 5044 in /home/hartig/petsc/src/ts/interface/ts.c
>>>>>>> [0]PETSC ERROR: #10 SNESComputeJacobian() line 2276 in /home/hartig/petsc/src/snes/interface/snes.c
>>>>>>> [0]PETSC ERROR: #11 SNESSolve_NEWTONLS() line 222 in /home/hartig/petsc/src/snes/impls/ls/ls.c
>>>>>>> [0]PETSC ERROR: #12 SNESSolve() line 3967 in /home/hartig/petsc/src/snes/interface/snes.c
>>>>>>> [0]PETSC ERROR: #13 TS_SNESSolve() line 171 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c
>>>>>>> [0]PETSC ERROR: #14 TSStep_Theta() line 211 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c
>>>>>>> [0]PETSC ERROR: #15 TSStep() line 3809 in /home/hartig/petsc/src/ts/interface/ts.c

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hzhang at mcs.anl.gov Tue Mar 7 11:23:12 2017
From: hzhang at mcs.anl.gov (Hong)
Date: Tue, 7 Mar 2017 11:23:12 -0600
Subject: [petsc-users] block ILU(K) is slower than the point-wise version?
In-Reply-To: <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov>
References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov>
Message-ID: 

I checked MatILUFactorSymbolic_SeqBAIJ() and MatILUFactorSymbolic_SeqAIJ(); they are virtually the same. Why is the version for BAIJ so much slower? I'll investigate it.

Fande,
How large is your matrix? Is it possible to send us your matrix so I can test it?

Hong

On Mon, Mar 6, 2017 at 9:08 PM, Barry Smith wrote:
>    Thanks. Even the symbolic is slower for BAIJ. I don't like that, it definitely should not be since it is (at least should be) doing a symbolic factorization on a symbolic matrix 1/11th the size!
>
>    Keep us informed.
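Getting the matrix into a shareable file only needs PETSc's binary viewer: the run that assembles the Jacobian can add -ksp_view_pmat binary (as Fande does further down in this thread), and the resulting file is exactly what MatLoad() and ksp ex10's -f0 option read back. A minimal sketch of doing the same from code follows; the function name DumpMatrix, the matrix A, and the file name jac.bin are placeholders, not anything taken from the thread.

  #include <petscmat.h>

  /* Write A in PETSc binary format so it can be reloaded later with MatLoad(). */
  PetscErrorCode DumpMatrix(Mat A)
  {
    PetscViewer    viewer;
    PetscErrorCode ierr;

    ierr = PetscViewerBinaryOpen(PetscObjectComm((PetscObject)A),"jac.bin",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
    ierr = MatView(A,viewer);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
    return 0;
  }

Called once after the final assembly (for a time-dependent run, at the last step of interest), this leaves a single file containing only the latest matrix.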
> > > > > On Mar 6, 2017, at 5:44 PM, Kong, Fande wrote: > > > > Thanks, Barry, > > > > Log info: > > > > AIJ: > > > > MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 0.0e+00 > 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 > > MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 0.0e+00 > 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 > > MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > BAIJ: > > > > MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 0.0e+00 > 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 > > MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 0.0e+00 > 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 > > MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > It looks like both MatSolve and MatLUFactorNum are slower. > > > > I will try your suggestions. > > > > Fande > > > > On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith wrote: > > > > Note also that if the 11 by 11 blocks are actually sparse (and you > don't store all the zeros in the blocks in the AIJ format) then then AIJ > non-block factorization involves less floating point operations and less > memory access so can be faster than the BAIJ format, depending on "how > sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with > AIJ (with zeros maybe in certain locations) then the above is not true. > > > > > > > On Mar 6, 2017, at 5:10 PM, Barry Smith wrote: > > > > > > > > > This is because for block size 11 it is using calls to LAPACK/BLAS > for the block operations instead of custom routines for that block size. > > > > > > Here is what you need to do. For a good sized case run both with > -log_view and check the time spent in > > > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and > BAIJ. If they have a different number of function calls then divide by the > function call count to determine the time per function call. > > > > > > This will tell you which routine needs to be optimized first either > MatLUFactorNumeric or MatSolve. My guess is MatSolve. > > > > > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the function > MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function > MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the > block size of 11. > > > > > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is 11 > it uses the new routine something like. > > > > > > if (both_identity) { > > > if (b->bs == 11) > > > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; > > > } else { > > > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; > > > } > > > > > > Rerun and look at the new -log_view. Send all three -log_view to use > at this point. If this optimization helps and now > > > MatLUFactorNumeric is the time sink you can do the process to > MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block > custom version. > > > > > > Barry > > > > > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande wrote: > > >> > > >> > > >> > > >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan < > patrick.sanan at gmail.com> wrote: > > >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande > wrote: > > >>> Hi All, > > >>> > > >>> I am solving a nonlinear system whose Jacobian matrix has a block > structure. > > >>> More precisely, there is a mesh, and for each vertex there are 11 > variables > > >>> associated with it. I am using BAIJ. > > >>> > > >>> I thought block ILU(k) should be more efficient than the point-wise > ILU(k). 
> > >>> After some numerical experiments, I found that the block ILU(K) is > much > > >>> slower than the point-wise version. > > >> Do you mean that it takes more iterations to converge, or that the > > >> time per iteration is greater, or both? > > >> > > >> The number of iterations is very similar, but the timer per iteration > is greater. > > >> > > >> > > >>> > > >>> Any thoughts? > > >>> > > >>> Fande, > > >> > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fande.kong at inl.gov Tue Mar 7 12:01:24 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Tue, 7 Mar 2017 11:01:24 -0700 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> Message-ID: On Tue, Mar 7, 2017 at 10:23 AM, Hong wrote: > I checked > MatILUFactorSymbolic_SeqBAIJ() and MatILUFactorSymbolic_SeqAIJ(), > they are virtually same. Why the version for BAIJ is so much slower? > I'll investigate it. > > Fande, > How large is your matrix? Is it possible to send us your matrix so I can > test it? > Thanks, Hong, It is a 3020875x3020875 matrix, and it is large. I can make a small one if you like, but not sure it will reproduce this issue or not. Fande, > > Hong > > > On Mon, Mar 6, 2017 at 9:08 PM, Barry Smith wrote: > >> >> Thanks. Even the symbolic is slower for BAIJ. I don't like that, it >> definitely should not be since it is (at least should be) doing a symbolic >> factorization on a symbolic matrix 1/11th the size! >> >> Keep us informed. >> >> >> >> > On Mar 6, 2017, at 5:44 PM, Kong, Fande wrote: >> > >> > Thanks, Barry, >> > >> > Log info: >> > >> > AIJ: >> > >> > MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 >> 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 >> > MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 >> 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 >> > MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> > >> > BAIJ: >> > >> > MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 >> 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 >> > MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 >> 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 >> > MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> > >> > It looks like both MatSolve and MatLUFactorNum are slower. >> > >> > I will try your suggestions. >> > >> > Fande >> > >> > On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith wrote: >> > >> > Note also that if the 11 by 11 blocks are actually sparse (and you >> don't store all the zeros in the blocks in the AIJ format) then then AIJ >> non-block factorization involves less floating point operations and less >> memory access so can be faster than the BAIJ format, depending on "how >> sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with >> AIJ (with zeros maybe in certain locations) then the above is not true. >> > >> > >> > > On Mar 6, 2017, at 5:10 PM, Barry Smith wrote: >> > > >> > > >> > > This is because for block size 11 it is using calls to LAPACK/BLAS >> for the block operations instead of custom routines for that block size. >> > > >> > > Here is what you need to do. For a good sized case run both with >> -log_view and check the time spent in >> > > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and >> BAIJ. 
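The -log_view comparison described in the quoted advice can be reproduced directly on a saved matrix with ksp ex10, one run per matrix type. A sketch of the two invocations, assuming the binary file is named binaryoutput as in Hong's test later in the thread (the -matload_block_size option is only needed if the block size is not recorded in the file):

  ./ex10 -f0 binaryoutput -rhs 0 -mat_type aij  -pc_type ilu -log_view
  ./ex10 -f0 binaryoutput -rhs 0 -mat_type baij -matload_block_size 11 -pc_type ilu -log_view

Comparing the MatILUFactorSym, MatLUFactorNum, and MatSolve rows of the two -log_view summaries then gives the per-call costs referred to here.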
If they have a different number of function calls then divide by the >> function call count to determine the time per function call. >> > > >> > > This will tell you which routine needs to be optimized first either >> MatLUFactorNumeric or MatSolve. My guess is MatSolve. >> > > >> > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the function >> MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function >> MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the >> block size of 11. >> > > >> > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is 11 >> it uses the new routine something like. >> > > >> > > if (both_identity) { >> > > if (b->bs == 11) >> > > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; >> > > } else { >> > > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; >> > > } >> > > >> > > Rerun and look at the new -log_view. Send all three -log_view to >> use at this point. If this optimization helps and now >> > > MatLUFactorNumeric is the time sink you can do the process to >> MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block >> custom version. >> > > >> > > Barry >> > > >> > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande wrote: >> > >> >> > >> >> > >> >> > >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan < >> patrick.sanan at gmail.com> wrote: >> > >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande >> wrote: >> > >>> Hi All, >> > >>> >> > >>> I am solving a nonlinear system whose Jacobian matrix has a block >> structure. >> > >>> More precisely, there is a mesh, and for each vertex there are 11 >> variables >> > >>> associated with it. I am using BAIJ. >> > >>> >> > >>> I thought block ILU(k) should be more efficient than the point-wise >> ILU(k). >> > >>> After some numerical experiments, I found that the block ILU(K) is >> much >> > >>> slower than the point-wise version. >> > >> Do you mean that it takes more iterations to converge, or that the >> > >> time per iteration is greater, or both? >> > >> >> > >> The number of iterations is very similar, but the timer per >> iteration is greater. >> > >> >> > >> >> > >>> >> > >>> Any thoughts? >> > >>> >> > >>> Fande, >> > >> >> > > >> > >> > >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Tue Mar 7 12:15:24 2017 From: hzhang at mcs.anl.gov (Hong) Date: Tue, 7 Mar 2017 12:15:24 -0600 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> Message-ID: Fande : A small one, e.g., the size used by a sequential diagonal block for ilu preconditioner would work. Thanks, Hong > > > On Tue, Mar 7, 2017 at 10:23 AM, Hong wrote: > >> I checked >> MatILUFactorSymbolic_SeqBAIJ() and MatILUFactorSymbolic_SeqAIJ(), >> they are virtually same. Why the version for BAIJ is so much slower? >> I'll investigate it. >> > >> Fande, >> How large is your matrix? Is it possible to send us your matrix so I can >> test it? >> > > Thanks, Hong, > > It is a 3020875x3020875 matrix, and it is large. I can make a small one if > you like, but not sure it will reproduce this issue or not. > > Fande, > > > >> >> Hong >> >> >> On Mon, Mar 6, 2017 at 9:08 PM, Barry Smith wrote: >> >>> >>> Thanks. Even the symbolic is slower for BAIJ. I don't like that, it >>> definitely should not be since it is (at least should be) doing a symbolic >>> factorization on a symbolic matrix 1/11th the size! 
>>> >>> Keep us informed. >>> >>> >>> >>> > On Mar 6, 2017, at 5:44 PM, Kong, Fande wrote: >>> > >>> > Thanks, Barry, >>> > >>> > Log info: >>> > >>> > AIJ: >>> > >>> > MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 >>> 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 >>> > MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 >>> 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 >>> > MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> > >>> > BAIJ: >>> > >>> > MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 >>> 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 >>> > MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 >>> 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 >>> > MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> > >>> > It looks like both MatSolve and MatLUFactorNum are slower. >>> > >>> > I will try your suggestions. >>> > >>> > Fande >>> > >>> > On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith >>> wrote: >>> > >>> > Note also that if the 11 by 11 blocks are actually sparse (and you >>> don't store all the zeros in the blocks in the AIJ format) then then AIJ >>> non-block factorization involves less floating point operations and less >>> memory access so can be faster than the BAIJ format, depending on "how >>> sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with >>> AIJ (with zeros maybe in certain locations) then the above is not true. >>> > >>> > >>> > > On Mar 6, 2017, at 5:10 PM, Barry Smith wrote: >>> > > >>> > > >>> > > This is because for block size 11 it is using calls to LAPACK/BLAS >>> for the block operations instead of custom routines for that block size. >>> > > >>> > > Here is what you need to do. For a good sized case run both with >>> -log_view and check the time spent in >>> > > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and >>> BAIJ. If they have a different number of function calls then divide by the >>> function call count to determine the time per function call. >>> > > >>> > > This will tell you which routine needs to be optimized first >>> either MatLUFactorNumeric or MatSolve. My guess is MatSolve. >>> > > >>> > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the >>> function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function >>> MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the >>> block size of 11. >>> > > >>> > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is >>> 11 it uses the new routine something like. >>> > > >>> > > if (both_identity) { >>> > > if (b->bs == 11) >>> > > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; >>> > > } else { >>> > > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; >>> > > } >>> > > >>> > > Rerun and look at the new -log_view. Send all three -log_view to >>> use at this point. If this optimization helps and now >>> > > MatLUFactorNumeric is the time sink you can do the process to >>> MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size >>> block custom version. >>> > > >>> > > Barry >>> > > >>> > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande wrote: >>> > >> >>> > >> >>> > >> >>> > >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan < >>> patrick.sanan at gmail.com> wrote: >>> > >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande >>> wrote: >>> > >>> Hi All, >>> > >>> >>> > >>> I am solving a nonlinear system whose Jacobian matrix has a block >>> structure. 
>>> > >>> More precisely, there is a mesh, and for each vertex there are 11 >>> variables >>> > >>> associated with it. I am using BAIJ. >>> > >>> >>> > >>> I thought block ILU(k) should be more efficient than the >>> point-wise ILU(k). >>> > >>> After some numerical experiments, I found that the block ILU(K) is >>> much >>> > >>> slower than the point-wise version. >>> > >> Do you mean that it takes more iterations to converge, or that the >>> > >> time per iteration is greater, or both? >>> > >> >>> > >> The number of iterations is very similar, but the timer per >>> iteration is greater. >>> > >> >>> > >> >>> > >>> >>> > >>> Any thoughts? >>> > >>> >>> > >>> Fande, >>> > >> >>> > > >>> > >>> > >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 7 13:29:27 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 7 Mar 2017 13:29:27 -0600 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> Message-ID: <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> It is too big for email you can post it somewhere so we can download it. > On Mar 7, 2017, at 12:01 PM, Kong, Fande wrote: > > > > On Tue, Mar 7, 2017 at 10:23 AM, Hong wrote: > I checked > MatILUFactorSymbolic_SeqBAIJ() and MatILUFactorSymbolic_SeqAIJ(), > they are virtually same. Why the version for BAIJ is so much slower? > I'll investigate it. > > Fande, > How large is your matrix? Is it possible to send us your matrix so I can test it? > > Thanks, Hong, > > It is a 3020875x3020875 matrix, and it is large. I can make a small one if you like, but not sure it will reproduce this issue or not. > > Fande, > > > > Hong > > > On Mon, Mar 6, 2017 at 9:08 PM, Barry Smith wrote: > > Thanks. Even the symbolic is slower for BAIJ. I don't like that, it definitely should not be since it is (at least should be) doing a symbolic factorization on a symbolic matrix 1/11th the size! > > Keep us informed. > > > > > On Mar 6, 2017, at 5:44 PM, Kong, Fande wrote: > > > > Thanks, Barry, > > > > Log info: > > > > AIJ: > > > > MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 > > MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 > > MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > BAIJ: > > > > MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 > > MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 > > MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > It looks like both MatSolve and MatLUFactorNum are slower. > > > > I will try your suggestions. > > > > Fande > > > > On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith wrote: > > > > Note also that if the 11 by 11 blocks are actually sparse (and you don't store all the zeros in the blocks in the AIJ format) then then AIJ non-block factorization involves less floating point operations and less memory access so can be faster than the BAIJ format, depending on "how sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with AIJ (with zeros maybe in certain locations) then the above is not true. 
> > > > > > > On Mar 6, 2017, at 5:10 PM, Barry Smith wrote: > > > > > > > > > This is because for block size 11 it is using calls to LAPACK/BLAS for the block operations instead of custom routines for that block size. > > > > > > Here is what you need to do. For a good sized case run both with -log_view and check the time spent in > > > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and BAIJ. If they have a different number of function calls then divide by the function call count to determine the time per function call. > > > > > > This will tell you which routine needs to be optimized first either MatLUFactorNumeric or MatSolve. My guess is MatSolve. > > > > > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the block size of 11. > > > > > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is 11 it uses the new routine something like. > > > > > > if (both_identity) { > > > if (b->bs == 11) > > > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; > > > } else { > > > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; > > > } > > > > > > Rerun and look at the new -log_view. Send all three -log_view to use at this point. If this optimization helps and now > > > MatLUFactorNumeric is the time sink you can do the process to MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block custom version. > > > > > > Barry > > > > > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande wrote: > > >> > > >> > > >> > > >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan wrote: > > >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande wrote: > > >>> Hi All, > > >>> > > >>> I am solving a nonlinear system whose Jacobian matrix has a block structure. > > >>> More precisely, there is a mesh, and for each vertex there are 11 variables > > >>> associated with it. I am using BAIJ. > > >>> > > >>> I thought block ILU(k) should be more efficient than the point-wise ILU(k). > > >>> After some numerical experiments, I found that the block ILU(K) is much > > >>> slower than the point-wise version. > > >> Do you mean that it takes more iterations to converge, or that the > > >> time per iteration is greater, or both? > > >> > > >> The number of iterations is very similar, but the timer per iteration is greater. > > >> > > >> > > >>> > > >>> Any thoughts? > > >>> > > >>> Fande, > > >> > > > > > > > > > > From fande.kong at inl.gov Tue Mar 7 14:26:39 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Tue, 7 Mar 2017 13:26:39 -0700 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> Message-ID: Uploaded to google drive, and sent you links in another email. Not sure if it works or not. Fande, On Tue, Mar 7, 2017 at 12:29 PM, Barry Smith wrote: > > It is too big for email you can post it somewhere so we can download it. > > > > On Mar 7, 2017, at 12:01 PM, Kong, Fande wrote: > > > > > > > > On Tue, Mar 7, 2017 at 10:23 AM, Hong wrote: > > I checked > > MatILUFactorSymbolic_SeqBAIJ() and MatILUFactorSymbolic_SeqAIJ(), > > they are virtually same. Why the version for BAIJ is so much slower? > > I'll investigate it. > > > > Fande, > > How large is your matrix? 
Is it possible to send us your matrix so I can > test it? > > > > Thanks, Hong, > > > > It is a 3020875x3020875 matrix, and it is large. I can make a small one > if you like, but not sure it will reproduce this issue or not. > > > > Fande, > > > > > > > > Hong > > > > > > On Mon, Mar 6, 2017 at 9:08 PM, Barry Smith wrote: > > > > Thanks. Even the symbolic is slower for BAIJ. I don't like that, it > definitely should not be since it is (at least should be) doing a symbolic > factorization on a symbolic matrix 1/11th the size! > > > > Keep us informed. > > > > > > > > > On Mar 6, 2017, at 5:44 PM, Kong, Fande wrote: > > > > > > Thanks, Barry, > > > > > > Log info: > > > > > > AIJ: > > > > > > MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 > 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 > > > MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 > 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 > > > MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 > 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > > > BAIJ: > > > > > > MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 > 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 > > > MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 > 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 > > > MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 > 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > > > It looks like both MatSolve and MatLUFactorNum are slower. > > > > > > I will try your suggestions. > > > > > > Fande > > > > > > On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith > wrote: > > > > > > Note also that if the 11 by 11 blocks are actually sparse (and you > don't store all the zeros in the blocks in the AIJ format) then then AIJ > non-block factorization involves less floating point operations and less > memory access so can be faster than the BAIJ format, depending on "how > sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with > AIJ (with zeros maybe in certain locations) then the above is not true. > > > > > > > > > > On Mar 6, 2017, at 5:10 PM, Barry Smith wrote: > > > > > > > > > > > > This is because for block size 11 it is using calls to LAPACK/BLAS > for the block operations instead of custom routines for that block size. > > > > > > > > Here is what you need to do. For a good sized case run both with > -log_view and check the time spent in > > > > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and > BAIJ. If they have a different number of function calls then divide by the > function call count to determine the time per function call. > > > > > > > > This will tell you which routine needs to be optimized first > either MatLUFactorNumeric or MatSolve. My guess is MatSolve. > > > > > > > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the > function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function > MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the > block size of 11. > > > > > > > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is > 11 it uses the new routine something like. > > > > > > > > if (both_identity) { > > > > if (b->bs == 11) > > > > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; > > > > } else { > > > > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; > > > > } > > > > > > > > Rerun and look at the new -log_view. Send all three -log_view to > use at this point. 
If this optimization helps and now > > > > MatLUFactorNumeric is the time sink you can do the process to > MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block > custom version. > > > > > > > > Barry > > > > > > > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande wrote: > > > >> > > > >> > > > >> > > > >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan < > patrick.sanan at gmail.com> wrote: > > > >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande > wrote: > > > >>> Hi All, > > > >>> > > > >>> I am solving a nonlinear system whose Jacobian matrix has a block > structure. > > > >>> More precisely, there is a mesh, and for each vertex there are 11 > variables > > > >>> associated with it. I am using BAIJ. > > > >>> > > > >>> I thought block ILU(k) should be more efficient than the > point-wise ILU(k). > > > >>> After some numerical experiments, I found that the block ILU(K) is > much > > > >>> slower than the point-wise version. > > > >> Do you mean that it takes more iterations to converge, or that the > > > >> time per iteration is greater, or both? > > > >> > > > >> The number of iterations is very similar, but the timer per > iteration is greater. > > > >> > > > >> > > > >>> > > > >>> Any thoughts? > > > >>> > > > >>> Fande, > > > >> > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 7 15:07:33 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 7 Mar 2017 15:07:33 -0600 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> Message-ID: The matrix is too small. Please post ONE big matrix > On Mar 7, 2017, at 2:26 PM, Kong, Fande wrote: > > Uploaded to google drive, and sent you links in another email. Not sure if it works or not. > > Fande, > > On Tue, Mar 7, 2017 at 12:29 PM, Barry Smith wrote: > > It is too big for email you can post it somewhere so we can download it. > > > > On Mar 7, 2017, at 12:01 PM, Kong, Fande wrote: > > > > > > > > On Tue, Mar 7, 2017 at 10:23 AM, Hong wrote: > > I checked > > MatILUFactorSymbolic_SeqBAIJ() and MatILUFactorSymbolic_SeqAIJ(), > > they are virtually same. Why the version for BAIJ is so much slower? > > I'll investigate it. > > > > Fande, > > How large is your matrix? Is it possible to send us your matrix so I can test it? > > > > Thanks, Hong, > > > > It is a 3020875x3020875 matrix, and it is large. I can make a small one if you like, but not sure it will reproduce this issue or not. > > > > Fande, > > > > > > > > Hong > > > > > > On Mon, Mar 6, 2017 at 9:08 PM, Barry Smith wrote: > > > > Thanks. Even the symbolic is slower for BAIJ. I don't like that, it definitely should not be since it is (at least should be) doing a symbolic factorization on a symbolic matrix 1/11th the size! > > > > Keep us informed. 
> > > > > > > > > On Mar 6, 2017, at 5:44 PM, Kong, Fande wrote: > > > > > > Thanks, Barry, > > > > > > Log info: > > > > > > AIJ: > > > > > > MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 > > > MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 > > > MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > > > BAIJ: > > > > > > MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 > > > MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 > > > MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > > > It looks like both MatSolve and MatLUFactorNum are slower. > > > > > > I will try your suggestions. > > > > > > Fande > > > > > > On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith wrote: > > > > > > Note also that if the 11 by 11 blocks are actually sparse (and you don't store all the zeros in the blocks in the AIJ format) then then AIJ non-block factorization involves less floating point operations and less memory access so can be faster than the BAIJ format, depending on "how sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with AIJ (with zeros maybe in certain locations) then the above is not true. > > > > > > > > > > On Mar 6, 2017, at 5:10 PM, Barry Smith wrote: > > > > > > > > > > > > This is because for block size 11 it is using calls to LAPACK/BLAS for the block operations instead of custom routines for that block size. > > > > > > > > Here is what you need to do. For a good sized case run both with -log_view and check the time spent in > > > > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and BAIJ. If they have a different number of function calls then divide by the function call count to determine the time per function call. > > > > > > > > This will tell you which routine needs to be optimized first either MatLUFactorNumeric or MatSolve. My guess is MatSolve. > > > > > > > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the block size of 11. > > > > > > > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is 11 it uses the new routine something like. > > > > > > > > if (both_identity) { > > > > if (b->bs == 11) > > > > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; > > > > } else { > > > > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; > > > > } > > > > > > > > Rerun and look at the new -log_view. Send all three -log_view to use at this point. If this optimization helps and now > > > > MatLUFactorNumeric is the time sink you can do the process to MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block custom version. > > > > > > > > Barry > > > > > > > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande wrote: > > > >> > > > >> > > > >> > > > >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan wrote: > > > >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande wrote: > > > >>> Hi All, > > > >>> > > > >>> I am solving a nonlinear system whose Jacobian matrix has a block structure. > > > >>> More precisely, there is a mesh, and for each vertex there are 11 variables > > > >>> associated with it. I am using BAIJ. 
> > > >>> > > > >>> I thought block ILU(k) should be more efficient than the point-wise ILU(k). > > > >>> After some numerical experiments, I found that the block ILU(K) is much > > > >>> slower than the point-wise version. > > > >> Do you mean that it takes more iterations to converge, or that the > > > >> time per iteration is greater, or both? > > > >> > > > >> The number of iterations is very similar, but the timer per iteration is greater. > > > >> > > > >> > > > >>> > > > >>> Any thoughts? > > > >>> > > > >>> Fande, > > > >> > > > > > > > > > > > > > > > > > > From hzhang at mcs.anl.gov Tue Mar 7 15:17:21 2017 From: hzhang at mcs.anl.gov (Hong) Date: Tue, 7 Mar 2017 15:17:21 -0600 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> Message-ID: Fande, Got it. Below are what I get: petsc/src/ksp/ksp/examples/tutorials (master) $ ./ex10 -f0 binaryoutput -rhs 0 -mat_view ascii::ascii_info Mat Object: 1 MPI processes type: seqaij rows=8019, cols=8019, bs=11 total: nonzeros=1890625, allocated nonzeros=1890625 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 2187 nodes, limit used is 5 Number of iterations = 3 Residual norm 0.00200589 -mat_type aij MatMult 4 1.0 8.3621e-03 1.0 1.51e+07 1.0 0.0e+00 0.0e+00 0.0e+00 6 7 0 0 0 7 7 0 0 0 1805 MatSolve 4 1.0 8.3971e-03 1.0 1.51e+07 1.0 0.0e+00 0.0e+00 0.0e+00 6 7 0 0 0 7 7 0 0 0 1797 MatLUFactorNum 1 1.0 8.6171e-02 1.0 1.80e+08 1.0 0.0e+00 0.0e+00 0.0e+00 57 85 0 0 0 70 85 0 0 0 2086 MatILUFactorSym 1 1.0 1.4951e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 10 0 0 0 0 12 0 0 0 0 0 -mat_type baij MatMult 4 1.0 5.5540e-03 1.0 1.51e+07 1.0 0.0e+00 0.0e+00 0.0e+00 4 5 0 0 0 7 5 0 0 0 2718 MatSolve 4 1.0 7.0803e-03 1.0 1.48e+07 1.0 0.0e+00 0.0e+00 0.0e+00 5 5 0 0 0 8 5 0 0 0 2086 MatLUFactorNum 1 1.0 6.0118e-02 1.0 2.55e+08 1.0 0.0e+00 0.0e+00 0.0e+00 42 89 0 0 0 72 89 0 0 0 4241 MatILUFactorSym 1 1.0 6.7251e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 5 0 0 0 0 8 0 0 0 0 0 I ran it on my macpro. baij is faster than aij in all routines. Hong On Tue, Mar 7, 2017 at 2:26 PM, Kong, Fande wrote: > Uploaded to google drive, and sent you links in another email. Not sure if > it works or not. > > Fande, > > On Tue, Mar 7, 2017 at 12:29 PM, Barry Smith wrote: > >> >> It is too big for email you can post it somewhere so we can download >> it. >> >> >> >> > On Mar 7, 2017, at 12:01 PM, Kong, Fande wrote: >> > >> > >> > >> > On Tue, Mar 7, 2017 at 10:23 AM, Hong wrote: >> > I checked >> > MatILUFactorSymbolic_SeqBAIJ() and MatILUFactorSymbolic_SeqAIJ(), >> > they are virtually same. Why the version for BAIJ is so much slower? >> > I'll investigate it. >> > >> > Fande, >> > How large is your matrix? Is it possible to send us your matrix so I >> can test it? >> > >> > Thanks, Hong, >> > >> > It is a 3020875x3020875 matrix, and it is large. I can make a small one >> if you like, but not sure it will reproduce this issue or not. >> > >> > Fande, >> > >> > >> > >> > Hong >> > >> > >> > On Mon, Mar 6, 2017 at 9:08 PM, Barry Smith wrote: >> > >> > Thanks. Even the symbolic is slower for BAIJ. I don't like that, it >> definitely should not be since it is (at least should be) doing a symbolic >> factorization on a symbolic matrix 1/11th the size! >> > >> > Keep us informed. 
>> > >> > >> > >> > > On Mar 6, 2017, at 5:44 PM, Kong, Fande wrote: >> > > >> > > Thanks, Barry, >> > > >> > > Log info: >> > > >> > > AIJ: >> > > >> > > MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 >> 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 >> > > MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 >> 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 >> > > MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> > > >> > > BAIJ: >> > > >> > > MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 >> 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 >> > > MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 >> 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 >> > > MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> > > >> > > It looks like both MatSolve and MatLUFactorNum are slower. >> > > >> > > I will try your suggestions. >> > > >> > > Fande >> > > >> > > On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith >> wrote: >> > > >> > > Note also that if the 11 by 11 blocks are actually sparse (and you >> don't store all the zeros in the blocks in the AIJ format) then then AIJ >> non-block factorization involves less floating point operations and less >> memory access so can be faster than the BAIJ format, depending on "how >> sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with >> AIJ (with zeros maybe in certain locations) then the above is not true. >> > > >> > > >> > > > On Mar 6, 2017, at 5:10 PM, Barry Smith wrote: >> > > > >> > > > >> > > > This is because for block size 11 it is using calls to >> LAPACK/BLAS for the block operations instead of custom routines for that >> block size. >> > > > >> > > > Here is what you need to do. For a good sized case run both with >> -log_view and check the time spent in >> > > > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and >> BAIJ. If they have a different number of function calls then divide by the >> function call count to determine the time per function call. >> > > > >> > > > This will tell you which routine needs to be optimized first >> either MatLUFactorNumeric or MatSolve. My guess is MatSolve. >> > > > >> > > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the >> function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function >> MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the >> block size of 11. >> > > > >> > > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is >> 11 it uses the new routine something like. >> > > > >> > > > if (both_identity) { >> > > > if (b->bs == 11) >> > > > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; >> > > > } else { >> > > > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; >> > > > } >> > > > >> > > > Rerun and look at the new -log_view. Send all three -log_view to >> use at this point. If this optimization helps and now >> > > > MatLUFactorNumeric is the time sink you can do the process to >> MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block >> custom version. >> > > > >> > > > Barry >> > > > >> > > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande >> wrote: >> > > >> >> > > >> >> > > >> >> > > >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan < >> patrick.sanan at gmail.com> wrote: >> > > >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande >> wrote: >> > > >>> Hi All, >> > > >>> >> > > >>> I am solving a nonlinear system whose Jacobian matrix has a block >> structure. 
>> > > >>> More precisely, there is a mesh, and for each vertex there are 11 >> variables >> > > >>> associated with it. I am using BAIJ. >> > > >>> >> > > >>> I thought block ILU(k) should be more efficient than the >> point-wise ILU(k). >> > > >>> After some numerical experiments, I found that the block ILU(K) >> is much >> > > >>> slower than the point-wise version. >> > > >> Do you mean that it takes more iterations to converge, or that the >> > > >> time per iteration is greater, or both? >> > > >> >> > > >> The number of iterations is very similar, but the timer per >> iteration is greater. >> > > >> >> > > >> >> > > >>> >> > > >>> Any thoughts? >> > > >>> >> > > >>> Fande, >> > > >> >> > > > >> > > >> > > >> > >> > >> > >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fande.kong at inl.gov Tue Mar 7 15:26:07 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Tue, 7 Mar 2017 14:26:07 -0700 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> Message-ID: On Tue, Mar 7, 2017 at 2:07 PM, Barry Smith wrote: > > The matrix is too small. Please post ONE big matrix > I am using "-ksp_view_pmat binary" to save the matrix. How can I save the latest one only for a time-dependent problem? Fande, > > > On Mar 7, 2017, at 2:26 PM, Kong, Fande wrote: > > > > Uploaded to google drive, and sent you links in another email. Not sure > if it works or not. > > > > Fande, > > > > On Tue, Mar 7, 2017 at 12:29 PM, Barry Smith wrote: > > > > It is too big for email you can post it somewhere so we can download > it. > > > > > > > On Mar 7, 2017, at 12:01 PM, Kong, Fande wrote: > > > > > > > > > > > > On Tue, Mar 7, 2017 at 10:23 AM, Hong wrote: > > > I checked > > > MatILUFactorSymbolic_SeqBAIJ() and MatILUFactorSymbolic_SeqAIJ(), > > > they are virtually same. Why the version for BAIJ is so much slower? > > > I'll investigate it. > > > > > > Fande, > > > How large is your matrix? Is it possible to send us your matrix so I > can test it? > > > > > > Thanks, Hong, > > > > > > It is a 3020875x3020875 matrix, and it is large. I can make a small > one if you like, but not sure it will reproduce this issue or not. > > > > > > Fande, > > > > > > > > > > > > Hong > > > > > > > > > On Mon, Mar 6, 2017 at 9:08 PM, Barry Smith > wrote: > > > > > > Thanks. Even the symbolic is slower for BAIJ. I don't like that, it > definitely should not be since it is (at least should be) doing a symbolic > factorization on a symbolic matrix 1/11th the size! > > > > > > Keep us informed. 
> > > > > > > > > > > > > On Mar 6, 2017, at 5:44 PM, Kong, Fande wrote: > > > > > > > > Thanks, Barry, > > > > > > > > Log info: > > > > > > > > AIJ: > > > > > > > > MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 > 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 > > > > MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 > 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 > > > > MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 > 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > > > > > BAIJ: > > > > > > > > MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 > 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 > > > > MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 > 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 > > > > MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 > 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > > > > > It looks like both MatSolve and MatLUFactorNum are slower. > > > > > > > > I will try your suggestions. > > > > > > > > Fande > > > > > > > > On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith > wrote: > > > > > > > > Note also that if the 11 by 11 blocks are actually sparse (and you > don't store all the zeros in the blocks in the AIJ format) then then AIJ > non-block factorization involves less floating point operations and less > memory access so can be faster than the BAIJ format, depending on "how > sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with > AIJ (with zeros maybe in certain locations) then the above is not true. > > > > > > > > > > > > > On Mar 6, 2017, at 5:10 PM, Barry Smith > wrote: > > > > > > > > > > > > > > > This is because for block size 11 it is using calls to > LAPACK/BLAS for the block operations instead of custom routines for that > block size. > > > > > > > > > > Here is what you need to do. For a good sized case run both with > -log_view and check the time spent in > > > > > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ > and BAIJ. If they have a different number of function calls then divide by > the function call count to determine the time per function call. > > > > > > > > > > This will tell you which routine needs to be optimized first > either MatLUFactorNumeric or MatSolve. My guess is MatSolve. > > > > > > > > > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the > function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function > MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the > block size of 11. > > > > > > > > > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is > 11 it uses the new routine something like. > > > > > > > > > > if (both_identity) { > > > > > if (b->bs == 11) > > > > > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; > > > > > } else { > > > > > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; > > > > > } > > > > > > > > > > Rerun and look at the new -log_view. Send all three -log_view to > use at this point. If this optimization helps and now > > > > > MatLUFactorNumeric is the time sink you can do the process to > MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block > custom version. 
> > > > > > > > > > Barry > > > > > > > > > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande > wrote: > > > > >> > > > > >> > > > > >> > > > > >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan < > patrick.sanan at gmail.com> wrote: > > > > >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande > wrote: > > > > >>> Hi All, > > > > >>> > > > > >>> I am solving a nonlinear system whose Jacobian matrix has a > block structure. > > > > >>> More precisely, there is a mesh, and for each vertex there are > 11 variables > > > > >>> associated with it. I am using BAIJ. > > > > >>> > > > > >>> I thought block ILU(k) should be more efficient than the > point-wise ILU(k). > > > > >>> After some numerical experiments, I found that the block ILU(K) > is much > > > > >>> slower than the point-wise version. > > > > >> Do you mean that it takes more iterations to converge, or that the > > > > >> time per iteration is greater, or both? > > > > >> > > > > >> The number of iterations is very similar, but the timer per > iteration is greater. > > > > >> > > > > >> > > > > >>> > > > > >>> Any thoughts? > > > > >>> > > > > >>> Fande, > > > > >> > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 7 15:35:16 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 7 Mar 2017 15:35:16 -0600 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> Message-ID: > On Mar 7, 2017, at 3:26 PM, Kong, Fande wrote: > > > > On Tue, Mar 7, 2017 at 2:07 PM, Barry Smith wrote: > > The matrix is too small. Please post ONE big matrix > > I am using "-ksp_view_pmat binary" to save the matrix. How can I save the latest one only for a time-dependent problem? No easy way. You can send us the first matrix or you can use bin/PetscBinaryIO.py to cut out one matrix from the file. > > > Fande, > > > > > On Mar 7, 2017, at 2:26 PM, Kong, Fande wrote: > > > > Uploaded to google drive, and sent you links in another email. Not sure if it works or not. > > > > Fande, > > > > On Tue, Mar 7, 2017 at 12:29 PM, Barry Smith wrote: > > > > It is too big for email you can post it somewhere so we can download it. > > > > > > > On Mar 7, 2017, at 12:01 PM, Kong, Fande wrote: > > > > > > > > > > > > On Tue, Mar 7, 2017 at 10:23 AM, Hong wrote: > > > I checked > > > MatILUFactorSymbolic_SeqBAIJ() and MatILUFactorSymbolic_SeqAIJ(), > > > they are virtually same. Why the version for BAIJ is so much slower? > > > I'll investigate it. > > > > > > Fande, > > > How large is your matrix? Is it possible to send us your matrix so I can test it? > > > > > > Thanks, Hong, > > > > > > It is a 3020875x3020875 matrix, and it is large. I can make a small one if you like, but not sure it will reproduce this issue or not. > > > > > > Fande, > > > > > > > > > > > > Hong > > > > > > > > > On Mon, Mar 6, 2017 at 9:08 PM, Barry Smith wrote: > > > > > > Thanks. Even the symbolic is slower for BAIJ. I don't like that, it definitely should not be since it is (at least should be) doing a symbolic factorization on a symbolic matrix 1/11th the size! > > > > > > Keep us informed. 
> > > > > > > > > > > > > On Mar 6, 2017, at 5:44 PM, Kong, Fande wrote: > > > > > > > > Thanks, Barry, > > > > > > > > Log info: > > > > > > > > AIJ: > > > > > > > > MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 > > > > MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 > > > > MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > > > > > BAIJ: > > > > > > > > MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 > > > > MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 > > > > MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > > > > > It looks like both MatSolve and MatLUFactorNum are slower. > > > > > > > > I will try your suggestions. > > > > > > > > Fande > > > > > > > > On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith wrote: > > > > > > > > Note also that if the 11 by 11 blocks are actually sparse (and you don't store all the zeros in the blocks in the AIJ format) then then AIJ non-block factorization involves less floating point operations and less memory access so can be faster than the BAIJ format, depending on "how sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with AIJ (with zeros maybe in certain locations) then the above is not true. > > > > > > > > > > > > > On Mar 6, 2017, at 5:10 PM, Barry Smith wrote: > > > > > > > > > > > > > > > This is because for block size 11 it is using calls to LAPACK/BLAS for the block operations instead of custom routines for that block size. > > > > > > > > > > Here is what you need to do. For a good sized case run both with -log_view and check the time spent in > > > > > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and BAIJ. If they have a different number of function calls then divide by the function call count to determine the time per function call. > > > > > > > > > > This will tell you which routine needs to be optimized first either MatLUFactorNumeric or MatSolve. My guess is MatSolve. > > > > > > > > > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the block size of 11. > > > > > > > > > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is 11 it uses the new routine something like. > > > > > > > > > > if (both_identity) { > > > > > if (b->bs == 11) > > > > > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; > > > > > } else { > > > > > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; > > > > > } > > > > > > > > > > Rerun and look at the new -log_view. Send all three -log_view to use at this point. If this optimization helps and now > > > > > MatLUFactorNumeric is the time sink you can do the process to MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block custom version. > > > > > > > > > > Barry > > > > > > > > > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande wrote: > > > > >> > > > > >> > > > > >> > > > > >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan wrote: > > > > >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande wrote: > > > > >>> Hi All, > > > > >>> > > > > >>> I am solving a nonlinear system whose Jacobian matrix has a block structure. 
> > > > >>> More precisely, there is a mesh, and for each vertex there are 11 variables > > > > >>> associated with it. I am using BAIJ. > > > > >>> > > > > >>> I thought block ILU(k) should be more efficient than the point-wise ILU(k). > > > > >>> After some numerical experiments, I found that the block ILU(K) is much > > > > >>> slower than the point-wise version. > > > > >> Do you mean that it takes more iterations to converge, or that the > > > > >> time per iteration is greater, or both? > > > > >> > > > > >> The number of iterations is very similar, but the timer per iteration is greater. > > > > >> > > > > >> > > > > >>> > > > > >>> Any thoughts? > > > > >>> > > > > >>> Fande, > > > > >> > > > > > > > > > > > > > > > > > > > > > > > > > > > > From jed at jedbrown.org Tue Mar 7 16:16:48 2017 From: jed at jedbrown.org (Jed Brown) Date: Tue, 07 Mar 2017 15:16:48 -0700 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> Message-ID: <87efy891gv.fsf@jedbrown.org> Hong writes: > Fande, > Got it. Below are what I get: Is Fande using ILU(0) or ILU(k)? (And I think it should be possible to get a somewhat larger benefit.) > petsc/src/ksp/ksp/examples/tutorials (master) > $ ./ex10 -f0 binaryoutput -rhs 0 -mat_view ascii::ascii_info > Mat Object: 1 MPI processes > type: seqaij > rows=8019, cols=8019, bs=11 > total: nonzeros=1890625, allocated nonzeros=1890625 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 2187 nodes, limit used is 5 > Number of iterations = 3 > Residual norm 0.00200589 > > -mat_type aij > MatMult 4 1.0 8.3621e-03 1.0 1.51e+07 1.0 0.0e+00 0.0e+00 > 0.0e+00 6 7 0 0 0 7 7 0 0 0 1805 > MatSolve 4 1.0 8.3971e-03 1.0 1.51e+07 1.0 0.0e+00 0.0e+00 > 0.0e+00 6 7 0 0 0 7 7 0 0 0 1797 > MatLUFactorNum 1 1.0 8.6171e-02 1.0 1.80e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 57 85 0 0 0 70 85 0 0 0 2086 > MatILUFactorSym 1 1.0 1.4951e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 10 0 0 0 0 12 0 0 0 0 0 > > -mat_type baij > MatMult 4 1.0 5.5540e-03 1.0 1.51e+07 1.0 0.0e+00 0.0e+00 > 0.0e+00 4 5 0 0 0 7 5 0 0 0 2718 > MatSolve 4 1.0 7.0803e-03 1.0 1.48e+07 1.0 0.0e+00 0.0e+00 > 0.0e+00 5 5 0 0 0 8 5 0 0 0 2086 > MatLUFactorNum 1 1.0 6.0118e-02 1.0 2.55e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 42 89 0 0 0 72 89 0 0 0 4241 > MatILUFactorSym 1 1.0 6.7251e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 5 0 0 0 0 8 0 0 0 0 0 > > I ran it on my macpro. baij is faster than aij in all routines. > > Hong > > On Tue, Mar 7, 2017 at 2:26 PM, Kong, Fande wrote: > >> Uploaded to google drive, and sent you links in another email. Not sure if >> it works or not. >> >> Fande, >> >> On Tue, Mar 7, 2017 at 12:29 PM, Barry Smith wrote: >> >>> >>> It is too big for email you can post it somewhere so we can download >>> it. >>> >>> >>> >>> > On Mar 7, 2017, at 12:01 PM, Kong, Fande wrote: >>> > >>> > >>> > >>> > On Tue, Mar 7, 2017 at 10:23 AM, Hong wrote: >>> > I checked >>> > MatILUFactorSymbolic_SeqBAIJ() and MatILUFactorSymbolic_SeqAIJ(), >>> > they are virtually same. Why the version for BAIJ is so much slower? >>> > I'll investigate it. >>> > >>> > Fande, >>> > How large is your matrix? Is it possible to send us your matrix so I >>> can test it? >>> > >>> > Thanks, Hong, >>> > >>> > It is a 3020875x3020875 matrix, and it is large. 
I can make a small one >>> if you like, but not sure it will reproduce this issue or not. >>> > >>> > Fande, >>> > >>> > >>> > >>> > Hong >>> > >>> > >>> > On Mon, Mar 6, 2017 at 9:08 PM, Barry Smith wrote: >>> > >>> > Thanks. Even the symbolic is slower for BAIJ. I don't like that, it >>> definitely should not be since it is (at least should be) doing a symbolic >>> factorization on a symbolic matrix 1/11th the size! >>> > >>> > Keep us informed. >>> > >>> > >>> > >>> > > On Mar 6, 2017, at 5:44 PM, Kong, Fande wrote: >>> > > >>> > > Thanks, Barry, >>> > > >>> > > Log info: >>> > > >>> > > AIJ: >>> > > >>> > > MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 >>> 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 >>> > > MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 >>> 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 >>> > > MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> > > >>> > > BAIJ: >>> > > >>> > > MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 >>> 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 >>> > > MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 >>> 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 >>> > > MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> > > >>> > > It looks like both MatSolve and MatLUFactorNum are slower. >>> > > >>> > > I will try your suggestions. >>> > > >>> > > Fande >>> > > >>> > > On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith >>> wrote: >>> > > >>> > > Note also that if the 11 by 11 blocks are actually sparse (and you >>> don't store all the zeros in the blocks in the AIJ format) then then AIJ >>> non-block factorization involves less floating point operations and less >>> memory access so can be faster than the BAIJ format, depending on "how >>> sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with >>> AIJ (with zeros maybe in certain locations) then the above is not true. >>> > > >>> > > >>> > > > On Mar 6, 2017, at 5:10 PM, Barry Smith wrote: >>> > > > >>> > > > >>> > > > This is because for block size 11 it is using calls to >>> LAPACK/BLAS for the block operations instead of custom routines for that >>> block size. >>> > > > >>> > > > Here is what you need to do. For a good sized case run both with >>> -log_view and check the time spent in >>> > > > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and >>> BAIJ. If they have a different number of function calls then divide by the >>> function call count to determine the time per function call. >>> > > > >>> > > > This will tell you which routine needs to be optimized first >>> either MatLUFactorNumeric or MatSolve. My guess is MatSolve. >>> > > > >>> > > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the >>> function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function >>> MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the >>> block size of 11. >>> > > > >>> > > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is >>> 11 it uses the new routine something like. >>> > > > >>> > > > if (both_identity) { >>> > > > if (b->bs == 11) >>> > > > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; >>> > > > } else { >>> > > > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; >>> > > > } >>> > > > >>> > > > Rerun and look at the new -log_view. Send all three -log_view to >>> use at this point. 
If this optimization helps and now >>> > > > MatLUFactorNumeric is the time sink you can do the process to >>> MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block >>> custom version. >>> > > > >>> > > > Barry >>> > > > >>> > > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande >>> wrote: >>> > > >> >>> > > >> >>> > > >> >>> > > >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan < >>> patrick.sanan at gmail.com> wrote: >>> > > >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande >>> wrote: >>> > > >>> Hi All, >>> > > >>> >>> > > >>> I am solving a nonlinear system whose Jacobian matrix has a block >>> structure. >>> > > >>> More precisely, there is a mesh, and for each vertex there are 11 >>> variables >>> > > >>> associated with it. I am using BAIJ. >>> > > >>> >>> > > >>> I thought block ILU(k) should be more efficient than the >>> point-wise ILU(k). >>> > > >>> After some numerical experiments, I found that the block ILU(K) >>> is much >>> > > >>> slower than the point-wise version. >>> > > >> Do you mean that it takes more iterations to converge, or that the >>> > > >> time per iteration is greater, or both? >>> > > >> >>> > > >> The number of iterations is very similar, but the timer per >>> iteration is greater. >>> > > >> >>> > > >> >>> > > >>> >>> > > >>> Any thoughts? >>> > > >>> >>> > > >>> Fande, >>> > > >> >>> > > > >>> > > >>> > > >>> > >>> > >>> > >>> >>> >> -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From fande.kong at inl.gov Tue Mar 7 16:21:00 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Tue, 7 Mar 2017 15:21:00 -0700 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: <87efy891gv.fsf@jedbrown.org> References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> <87efy891gv.fsf@jedbrown.org> Message-ID: On Tue, Mar 7, 2017 at 3:16 PM, Jed Brown wrote: > Hong writes: > > > Fande, > > Got it. Below are what I get: > > Is Fande using ILU(0) or ILU(k)? (And I think it should be possible to > get a somewhat larger benefit.) > I am using ILU(0). Will it be much better to use ILU(k>0)? 
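(Aside: the fill level itself is just a runtime choice, so it is cheap to experiment with. Assuming the standard PETSc factor options, something like

    -pc_type ilu -pc_factor_levels 1

on the command line, or PCFactorSetLevels(pc, 1) from code after KSPGetPC(ksp, &pc), switches from ILU(0) to ILU(1); higher levels give a denser, more expensive factorization that may or may not pay for itself in iteration counts.)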
Fande, > > > petsc/src/ksp/ksp/examples/tutorials (master) > > $ ./ex10 -f0 binaryoutput -rhs 0 -mat_view ascii::ascii_info > > Mat Object: 1 MPI processes > > type: seqaij > > rows=8019, cols=8019, bs=11 > > total: nonzeros=1890625, allocated nonzeros=1890625 > > total number of mallocs used during MatSetValues calls =0 > > using I-node routines: found 2187 nodes, limit used is 5 > > Number of iterations = 3 > > Residual norm 0.00200589 > > > > -mat_type aij > > MatMult 4 1.0 8.3621e-03 1.0 1.51e+07 1.0 0.0e+00 0.0e+00 > > 0.0e+00 6 7 0 0 0 7 7 0 0 0 1805 > > MatSolve 4 1.0 8.3971e-03 1.0 1.51e+07 1.0 0.0e+00 0.0e+00 > > 0.0e+00 6 7 0 0 0 7 7 0 0 0 1797 > > MatLUFactorNum 1 1.0 8.6171e-02 1.0 1.80e+08 1.0 0.0e+00 0.0e+00 > > 0.0e+00 57 85 0 0 0 70 85 0 0 0 2086 > > MatILUFactorSym 1 1.0 1.4951e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > > 0.0e+00 10 0 0 0 0 12 0 0 0 0 0 > > > > -mat_type baij > > MatMult 4 1.0 5.5540e-03 1.0 1.51e+07 1.0 0.0e+00 0.0e+00 > > 0.0e+00 4 5 0 0 0 7 5 0 0 0 2718 > > MatSolve 4 1.0 7.0803e-03 1.0 1.48e+07 1.0 0.0e+00 0.0e+00 > > 0.0e+00 5 5 0 0 0 8 5 0 0 0 2086 > > MatLUFactorNum 1 1.0 6.0118e-02 1.0 2.55e+08 1.0 0.0e+00 0.0e+00 > > 0.0e+00 42 89 0 0 0 72 89 0 0 0 4241 > > MatILUFactorSym 1 1.0 6.7251e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > > 0.0e+00 5 0 0 0 0 8 0 0 0 0 0 > > > > I ran it on my macpro. baij is faster than aij in all routines. > > > > Hong > > > > On Tue, Mar 7, 2017 at 2:26 PM, Kong, Fande wrote: > > > >> Uploaded to google drive, and sent you links in another email. Not sure > if > >> it works or not. > >> > >> Fande, > >> > >> On Tue, Mar 7, 2017 at 12:29 PM, Barry Smith > wrote: > >> > >>> > >>> It is too big for email you can post it somewhere so we can download > >>> it. > >>> > >>> > >>> > >>> > On Mar 7, 2017, at 12:01 PM, Kong, Fande wrote: > >>> > > >>> > > >>> > > >>> > On Tue, Mar 7, 2017 at 10:23 AM, Hong wrote: > >>> > I checked > >>> > MatILUFactorSymbolic_SeqBAIJ() and MatILUFactorSymbolic_SeqAIJ(), > >>> > they are virtually same. Why the version for BAIJ is so much slower? > >>> > I'll investigate it. > >>> > > >>> > Fande, > >>> > How large is your matrix? Is it possible to send us your matrix so I > >>> can test it? > >>> > > >>> > Thanks, Hong, > >>> > > >>> > It is a 3020875x3020875 matrix, and it is large. I can make a small > one > >>> if you like, but not sure it will reproduce this issue or not. > >>> > > >>> > Fande, > >>> > > >>> > > >>> > > >>> > Hong > >>> > > >>> > > >>> > On Mon, Mar 6, 2017 at 9:08 PM, Barry Smith > wrote: > >>> > > >>> > Thanks. Even the symbolic is slower for BAIJ. I don't like that, it > >>> definitely should not be since it is (at least should be) doing a > symbolic > >>> factorization on a symbolic matrix 1/11th the size! > >>> > > >>> > Keep us informed. 
> >>> > > >>> > > >>> > > >>> > > On Mar 6, 2017, at 5:44 PM, Kong, Fande > wrote: > >>> > > > >>> > > Thanks, Barry, > >>> > > > >>> > > Log info: > >>> > > > >>> > > AIJ: > >>> > > > >>> > > MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 > >>> 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 > >>> > > MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 > >>> 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 > >>> > > MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 > >>> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > >>> > > > >>> > > BAIJ: > >>> > > > >>> > > MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 > >>> 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 > >>> > > MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 > >>> 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 > >>> > > MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 > >>> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > >>> > > > >>> > > It looks like both MatSolve and MatLUFactorNum are slower. > >>> > > > >>> > > I will try your suggestions. > >>> > > > >>> > > Fande > >>> > > > >>> > > On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith > >>> wrote: > >>> > > > >>> > > Note also that if the 11 by 11 blocks are actually sparse (and > you > >>> don't store all the zeros in the blocks in the AIJ format) then then > AIJ > >>> non-block factorization involves less floating point operations and > less > >>> memory access so can be faster than the BAIJ format, depending on "how > >>> sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks > with > >>> AIJ (with zeros maybe in certain locations) then the above is not true. > >>> > > > >>> > > > >>> > > > On Mar 6, 2017, at 5:10 PM, Barry Smith > wrote: > >>> > > > > >>> > > > > >>> > > > This is because for block size 11 it is using calls to > >>> LAPACK/BLAS for the block operations instead of custom routines for > that > >>> block size. > >>> > > > > >>> > > > Here is what you need to do. For a good sized case run both > with > >>> -log_view and check the time spent in > >>> > > > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ > and > >>> BAIJ. If they have a different number of function calls then divide by > the > >>> function call count to determine the time per function call. > >>> > > > > >>> > > > This will tell you which routine needs to be optimized first > >>> either MatLUFactorNumeric or MatSolve. My guess is MatSolve. > >>> > > > > >>> > > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the > >>> function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function > >>> MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for > the > >>> block size of 11. > >>> > > > > >>> > > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size > is > >>> 11 it uses the new routine something like. > >>> > > > > >>> > > > if (both_identity) { > >>> > > > if (b->bs == 11) > >>> > > > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; > >>> > > > } else { > >>> > > > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; > >>> > > > } > >>> > > > > >>> > > > Rerun and look at the new -log_view. Send all three -log_view > to > >>> use at this point. If this optimization helps and now > >>> > > > MatLUFactorNumeric is the time sink you can do the process to > >>> MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size > block > >>> custom version. 
> >>> > > > > >>> > > > Barry > >>> > > > > >>> > > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande > >>> wrote: > >>> > > >> > >>> > > >> > >>> > > >> > >>> > > >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan < > >>> patrick.sanan at gmail.com> wrote: > >>> > > >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande > > >>> wrote: > >>> > > >>> Hi All, > >>> > > >>> > >>> > > >>> I am solving a nonlinear system whose Jacobian matrix has a > block > >>> structure. > >>> > > >>> More precisely, there is a mesh, and for each vertex there are > 11 > >>> variables > >>> > > >>> associated with it. I am using BAIJ. > >>> > > >>> > >>> > > >>> I thought block ILU(k) should be more efficient than the > >>> point-wise ILU(k). > >>> > > >>> After some numerical experiments, I found that the block ILU(K) > >>> is much > >>> > > >>> slower than the point-wise version. > >>> > > >> Do you mean that it takes more iterations to converge, or that > the > >>> > > >> time per iteration is greater, or both? > >>> > > >> > >>> > > >> The number of iterations is very similar, but the timer per > >>> iteration is greater. > >>> > > >> > >>> > > >> > >>> > > >>> > >>> > > >>> Any thoughts? > >>> > > >>> > >>> > > >>> Fande, > >>> > > >> > >>> > > > > >>> > > > >>> > > > >>> > > >>> > > >>> > > >>> > >>> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Mar 7 16:26:51 2017 From: jed at jedbrown.org (Jed Brown) Date: Tue, 07 Mar 2017 15:26:51 -0700 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> <87efy891gv.fsf@jedbrown.org> Message-ID: <878tog9104.fsf@jedbrown.org> "Kong, Fande" writes: > On Tue, Mar 7, 2017 at 3:16 PM, Jed Brown wrote: > >> Hong writes: >> >> > Fande, >> > Got it. Below are what I get: >> >> Is Fande using ILU(0) or ILU(k)? (And I think it should be possible to >> get a somewhat larger benefit.) >> > > > I am using ILU(0). Will it be much better to use ILU(k>0)? It'll be slower, but might converge faster. You asked about ILU(k) so I assumed you were interested in k>0. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From fande.kong at inl.gov Tue Mar 7 16:35:04 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Tue, 7 Mar 2017 15:35:04 -0700 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: <878tog9104.fsf@jedbrown.org> References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> <87efy891gv.fsf@jedbrown.org> <878tog9104.fsf@jedbrown.org> Message-ID: I found one issue on my side. The preallocation is not right for the BAIJ matrix. Will this slow down MatLUFactor and MatSolve? How to converge AIJ to BAIJ using a command-line option? Fande, On Tue, Mar 7, 2017 at 3:26 PM, Jed Brown wrote: > "Kong, Fande" writes: > > > On Tue, Mar 7, 2017 at 3:16 PM, Jed Brown wrote: > > > >> Hong writes: > >> > >> > Fande, > >> > Got it. Below are what I get: > >> > >> Is Fande using ILU(0) or ILU(k)? (And I think it should be possible to > >> get a somewhat larger benefit.) > >> > > > > > > I am using ILU(0). Will it be much better to use ILU(k>0)? > > It'll be slower, but might converge faster. 
You asked about ILU(k) so I > assumed you were interested in k>0. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Tue Mar 7 17:10:17 2017 From: hzhang at mcs.anl.gov (Hong) Date: Tue, 7 Mar 2017 17:10:17 -0600 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> <87efy891gv.fsf@jedbrown.org> <878tog9104.fsf@jedbrown.org> Message-ID: Fande : > I found one issue on my side. The preallocation is not right for the BAIJ > matrix. Will this slow down MatLUFactor and MatSolve? > preallocation should not affect ilu(0). > > How to converge AIJ to BAIJ using a command-line option? > -mat_type aij or -mat_type baij Hong > > > Fande, > > On Tue, Mar 7, 2017 at 3:26 PM, Jed Brown wrote: > >> "Kong, Fande" writes: >> >> > On Tue, Mar 7, 2017 at 3:16 PM, Jed Brown wrote: >> > >> >> Hong writes: >> >> >> >> > Fande, >> >> > Got it. Below are what I get: >> >> >> >> Is Fande using ILU(0) or ILU(k)? (And I think it should be possible to >> >> get a somewhat larger benefit.) >> >> >> > >> > >> > I am using ILU(0). Will it be much better to use ILU(k>0)? >> >> It'll be slower, but might converge faster. You asked about ILU(k) so I >> assumed you were interested in k>0. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 7 20:37:10 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 7 Mar 2017 20:37:10 -0600 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> <87efy891gv.fsf@jedbrown.org> <878tog9104.fsf@jedbrown.org> Message-ID: > On Mar 7, 2017, at 4:35 PM, Kong, Fande wrote: > > I found one issue on my side. The preallocation is not right for the BAIJ matrix. Will this slow down MatLUFactor and MatSolve? No, but you should still fix it. > > How to converge AIJ to BAIJ using a command-line option? Instead of using MatCreateSeq/MPIAIJ() at the command line you would use MatCreate() MatSetSizes() MatSetBlockSize() MatSetFromOptions() MatMPIAIJSetPreallocation() MatMPIBAIJSetPreallocation() and any other preallocations you want MatSetValues.....MatAssemblyBegin/End() Then you can use -mat_type baij or aij to set the type. Barry > > Fande, > > On Tue, Mar 7, 2017 at 3:26 PM, Jed Brown wrote: > "Kong, Fande" writes: > > > On Tue, Mar 7, 2017 at 3:16 PM, Jed Brown wrote: > > > >> Hong writes: > >> > >> > Fande, > >> > Got it. Below are what I get: > >> > >> Is Fande using ILU(0) or ILU(k)? (And I think it should be possible to > >> get a somewhat larger benefit.) > >> > > > > > > I am using ILU(0). Will it be much better to use ILU(k>0)? > > It'll be slower, but might converge faster. You asked about ILU(k) so I > assumed you were interested in k>0. > From bsmith at mcs.anl.gov Tue Mar 7 20:55:32 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 7 Mar 2017 20:55:32 -0600 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? 
In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> Message-ID: <1D6AF9F8-3AEF-42F0-9FF6-DDFA9B73D2E6@mcs.anl.gov> I have run your larger matrix on my laptop with "default" optimization (so --with-debugging=0) this is what I get ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flop --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ AIJ MatMult 5 1.0 7.7636e-02 1.0 1.42e+08 1.0 0.0e+00 0.0e+00 0.0e+00 12 16 0 0 0 16 16 0 0 0 1830 MatSolve 5 1.0 7.8164e-02 1.0 1.42e+08 1.0 0.0e+00 0.0e+00 0.0e+00 12 16 0 0 0 16 16 0 0 0 1818 MatLUFactorNum 1 1.0 2.3056e-01 1.0 5.95e+08 1.0 0.0e+00 0.0e+00 0.0e+00 35 67 0 0 0 46 67 0 0 0 2580 MatILUFactorSym 1 1.0 8.3201e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 13 0 0 0 0 17 0 0 0 0 0 BAIJ MatMult 5 1.0 5.3482e-02 1.0 1.42e+08 1.0 0.0e+00 0.0e+00 0.0e+00 6 6 0 0 0 9 6 0 0 0 2657 MatSolve 5 1.0 6.2669e-02 1.0 1.39e+08 1.0 0.0e+00 0.0e+00 0.0e+00 7 6 0 0 0 11 6 0 0 0 2224 MatLUFactorNum 1 1.0 3.7688e-01 1.0 2.12e+09 1.0 0.0e+00 0.0e+00 0.0e+00 40 88 0 0 0 66 88 0 0 0 5635 MatILUFactorSym 1 1.0 4.4828e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 5 0 0 0 0 8 0 0 0 0 0 So BAIJ symbolic is faster (which definitely should be). BAIJ MatMult and MatSolve are also faster, the numerical BAIJ factorization is slower. Providing custom code for block size 11 should definitely improve the performance of all three of these. I note that the number of iterations 5 is much less than in the case you emailed originally? Is this really the matrix of interest? Barry > On Mar 7, 2017, at 3:26 PM, Kong, Fande wrote: > > > > On Tue, Mar 7, 2017 at 2:07 PM, Barry Smith wrote: > > The matrix is too small. Please post ONE big matrix > > I am using "-ksp_view_pmat binary" to save the matrix. How can I save the latest one only for a time-dependent problem? > > > Fande, > > > > > On Mar 7, 2017, at 2:26 PM, Kong, Fande wrote: > > > > Uploaded to google drive, and sent you links in another email. Not sure if it works or not. > > > > Fande, > > > > On Tue, Mar 7, 2017 at 12:29 PM, Barry Smith wrote: > > > > It is too big for email you can post it somewhere so we can download it. > > > > > > > On Mar 7, 2017, at 12:01 PM, Kong, Fande wrote: > > > > > > > > > > > > On Tue, Mar 7, 2017 at 10:23 AM, Hong wrote: > > > I checked > > > MatILUFactorSymbolic_SeqBAIJ() and MatILUFactorSymbolic_SeqAIJ(), > > > they are virtually same. Why the version for BAIJ is so much slower? > > > I'll investigate it. > > > > > > Fande, > > > How large is your matrix? Is it possible to send us your matrix so I can test it? > > > > > > Thanks, Hong, > > > > > > It is a 3020875x3020875 matrix, and it is large. I can make a small one if you like, but not sure it will reproduce this issue or not. > > > > > > Fande, > > > > > > > > > > > > Hong > > > > > > > > > On Mon, Mar 6, 2017 at 9:08 PM, Barry Smith wrote: > > > > > > Thanks. Even the symbolic is slower for BAIJ. I don't like that, it definitely should not be since it is (at least should be) doing a symbolic factorization on a symbolic matrix 1/11th the size! > > > > > > Keep us informed. 
> > > > > > > > > > > > > On Mar 6, 2017, at 5:44 PM, Kong, Fande wrote: > > > > > > > > Thanks, Barry, > > > > > > > > Log info: > > > > > > > > AIJ: > > > > > > > > MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 > > > > MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 > > > > MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > > > > > BAIJ: > > > > > > > > MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 > > > > MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 > > > > MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > > > > > It looks like both MatSolve and MatLUFactorNum are slower. > > > > > > > > I will try your suggestions. > > > > > > > > Fande > > > > > > > > On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith wrote: > > > > > > > > Note also that if the 11 by 11 blocks are actually sparse (and you don't store all the zeros in the blocks in the AIJ format) then then AIJ non-block factorization involves less floating point operations and less memory access so can be faster than the BAIJ format, depending on "how sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with AIJ (with zeros maybe in certain locations) then the above is not true. > > > > > > > > > > > > > On Mar 6, 2017, at 5:10 PM, Barry Smith wrote: > > > > > > > > > > > > > > > This is because for block size 11 it is using calls to LAPACK/BLAS for the block operations instead of custom routines for that block size. > > > > > > > > > > Here is what you need to do. For a good sized case run both with -log_view and check the time spent in > > > > > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and BAIJ. If they have a different number of function calls then divide by the function call count to determine the time per function call. > > > > > > > > > > This will tell you which routine needs to be optimized first either MatLUFactorNumeric or MatSolve. My guess is MatSolve. > > > > > > > > > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the block size of 11. > > > > > > > > > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is 11 it uses the new routine something like. > > > > > > > > > > if (both_identity) { > > > > > if (b->bs == 11) > > > > > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; > > > > > } else { > > > > > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; > > > > > } > > > > > > > > > > Rerun and look at the new -log_view. Send all three -log_view to use at this point. If this optimization helps and now > > > > > MatLUFactorNumeric is the time sink you can do the process to MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block custom version. > > > > > > > > > > Barry > > > > > > > > > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande wrote: > > > > >> > > > > >> > > > > >> > > > > >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan wrote: > > > > >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande wrote: > > > > >>> Hi All, > > > > >>> > > > > >>> I am solving a nonlinear system whose Jacobian matrix has a block structure. 
> > > > >>> More precisely, there is a mesh, and for each vertex there are 11 variables > > > > >>> associated with it. I am using BAIJ. > > > > >>> > > > > >>> I thought block ILU(k) should be more efficient than the point-wise ILU(k). > > > > >>> After some numerical experiments, I found that the block ILU(K) is much > > > > >>> slower than the point-wise version. > > > > >> Do you mean that it takes more iterations to converge, or that the > > > > >> time per iteration is greater, or both? > > > > >> > > > > >> The number of iterations is very similar, but the timer per iteration is greater. > > > > >> > > > > >> > > > > >>> > > > > >>> Any thoughts? > > > > >>> > > > > >>> Fande, > > > > >> > > > > > > > > > > > > > > > > > > > > > > > > > > > > From fdkong.jd at gmail.com Tue Mar 7 21:41:01 2017 From: fdkong.jd at gmail.com (Fande Kong) Date: Tue, 7 Mar 2017 20:41:01 -0700 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> <87efy891gv.fsf@jedbrown.org> <878tog9104.fsf@jedbrown.org> Message-ID: On Tue, Mar 7, 2017 at 7:37 PM, Barry Smith wrote: > > > On Mar 7, 2017, at 4:35 PM, Kong, Fande wrote: > > > > I found one issue on my side. The preallocation is not right for the > BAIJ matrix. Will this slow down MatLUFactor and MatSolve? > > No, but you should still fix it. > > > > > How to converge AIJ to BAIJ using a command-line option? > > Instead of using MatCreateSeq/MPIAIJ() at the command line you would use > > MatCreate() > MatSetSizes() > MatSetBlockSize() > MatSetFromOptions() > MatSetFromOptions() has to be called before "MatXXXXSetPreallocation"? What happens if I call MatSetFromOptions() right after "MatXXXXSetPreallocation"? > MatMPIAIJSetPreallocation() > MatMPIBAIJSetPreallocation() and any other preallocations you want > MatSetValues.....MatAssemblyBegin/End() > > Then you can use -mat_type baij or aij to set the type. > > Barry > > > > > Fande, > > > > On Tue, Mar 7, 2017 at 3:26 PM, Jed Brown wrote: > > "Kong, Fande" writes: > > > > > On Tue, Mar 7, 2017 at 3:16 PM, Jed Brown wrote: > > > > > >> Hong writes: > > >> > > >> > Fande, > > >> > Got it. Below are what I get: > > >> > > >> Is Fande using ILU(0) or ILU(k)? (And I think it should be possible > to > > >> get a somewhat larger benefit.) > > >> > > > > > > > > > I am using ILU(0). Will it be much better to use ILU(k>0)? > > > > It'll be slower, but might converge faster. You asked about ILU(k) so I > > assumed you were interested in k>0. > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 7 21:41:21 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 7 Mar 2017 21:41:21 -0600 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: <1D6AF9F8-3AEF-42F0-9FF6-DDFA9B73D2E6@mcs.anl.gov> References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> <1D6AF9F8-3AEF-42F0-9FF6-DDFA9B73D2E6@mcs.anl.gov> Message-ID: <7D8AB891-767A-4286-B055-EF7E827D08CC@mcs.anl.gov> Just for kicks I added MatMult_SeqBAIJ_11 to master and obtained a new MatMult 5 1.0 4.4513e-02 1.0 1.94e+08 1.0 0.0e+00 0.0e+00 0.0e+00 5 8 0 0 0 8 8 0 0 0 2918 which demonstrates how the custom routines for different sizes can improve the performance. 
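To give a feel for what such a specialized routine does: the kernel is essentially a block-CSR matrix-vector product with the block size fixed at compile time, so the compiler can fully unroll the 11x11 inner loops instead of dispatching a general-purpose routine per block. A minimal sketch in plain C (illustrative only: the names, the row-major block layout, and the lack of any tuning are simplifications, not the actual MatMult_SeqBAIJ_11 source):

    #include <stddef.h>

    /* y = A*x for a block-CSR matrix with 11x11 blocks.
       mbs        : number of block rows
       ai[0..mbs] : start of each block row in aj/aa
       aj[]       : block-column index of each stored block
       aa[]       : the blocks themselves, 121 entries each (row-major here) */
    static void block_matmult_11(int mbs, const int *ai, const int *aj,
                                 const double *aa, const double *x, double *y)
    {
      enum { BS = 11, BS2 = BS*BS };
      for (int i = 0; i < mbs; i++) {
        double sum[BS] = {0.0};
        for (int j = ai[i]; j < ai[i+1]; j++) {
          const double *blk = aa + (size_t)BS2*j;     /* current 11x11 block */
          const double *xb  = x  + (size_t)BS*aj[j];  /* matching slice of x */
          for (int r = 0; r < BS; r++)                /* BS is known at compile time, */
            for (int c = 0; c < BS; c++)              /* so both inner loops unroll   */
              sum[r] += blk[BS*r + c]*xb[c];
        }
        for (int r = 0; r < BS; r++) y[(size_t)BS*i + r] = sum[r];
      }
    }

The routine actually added to PETSc differs in storage layout and tuning details, but the fixed-block-size unrolling is the point.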
Note that better prefetching hints and use of SIMD instructions for KNL could potentially improve the performance (a great deal) more. What hardware are you running on? > On Mar 7, 2017, at 8:55 PM, Barry Smith wrote: > > > I have run your larger matrix on my laptop with "default" optimization (so --with-debugging=0) this is what I get > > ------------------------------------------------------------------------------------------------------------------------ > Event Count Time (sec) Flop --- Global --- --- Stage --- Total > Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > ------------------------------------------------------------------------------------------------------------------------ > > AIJ > > MatMult 5 1.0 7.7636e-02 1.0 1.42e+08 1.0 0.0e+00 0.0e+00 0.0e+00 12 16 0 0 0 16 16 0 0 0 1830 > MatSolve 5 1.0 7.8164e-02 1.0 1.42e+08 1.0 0.0e+00 0.0e+00 0.0e+00 12 16 0 0 0 16 16 0 0 0 1818 > MatLUFactorNum 1 1.0 2.3056e-01 1.0 5.95e+08 1.0 0.0e+00 0.0e+00 0.0e+00 35 67 0 0 0 46 67 0 0 0 2580 > MatILUFactorSym 1 1.0 8.3201e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 13 0 0 0 0 17 0 0 0 0 0 > > BAIJ > > MatMult 5 1.0 5.3482e-02 1.0 1.42e+08 1.0 0.0e+00 0.0e+00 0.0e+00 6 6 0 0 0 9 6 0 0 0 2657 > MatSolve 5 1.0 6.2669e-02 1.0 1.39e+08 1.0 0.0e+00 0.0e+00 0.0e+00 7 6 0 0 0 11 6 0 0 0 2224 > MatLUFactorNum 1 1.0 3.7688e-01 1.0 2.12e+09 1.0 0.0e+00 0.0e+00 0.0e+00 40 88 0 0 0 66 88 0 0 0 5635 > MatILUFactorSym 1 1.0 4.4828e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 5 0 0 0 0 8 0 0 0 0 0 > > So BAIJ symbolic is faster (which definitely should be). BAIJ MatMult and MatSolve are also faster, the numerical BAIJ factorization is slower. > > Providing custom code for block size 11 should definitely improve the performance of all three of these. > > I note that the number of iterations 5 is much less than in the case you emailed originally? Is this really the matrix of interest? > > Barry > >> On Mar 7, 2017, at 3:26 PM, Kong, Fande wrote: >> >> >> >> On Tue, Mar 7, 2017 at 2:07 PM, Barry Smith wrote: >> >> The matrix is too small. Please post ONE big matrix >> >> I am using "-ksp_view_pmat binary" to save the matrix. How can I save the latest one only for a time-dependent problem? >> >> >> Fande, >> >> >> >>> On Mar 7, 2017, at 2:26 PM, Kong, Fande wrote: >>> >>> Uploaded to google drive, and sent you links in another email. Not sure if it works or not. >>> >>> Fande, >>> >>> On Tue, Mar 7, 2017 at 12:29 PM, Barry Smith wrote: >>> >>> It is too big for email you can post it somewhere so we can download it. >>> >>> >>>> On Mar 7, 2017, at 12:01 PM, Kong, Fande wrote: >>>> >>>> >>>> >>>> On Tue, Mar 7, 2017 at 10:23 AM, Hong wrote: >>>> I checked >>>> MatILUFactorSymbolic_SeqBAIJ() and MatILUFactorSymbolic_SeqAIJ(), >>>> they are virtually same. Why the version for BAIJ is so much slower? >>>> I'll investigate it. >>>> >>>> Fande, >>>> How large is your matrix? Is it possible to send us your matrix so I can test it? >>>> >>>> Thanks, Hong, >>>> >>>> It is a 3020875x3020875 matrix, and it is large. I can make a small one if you like, but not sure it will reproduce this issue or not. >>>> >>>> Fande, >>>> >>>> >>>> >>>> Hong >>>> >>>> >>>> On Mon, Mar 6, 2017 at 9:08 PM, Barry Smith wrote: >>>> >>>> Thanks. Even the symbolic is slower for BAIJ. I don't like that, it definitely should not be since it is (at least should be) doing a symbolic factorization on a symbolic matrix 1/11th the size! >>>> >>>> Keep us informed. 
>>>> >>>> >>>> >>>>> On Mar 6, 2017, at 5:44 PM, Kong, Fande wrote: >>>>> >>>>> Thanks, Barry, >>>>> >>>>> Log info: >>>>> >>>>> AIJ: >>>>> >>>>> MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 >>>>> MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 >>>>> MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>>>> >>>>> BAIJ: >>>>> >>>>> MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 >>>>> MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 >>>>> MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>>>> >>>>> It looks like both MatSolve and MatLUFactorNum are slower. >>>>> >>>>> I will try your suggestions. >>>>> >>>>> Fande >>>>> >>>>> On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith wrote: >>>>> >>>>> Note also that if the 11 by 11 blocks are actually sparse (and you don't store all the zeros in the blocks in the AIJ format) then then AIJ non-block factorization involves less floating point operations and less memory access so can be faster than the BAIJ format, depending on "how sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with AIJ (with zeros maybe in certain locations) then the above is not true. >>>>> >>>>> >>>>>> On Mar 6, 2017, at 5:10 PM, Barry Smith wrote: >>>>>> >>>>>> >>>>>> This is because for block size 11 it is using calls to LAPACK/BLAS for the block operations instead of custom routines for that block size. >>>>>> >>>>>> Here is what you need to do. For a good sized case run both with -log_view and check the time spent in >>>>>> MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ and BAIJ. If they have a different number of function calls then divide by the function call count to determine the time per function call. >>>>>> >>>>>> This will tell you which routine needs to be optimized first either MatLUFactorNumeric or MatSolve. My guess is MatSolve. >>>>>> >>>>>> So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the block size of 11. >>>>>> >>>>>> Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size is 11 it uses the new routine something like. >>>>>> >>>>>> if (both_identity) { >>>>>> if (b->bs == 11) >>>>>> C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; >>>>>> } else { >>>>>> C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; >>>>>> } >>>>>> >>>>>> Rerun and look at the new -log_view. Send all three -log_view to use at this point. If this optimization helps and now >>>>>> MatLUFactorNumeric is the time sink you can do the process to MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block custom version. >>>>>> >>>>>> Barry >>>>>> >>>>>>> On Mar 6, 2017, at 4:32 PM, Kong, Fande wrote: >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan wrote: >>>>>>> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande wrote: >>>>>>>> Hi All, >>>>>>>> >>>>>>>> I am solving a nonlinear system whose Jacobian matrix has a block structure. >>>>>>>> More precisely, there is a mesh, and for each vertex there are 11 variables >>>>>>>> associated with it. I am using BAIJ. >>>>>>>> >>>>>>>> I thought block ILU(k) should be more efficient than the point-wise ILU(k). 
>>>>>>>> After some numerical experiments, I found that the block ILU(K) is much >>>>>>>> slower than the point-wise version. >>>>>>> Do you mean that it takes more iterations to converge, or that the >>>>>>> time per iteration is greater, or both? >>>>>>> >>>>>>> The number of iterations is very similar, but the timer per iteration is greater. >>>>>>> >>>>>>> >>>>>>>> >>>>>>>> Any thoughts? >>>>>>>> >>>>>>>> Fande, >>>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>>> >>> >>> >> >> > From fdkong.jd at gmail.com Tue Mar 7 21:44:40 2017 From: fdkong.jd at gmail.com (Fande Kong) Date: Tue, 7 Mar 2017 20:44:40 -0700 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: <1D6AF9F8-3AEF-42F0-9FF6-DDFA9B73D2E6@mcs.anl.gov> References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> <1D6AF9F8-3AEF-42F0-9FF6-DDFA9B73D2E6@mcs.anl.gov> Message-ID: On Tue, Mar 7, 2017 at 7:55 PM, Barry Smith wrote: > > I have run your larger matrix on my laptop with "default" optimization > (so --with-debugging=0) this is what I get > > ------------------------------------------------------------ > ------------------------------------------------------------ > Event Count Time (sec) Flop > --- Global --- --- Stage --- Total > Max Ratio Max Ratio Max Ratio Mess Avg len > Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > ------------------------------------------------------------ > ------------------------------------------------------------ > > AIJ > > MatMult 5 1.0 7.7636e-02 1.0 1.42e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 12 16 0 0 0 16 16 0 0 0 1830 > MatSolve 5 1.0 7.8164e-02 1.0 1.42e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 12 16 0 0 0 16 16 0 0 0 1818 > MatLUFactorNum 1 1.0 2.3056e-01 1.0 5.95e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 35 67 0 0 0 46 67 0 0 0 2580 > MatILUFactorSym 1 1.0 8.3201e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 13 0 0 0 0 17 0 0 0 0 0 > > BAIJ > > MatMult 5 1.0 5.3482e-02 1.0 1.42e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 6 6 0 0 0 9 6 0 0 0 2657 > MatSolve 5 1.0 6.2669e-02 1.0 1.39e+08 1.0 0.0e+00 0.0e+00 > 0.0e+00 7 6 0 0 0 11 6 0 0 0 2224 > MatLUFactorNum 1 1.0 3.7688e-01 1.0 2.12e+09 1.0 0.0e+00 0.0e+00 > 0.0e+00 40 88 0 0 0 66 88 0 0 0 5635 > MatILUFactorSym 1 1.0 4.4828e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 5 0 0 0 0 8 0 0 0 0 0 > > So BAIJ symbolic is faster (which definitely should be). BAIJ MatMult and > MatSolve are also faster, the numerical BAIJ factorization is slower. > > Providing custom code for block size 11 should definitely improve the > performance of all three of these. > > I note that the number of iterations 5 is much less than in the case you > emailed originally? Is this really the matrix of interest? > The matrix given to you is the matrix for the first nonlinear iteration of the first time step. The number of iterations in the original email is for all nonlinear iterations and all time steps. Fande, > > Barry > > > On Mar 7, 2017, at 3:26 PM, Kong, Fande wrote: > > > > > > > > On Tue, Mar 7, 2017 at 2:07 PM, Barry Smith wrote: > > > > The matrix is too small. Please post ONE big matrix > > > > I am using "-ksp_view_pmat binary" to save the matrix. How can I save > the latest one only for a time-dependent problem? > > > > > > Fande, > > > > > > > > > On Mar 7, 2017, at 2:26 PM, Kong, Fande wrote: > > > > > > Uploaded to google drive, and sent you links in another email. Not > sure if it works or not. 
> > > > > > Fande, > > > > > > On Tue, Mar 7, 2017 at 12:29 PM, Barry Smith > wrote: > > > > > > It is too big for email you can post it somewhere so we can > download it. > > > > > > > > > > On Mar 7, 2017, at 12:01 PM, Kong, Fande wrote: > > > > > > > > > > > > > > > > On Tue, Mar 7, 2017 at 10:23 AM, Hong wrote: > > > > I checked > > > > MatILUFactorSymbolic_SeqBAIJ() and MatILUFactorSymbolic_SeqAIJ(), > > > > they are virtually same. Why the version for BAIJ is so much slower? > > > > I'll investigate it. > > > > > > > > Fande, > > > > How large is your matrix? Is it possible to send us your matrix so I > can test it? > > > > > > > > Thanks, Hong, > > > > > > > > It is a 3020875x3020875 matrix, and it is large. I can make a small > one if you like, but not sure it will reproduce this issue or not. > > > > > > > > Fande, > > > > > > > > > > > > > > > > Hong > > > > > > > > > > > > On Mon, Mar 6, 2017 at 9:08 PM, Barry Smith > wrote: > > > > > > > > Thanks. Even the symbolic is slower for BAIJ. I don't like that, > it definitely should not be since it is (at least should be) doing a > symbolic factorization on a symbolic matrix 1/11th the size! > > > > > > > > Keep us informed. > > > > > > > > > > > > > > > > > On Mar 6, 2017, at 5:44 PM, Kong, Fande > wrote: > > > > > > > > > > Thanks, Barry, > > > > > > > > > > Log info: > > > > > > > > > > AIJ: > > > > > > > > > > MatSolve 850 1.0 8.6543e+00 4.2 3.04e+09 1.8 0.0e+00 > 0.0e+00 0.0e+00 0 41 0 0 0 0 41 0 0 0 49594 > > > > > MatLUFactorNum 25 1.0 1.7622e+00 2.0 2.04e+09 2.1 0.0e+00 > 0.0e+00 0.0e+00 0 26 0 0 0 0 26 0 0 0 153394 > > > > > MatILUFactorSym 13 1.0 2.8002e-01 2.9 0.00e+00 0.0 0.0e+00 > 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > > > > > > > BAIJ: > > > > > > > > > > MatSolve 826 1.0 1.3016e+01 1.7 1.42e+10 1.8 0.0e+00 > 0.0e+00 0.0e+00 1 29 0 0 0 1 29 0 0 0 154617 > > > > > MatLUFactorNum 25 1.0 1.5503e+01 2.0 3.55e+10 2.1 0.0e+00 > 0.0e+00 0.0e+00 1 67 0 0 0 1 67 0 0 0 303190 > > > > > MatILUFactorSym 13 1.0 5.7561e-01 1.8 0.00e+00 0.0 0.0e+00 > 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > > > > > > > > > > It looks like both MatSolve and MatLUFactorNum are slower. > > > > > > > > > > I will try your suggestions. > > > > > > > > > > Fande > > > > > > > > > > On Mon, Mar 6, 2017 at 4:14 PM, Barry Smith > wrote: > > > > > > > > > > Note also that if the 11 by 11 blocks are actually sparse (and > you don't store all the zeros in the blocks in the AIJ format) then then > AIJ non-block factorization involves less floating point operations and > less memory access so can be faster than the BAIJ format, depending on "how > sparse" the blocks are. If you actually "fill in" the 11 by 11 blocks with > AIJ (with zeros maybe in certain locations) then the above is not true. > > > > > > > > > > > > > > > > On Mar 6, 2017, at 5:10 PM, Barry Smith > wrote: > > > > > > > > > > > > > > > > > > This is because for block size 11 it is using calls to > LAPACK/BLAS for the block operations instead of custom routines for that > block size. > > > > > > > > > > > > Here is what you need to do. For a good sized case run both > with -log_view and check the time spent in > > > > > > MatLUFactorNumeric, MatLUFactorSymbolic and in MatSolve for AIJ > and BAIJ. If they have a different number of function calls then divide by > the function call count to determine the time per function call. > > > > > > > > > > > > This will tell you which routine needs to be optimized first > either MatLUFactorNumeric or MatSolve. My guess is MatSolve. 
> > > > > > > > > > > > So edit src/mat/impls/baij/seq/baijsolvnat.c and copy the > function MatSolve_SeqBAIJ_15_NaturalOrdering_ver1() to a new function > MatSolve_SeqBAIJ_11_NaturalOrdering_ver1. Edit the new function for the > block size of 11. > > > > > > > > > > > > Now edit MatLUFactorNumeric_SeqBAIJ_N() so that if block size > is 11 it uses the new routine something like. > > > > > > > > > > > > if (both_identity) { > > > > > > if (b->bs == 11) > > > > > > C->ops->solve = MatSolve_SeqBAIJ_11_NaturalOrdering_ver1; > > > > > > } else { > > > > > > C->ops->solve = MatSolve_SeqBAIJ_N_NaturalOrdering; > > > > > > } > > > > > > > > > > > > Rerun and look at the new -log_view. Send all three -log_view > to use at this point. If this optimization helps and now > > > > > > MatLUFactorNumeric is the time sink you can do the process to > MatLUFactorNumeric_SeqBAIJ_15_NaturalOrdering() to make an 11 size block > custom version. > > > > > > > > > > > > Barry > > > > > > > > > > > >> On Mar 6, 2017, at 4:32 PM, Kong, Fande > wrote: > > > > > >> > > > > > >> > > > > > >> > > > > > >> On Mon, Mar 6, 2017 at 3:27 PM, Patrick Sanan < > patrick.sanan at gmail.com> wrote: > > > > > >> On Mon, Mar 6, 2017 at 1:48 PM, Kong, Fande > wrote: > > > > > >>> Hi All, > > > > > >>> > > > > > >>> I am solving a nonlinear system whose Jacobian matrix has a > block structure. > > > > > >>> More precisely, there is a mesh, and for each vertex there are > 11 variables > > > > > >>> associated with it. I am using BAIJ. > > > > > >>> > > > > > >>> I thought block ILU(k) should be more efficient than the > point-wise ILU(k). > > > > > >>> After some numerical experiments, I found that the block > ILU(K) is much > > > > > >>> slower than the point-wise version. > > > > > >> Do you mean that it takes more iterations to converge, or that > the > > > > > >> time per iteration is greater, or both? > > > > > >> > > > > > >> The number of iterations is very similar, but the timer per > iteration is greater. > > > > > >> > > > > > >> > > > > > >>> > > > > > >>> Any thoughts? > > > > > >>> > > > > > >>> Fande, > > > > > >> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 7 22:54:13 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 7 Mar 2017 22:54:13 -0600 Subject: [petsc-users] block ILU(K) is slower than the point-wise version? In-Reply-To: References: <0922EC8E-4261-4618-BA6F-788C32A7E080@mcs.anl.gov> <2911F494-6901-40F3-B345-5A0D3889B37B@mcs.anl.gov> <626D4E17-7B83-44AD-BE27-7F02BDED2FC6@mcs.anl.gov> <87efy891gv.fsf@jedbrown.org> <878tog9104.fsf@jedbrown.org> Message-ID: > On Mar 7, 2017, at 9:41 PM, Fande Kong wrote: > > > > On Tue, Mar 7, 2017 at 7:37 PM, Barry Smith wrote: > > > On Mar 7, 2017, at 4:35 PM, Kong, Fande wrote: > > > > I found one issue on my side. The preallocation is not right for the BAIJ matrix. Will this slow down MatLUFactor and MatSolve? > > No, but you should still fix it. > > > > > How to converge AIJ to BAIJ using a command-line option? > > Instead of using MatCreateSeq/MPIAIJ() at the command line you would use > > MatCreate() > MatSetSizes() > MatSetBlockSize() > MatSetFromOptions() > > MatSetFromOptions() has to be called before "MatXXXXSetPreallocation"? What happens if I call MatSetFromOptions() right after "MatXXXXSetPreallocation"? To late! The type has to be set before the preallocation, otherwise the preallocation is ignored. 
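For concreteness, a minimal sketch of that ordering; the global size, the block size of 11, and the nonzero estimates below are placeholders for the application's own numbers:

#include <petscmat.h>

/* Sketch only: create a matrix whose format can be picked at run time with
   -mat_type aij or -mat_type baij. Sizes and nonzero estimates are placeholders. */
PetscErrorCode CreateRunTimeTypedMatrix(MPI_Comm comm,PetscInt M,Mat *A)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatCreate(comm,A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A,PETSC_DECIDE,PETSC_DECIDE,M,M);CHKERRQ(ierr);
  ierr = MatSetBlockSize(*A,11);CHKERRQ(ierr);
  ierr = MatSetFromOptions(*A);CHKERRQ(ierr);   /* the type is fixed here, before preallocation */
  /* Preallocate after the type is set; the calls that do not match the chosen type are ignored */
  ierr = MatSeqAIJSetPreallocation(*A,110,NULL);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(*A,110,NULL,55,NULL);CHKERRQ(ierr);
  ierr = MatSeqBAIJSetPreallocation(*A,11,10,NULL);CHKERRQ(ierr);
  ierr = MatMPIBAIJSetPreallocation(*A,11,10,NULL,5,NULL);CHKERRQ(ierr);
  /* ... MatSetValues()/MatSetValuesBlocked() loop goes here ... */
  ierr = MatAssemblyBegin(*A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(*A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With -mat_type aij or -mat_type baij on the command line the same assembly code then produces either format, which turns the AIJ/BAIJ timing comparison above into a one-option switch.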
Note there is a a MatXAIJSetPreallocation() that works for both AIJ and BAIJ matrices in one line. > > MatMPIAIJSetPreallocation() > MatMPIBAIJSetPreallocation() and any other preallocations you want > MatSetValues.....MatAssemblyBegin/End() > > Then you can use -mat_type baij or aij to set the type. > > Barry > > > > > Fande, > > > > On Tue, Mar 7, 2017 at 3:26 PM, Jed Brown wrote: > > "Kong, Fande" writes: > > > > > On Tue, Mar 7, 2017 at 3:16 PM, Jed Brown wrote: > > > > > >> Hong writes: > > >> > > >> > Fande, > > >> > Got it. Below are what I get: > > >> > > >> Is Fande using ILU(0) or ILU(k)? (And I think it should be possible to > > >> get a somewhat larger benefit.) > > >> > > > > > > > > > I am using ILU(0). Will it be much better to use ILU(k>0)? > > > > It'll be slower, but might converge faster. You asked about ILU(k) so I > > assumed you were interested in k>0. > > > > From fande.kong at inl.gov Wed Mar 8 10:26:35 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Wed, 8 Mar 2017 09:26:35 -0700 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only Message-ID: Hi All, The NONE norm type is supported only when CG is used with a right preconditioner. Any reason for this? *0 Nonlinear |R| = 1.732051e+00 0 Linear |R| = 0.000000e+00 1 Linear |R| = 0.000000e+00 2 Linear |R| = 0.000000e+00 3 Linear |R| = 0.000000e+00 4 Linear |R| = 0.000000e+00 5 Linear |R| = 0.000000e+00 6 Linear |R| = 0.000000e+00 1 Nonlinear |R| = 1.769225e-08 0 Linear |R| = 0.000000e+00 1 Linear |R| = 0.000000e+00 2 Linear |R| = 0.000000e+00 3 Linear |R| = 0.000000e+00 4 Linear |R| = 0.000000e+00 5 Linear |R| = 0.000000e+00 6 Linear |R| = 0.000000e+00 7 Linear |R| = 0.000000e+00 8 Linear |R| = 0.000000e+00 9 Linear |R| = 0.000000e+00 10 Linear |R| = 0.000000e+00 2 Nonlinear |R| = 0.000000e+00SNES Object: 1 MPI processes type: newtonls maximum iterations=50, maximum function evaluations=10000 tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 total number of linear solver iterations=18 total number of function evaluations=23 norm schedule ALWAYS SNESLineSearch Object: 1 MPI processes type: bt interpolation: cubic alpha=1.000000e-04 maxstep=1.000000e+08, minlambda=1.000000e-12 tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 maximum iterations=40 KSP Object: 1 MPI processes type: cg maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. right preconditioning using NONE norm type for convergence test PC Object: 1 MPI processes type: hypre HYPRE BoomerAMG preconditioning HYPRE BoomerAMG: Cycle type V HYPRE BoomerAMG: Maximum number of levels 25 HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. HYPRE BoomerAMG: Threshold for strong coupling 0.25 HYPRE BoomerAMG: Interpolation truncation factor 0. HYPRE BoomerAMG: Interpolation: max elements per row 0 HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 HYPRE BoomerAMG: Maximum row sums 0.9 HYPRE BoomerAMG: Sweeps down 1 HYPRE BoomerAMG: Sweeps up 1 HYPRE BoomerAMG: Sweeps on coarse 1 HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi HYPRE BoomerAMG: Relax on coarse Gaussian-elimination HYPRE BoomerAMG: Relax weight (all) 1. HYPRE BoomerAMG: Outer relax weight (all) 1. HYPRE BoomerAMG: Using CF-relaxation HYPRE BoomerAMG: Not using more complex smoothers. 
HYPRE BoomerAMG: Measure type local HYPRE BoomerAMG: Coarsen type Falgout HYPRE BoomerAMG: Interpolation type classical linear system matrix followed by preconditioner matrix: Mat Object: 1 MPI processes type: mffd rows=9, cols=9 Matrix-free approximation: err=1.49012e-08 (relative error in function evaluation) Using wp compute h routine Does not compute normU Mat Object: () 1 MPI processes type: seqaij rows=9, cols=9 total: nonzeros=49, allocated nonzeros=49 total number of mallocs used during MatSetValues calls =0 not using I-node routines* Fande, -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Mar 8 10:33:55 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 8 Mar 2017 10:33:55 -0600 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only In-Reply-To: References: Message-ID: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> Please tell us how you got this output. PETSc CG doesn't even implement right preconditioning. If you ask for it it should error out. CG supports no norm computation with left preconditioning. Barry > On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: > > Hi All, > > The NONE norm type is supported only when CG is used with a right preconditioner. Any reason for this? > > > > 0 Nonlinear |R| = 1.732051e+00 > 0 Linear |R| = 0.000000e+00 > 1 Linear |R| = 0.000000e+00 > 2 Linear |R| = 0.000000e+00 > 3 Linear |R| = 0.000000e+00 > 4 Linear |R| = 0.000000e+00 > 5 Linear |R| = 0.000000e+00 > 6 Linear |R| = 0.000000e+00 > 1 Nonlinear |R| = 1.769225e-08 > 0 Linear |R| = 0.000000e+00 > 1 Linear |R| = 0.000000e+00 > 2 Linear |R| = 0.000000e+00 > 3 Linear |R| = 0.000000e+00 > 4 Linear |R| = 0.000000e+00 > 5 Linear |R| = 0.000000e+00 > 6 Linear |R| = 0.000000e+00 > 7 Linear |R| = 0.000000e+00 > 8 Linear |R| = 0.000000e+00 > 9 Linear |R| = 0.000000e+00 > 10 Linear |R| = 0.000000e+00 > 2 Nonlinear |R| = 0.000000e+00 > SNES Object: 1 MPI processes > type: newtonls > maximum iterations=50, maximum function evaluations=10000 > tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 > total number of linear solver iterations=18 > total number of function evaluations=23 > norm schedule ALWAYS > SNESLineSearch Object: 1 MPI processes > type: bt > interpolation: cubic > alpha=1.000000e-04 > maxstep=1.000000e+08, minlambda=1.000000e-12 > tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 > maximum iterations=40 > KSP Object: 1 MPI processes > type: cg > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > right preconditioning > using NONE norm type for convergence test > PC Object: 1 MPI processes > type: hypre > HYPRE BoomerAMG preconditioning > HYPRE BoomerAMG: Cycle type V > HYPRE BoomerAMG: Maximum number of levels 25 > HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 > HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. > HYPRE BoomerAMG: Threshold for strong coupling 0.25 > HYPRE BoomerAMG: Interpolation truncation factor 0. 
> HYPRE BoomerAMG: Interpolation: max elements per row 0 > HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 > HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 > HYPRE BoomerAMG: Maximum row sums 0.9 > HYPRE BoomerAMG: Sweeps down 1 > HYPRE BoomerAMG: Sweeps up 1 > HYPRE BoomerAMG: Sweeps on coarse 1 > HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi > HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi > HYPRE BoomerAMG: Relax on coarse Gaussian-elimination > HYPRE BoomerAMG: Relax weight (all) 1. > HYPRE BoomerAMG: Outer relax weight (all) 1. > HYPRE BoomerAMG: Using CF-relaxation > HYPRE BoomerAMG: Not using more complex smoothers. > HYPRE BoomerAMG: Measure type local > HYPRE BoomerAMG: Coarsen type Falgout > HYPRE BoomerAMG: Interpolation type classical > linear system matrix followed by preconditioner matrix: > Mat Object: 1 MPI processes > type: mffd > rows=9, cols=9 > Matrix-free approximation: > err=1.49012e-08 (relative error in function evaluation) > Using wp compute h routine > Does not compute normU > Mat Object: () 1 MPI processes > type: seqaij > rows=9, cols=9 > total: nonzeros=49, allocated nonzeros=49 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > > Fande, > From fande.kong at inl.gov Wed Mar 8 10:47:54 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Wed, 8 Mar 2017 09:47:54 -0700 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only In-Reply-To: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> References: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> Message-ID: Thanks Barry, We are using "KSPSetPCSide(ksp, pcside)" in the code. I just tried "-ksp_pc_side right", and petsc did not error out. I like to understand why CG does not work with right preconditioning? Mathematically, the right preconditioning does not make sense? Fande, On Wed, Mar 8, 2017 at 9:33 AM, Barry Smith wrote: > > Please tell us how you got this output. > > PETSc CG doesn't even implement right preconditioning. If you ask for it > it should error out. CG supports no norm computation with left > preconditioning. > > Barry > > > On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: > > > > Hi All, > > > > The NONE norm type is supported only when CG is used with a right > preconditioner. Any reason for this? 
> > > > > > > > 0 Nonlinear |R| = 1.732051e+00 > > 0 Linear |R| = 0.000000e+00 > > 1 Linear |R| = 0.000000e+00 > > 2 Linear |R| = 0.000000e+00 > > 3 Linear |R| = 0.000000e+00 > > 4 Linear |R| = 0.000000e+00 > > 5 Linear |R| = 0.000000e+00 > > 6 Linear |R| = 0.000000e+00 > > 1 Nonlinear |R| = 1.769225e-08 > > 0 Linear |R| = 0.000000e+00 > > 1 Linear |R| = 0.000000e+00 > > 2 Linear |R| = 0.000000e+00 > > 3 Linear |R| = 0.000000e+00 > > 4 Linear |R| = 0.000000e+00 > > 5 Linear |R| = 0.000000e+00 > > 6 Linear |R| = 0.000000e+00 > > 7 Linear |R| = 0.000000e+00 > > 8 Linear |R| = 0.000000e+00 > > 9 Linear |R| = 0.000000e+00 > > 10 Linear |R| = 0.000000e+00 > > 2 Nonlinear |R| = 0.000000e+00 > > SNES Object: 1 MPI processes > > type: newtonls > > maximum iterations=50, maximum function evaluations=10000 > > tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 > > total number of linear solver iterations=18 > > total number of function evaluations=23 > > norm schedule ALWAYS > > SNESLineSearch Object: 1 MPI processes > > type: bt > > interpolation: cubic > > alpha=1.000000e-04 > > maxstep=1.000000e+08, minlambda=1.000000e-12 > > tolerances: relative=1.000000e-08, absolute=1.000000e-15, > lambda=1.000000e-08 > > maximum iterations=40 > > KSP Object: 1 MPI processes > > type: cg > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > > right preconditioning > > using NONE norm type for convergence test > > PC Object: 1 MPI processes > > type: hypre > > HYPRE BoomerAMG preconditioning > > HYPRE BoomerAMG: Cycle type V > > HYPRE BoomerAMG: Maximum number of levels 25 > > HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 > > HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. > > HYPRE BoomerAMG: Threshold for strong coupling 0.25 > > HYPRE BoomerAMG: Interpolation truncation factor 0. > > HYPRE BoomerAMG: Interpolation: max elements per row 0 > > HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 > > HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 > > HYPRE BoomerAMG: Maximum row sums 0.9 > > HYPRE BoomerAMG: Sweeps down 1 > > HYPRE BoomerAMG: Sweeps up 1 > > HYPRE BoomerAMG: Sweeps on coarse 1 > > HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi > > HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi > > HYPRE BoomerAMG: Relax on coarse Gaussian-elimination > > HYPRE BoomerAMG: Relax weight (all) 1. > > HYPRE BoomerAMG: Outer relax weight (all) 1. > > HYPRE BoomerAMG: Using CF-relaxation > > HYPRE BoomerAMG: Not using more complex smoothers. > > HYPRE BoomerAMG: Measure type local > > HYPRE BoomerAMG: Coarsen type Falgout > > HYPRE BoomerAMG: Interpolation type classical > > linear system matrix followed by preconditioner matrix: > > Mat Object: 1 MPI processes > > type: mffd > > rows=9, cols=9 > > Matrix-free approximation: > > err=1.49012e-08 (relative error in function evaluation) > > Using wp compute h routine > > Does not compute normU > > Mat Object: () 1 MPI processes > > type: seqaij > > rows=9, cols=9 > > total: nonzeros=49, allocated nonzeros=49 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node routines > > > > Fande, > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Wed Mar 8 11:19:15 2017 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 8 Mar 2017 11:19:15 -0600 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only In-Reply-To: References: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> Message-ID: On Wed, Mar 8, 2017 at 10:47 AM, Kong, Fande wrote: > Thanks Barry, > > We are using "KSPSetPCSide(ksp, pcside)" in the code. I just tried > "-ksp_pc_side right", and petsc did not error out. > > I like to understand why CG does not work with right preconditioning? > Mathematically, the right preconditioning does not make sense? > cd src/snes/examples/tutorials knepley/feature-plasma-example $:/PETSc3/petsc/petsc-dev/src/snes/examples/tutorials$ ./ex5 -ksp_view -ksp_type cg -ksp_pc_side right -ksp_error_if_not_converged KSP Object: 1 MPI processes type: cg maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. right preconditioning using NONE norm type for convergence test PC Object: 1 MPI processes type: ilu out-of-place factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 matrix ordering: natural factor fill ratio given 1., needed 1. Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=16, cols=16 package used to perform factorization: petsc total: nonzeros=64, allocated nonzeros=64 total number of mallocs used during MatSetValues calls =0 not using I-node routines linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=16, cols=16 total: nonzeros=64, allocated nonzeros=64 total number of mallocs used during MatSetValues calls =0 not using I-node routines [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: [0]PETSC ERROR: KSPSolve has not converged [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Development GIT revision: v3.7.5-3127-ge9f6087 GIT Date: 2017-02-11 13:06:34 -0600 [0]PETSC ERROR: ./ex5 on a arch-c-exodus-master named MATTHEW-KNEPLEYs-MacBook-Air-2.local by knepley Wed Mar 8 11:17:43 2017 [0]PETSC ERROR: Configure options --PETSC_ARCH=arch-c-exodus-master --download-chaco --download-cmake --download-ctetgen --download-eigen --download-exodusii --download-gmp --download-hdf5 --download-metis --download-mpfr --download-mpich --download-netcdf --download-p4est --download-parmetis --download-pragmatic --download-triangle --useThreads=1 --with-cc="/Users/knepley/MacSoftware/bin/ccache gcc -Qunused-arguments" --with-cxx="/Users/knepley/MacSoftware/bin/ccache g++ -Qunused-arguments" --with-fc="/Users/knepley/MacSoftware/bin/ccache gfortran" --with-shared-libraries [0]PETSC ERROR: #1 KSPSolve() line 847 in /PETSc3/petsc/petsc-dev/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #2 SNESSolve_NEWTONLS() line 224 in /PETSc3/petsc/petsc-dev/src/snes/impls/ls/ls.c [0]PETSC ERROR: #3 SNESSolve() line 3967 in /PETSc3/petsc/petsc-dev/src/snes/interface/snes.c [0]PETSC ERROR: #4 main() line 187 in /PETSc3/petsc/petsc-dev/src/snes/examples/tutorials/ex5.c [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -ksp_error_if_not_converged [0]PETSC ERROR: -ksp_pc_side right [0]PETSC ERROR: -ksp_type cg [0]PETSC ERROR: -ksp_view [0]PETSC ERROR: -malloc_test [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- So we are not getting an error Matt > > Fande, > > On Wed, Mar 8, 2017 at 9:33 AM, Barry Smith wrote: > >> >> Please tell us how you got this output. >> >> PETSc CG doesn't even implement right preconditioning. If you ask for >> it it should error out. CG supports no norm computation with left >> preconditioning. >> >> Barry >> >> > On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: >> > >> > Hi All, >> > >> > The NONE norm type is supported only when CG is used with a right >> preconditioner. Any reason for this? >> > >> > >> > >> > 0 Nonlinear |R| = 1.732051e+00 >> > 0 Linear |R| = 0.000000e+00 >> > 1 Linear |R| = 0.000000e+00 >> > 2 Linear |R| = 0.000000e+00 >> > 3 Linear |R| = 0.000000e+00 >> > 4 Linear |R| = 0.000000e+00 >> > 5 Linear |R| = 0.000000e+00 >> > 6 Linear |R| = 0.000000e+00 >> > 1 Nonlinear |R| = 1.769225e-08 >> > 0 Linear |R| = 0.000000e+00 >> > 1 Linear |R| = 0.000000e+00 >> > 2 Linear |R| = 0.000000e+00 >> > 3 Linear |R| = 0.000000e+00 >> > 4 Linear |R| = 0.000000e+00 >> > 5 Linear |R| = 0.000000e+00 >> > 6 Linear |R| = 0.000000e+00 >> > 7 Linear |R| = 0.000000e+00 >> > 8 Linear |R| = 0.000000e+00 >> > 9 Linear |R| = 0.000000e+00 >> > 10 Linear |R| = 0.000000e+00 >> > 2 Nonlinear |R| = 0.000000e+00 >> > SNES Object: 1 MPI processes >> > type: newtonls >> > maximum iterations=50, maximum function evaluations=10000 >> > tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 >> > total number of linear solver iterations=18 >> > total number of function evaluations=23 >> > norm schedule ALWAYS >> > SNESLineSearch Object: 1 MPI processes >> > type: bt >> > interpolation: cubic >> > alpha=1.000000e-04 >> > maxstep=1.000000e+08, minlambda=1.000000e-12 >> > tolerances: relative=1.000000e-08, absolute=1.000000e-15, >> lambda=1.000000e-08 >> > maximum iterations=40 >> > KSP Object: 1 MPI processes >> > type: cg >> > maximum iterations=10000, initial guess is zero >> > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
>> > right preconditioning >> > using NONE norm type for convergence test >> > PC Object: 1 MPI processes >> > type: hypre >> > HYPRE BoomerAMG preconditioning >> > HYPRE BoomerAMG: Cycle type V >> > HYPRE BoomerAMG: Maximum number of levels 25 >> > HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 >> > HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. >> > HYPRE BoomerAMG: Threshold for strong coupling 0.25 >> > HYPRE BoomerAMG: Interpolation truncation factor 0. >> > HYPRE BoomerAMG: Interpolation: max elements per row 0 >> > HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 >> > HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 >> > HYPRE BoomerAMG: Maximum row sums 0.9 >> > HYPRE BoomerAMG: Sweeps down 1 >> > HYPRE BoomerAMG: Sweeps up 1 >> > HYPRE BoomerAMG: Sweeps on coarse 1 >> > HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi >> > HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi >> > HYPRE BoomerAMG: Relax on coarse Gaussian-elimination >> > HYPRE BoomerAMG: Relax weight (all) 1. >> > HYPRE BoomerAMG: Outer relax weight (all) 1. >> > HYPRE BoomerAMG: Using CF-relaxation >> > HYPRE BoomerAMG: Not using more complex smoothers. >> > HYPRE BoomerAMG: Measure type local >> > HYPRE BoomerAMG: Coarsen type Falgout >> > HYPRE BoomerAMG: Interpolation type classical >> > linear system matrix followed by preconditioner matrix: >> > Mat Object: 1 MPI processes >> > type: mffd >> > rows=9, cols=9 >> > Matrix-free approximation: >> > err=1.49012e-08 (relative error in function evaluation) >> > Using wp compute h routine >> > Does not compute normU >> > Mat Object: () 1 MPI processes >> > type: seqaij >> > rows=9, cols=9 >> > total: nonzeros=49, allocated nonzeros=49 >> > total number of mallocs used during MatSetValues calls =0 >> > not using I-node routines >> > >> > Fande, >> > >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Mar 8 13:10:07 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 8 Mar 2017 13:10:07 -0600 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only In-Reply-To: References: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> Message-ID: We should be getting an error. > On Mar 8, 2017, at 11:19 AM, Matthew Knepley wrote: > > On Wed, Mar 8, 2017 at 10:47 AM, Kong, Fande wrote: > Thanks Barry, > > We are using "KSPSetPCSide(ksp, pcside)" in the code. I just tried "-ksp_pc_side right", and petsc did not error out. > > I like to understand why CG does not work with right preconditioning? Mathematically, the right preconditioning does not make sense? > > cd src/snes/examples/tutorials > knepley/feature-plasma-example $:/PETSc3/petsc/petsc-dev/src/snes/examples/tutorials$ ./ex5 -ksp_view -ksp_type cg -ksp_pc_side right -ksp_error_if_not_converged > KSP Object: 1 MPI processes > type: cg > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > right preconditioning > using NONE norm type for convergence test > PC Object: 1 MPI processes > type: ilu > out-of-place factorization > 0 levels of fill > tolerance for zero pivot 2.22045e-14 > matrix ordering: natural > factor fill ratio given 1., needed 1. 
> Factored matrix follows: > Mat Object: 1 MPI processes > type: seqaij > rows=16, cols=16 > package used to perform factorization: petsc > total: nonzeros=64, allocated nonzeros=64 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij > rows=16, cols=16 > total: nonzeros=64, allocated nonzeros=64 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: > [0]PETSC ERROR: KSPSolve has not converged > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Development GIT revision: v3.7.5-3127-ge9f6087 GIT Date: 2017-02-11 13:06:34 -0600 > [0]PETSC ERROR: ./ex5 on a arch-c-exodus-master named MATTHEW-KNEPLEYs-MacBook-Air-2.local by knepley Wed Mar 8 11:17:43 2017 > [0]PETSC ERROR: Configure options --PETSC_ARCH=arch-c-exodus-master --download-chaco --download-cmake --download-ctetgen --download-eigen --download-exodusii --download-gmp --download-hdf5 --download-metis --download-mpfr --download-mpich --download-netcdf --download-p4est --download-parmetis --download-pragmatic --download-triangle --useThreads=1 --with-cc="/Users/knepley/MacSoftware/bin/ccache gcc -Qunused-arguments" --with-cxx="/Users/knepley/MacSoftware/bin/ccache g++ -Qunused-arguments" --with-fc="/Users/knepley/MacSoftware/bin/ccache gfortran" --with-shared-libraries > [0]PETSC ERROR: #1 KSPSolve() line 847 in /PETSc3/petsc/petsc-dev/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #2 SNESSolve_NEWTONLS() line 224 in /PETSc3/petsc/petsc-dev/src/snes/impls/ls/ls.c > [0]PETSC ERROR: #3 SNESSolve() line 3967 in /PETSc3/petsc/petsc-dev/src/snes/interface/snes.c > [0]PETSC ERROR: #4 main() line 187 in /PETSc3/petsc/petsc-dev/src/snes/examples/tutorials/ex5.c > [0]PETSC ERROR: PETSc Option Table entries: > [0]PETSC ERROR: -ksp_error_if_not_converged > [0]PETSC ERROR: -ksp_pc_side right > [0]PETSC ERROR: -ksp_type cg > [0]PETSC ERROR: -ksp_view > [0]PETSC ERROR: -malloc_test > [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- > > So we are not getting an error > > Matt > > > Fande, > > On Wed, Mar 8, 2017 at 9:33 AM, Barry Smith wrote: > > Please tell us how you got this output. > > PETSc CG doesn't even implement right preconditioning. If you ask for it it should error out. CG supports no norm computation with left preconditioning. > > Barry > > > On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: > > > > Hi All, > > > > The NONE norm type is supported only when CG is used with a right preconditioner. Any reason for this? 
> > > > > > > > 0 Nonlinear |R| = 1.732051e+00 > > 0 Linear |R| = 0.000000e+00 > > 1 Linear |R| = 0.000000e+00 > > 2 Linear |R| = 0.000000e+00 > > 3 Linear |R| = 0.000000e+00 > > 4 Linear |R| = 0.000000e+00 > > 5 Linear |R| = 0.000000e+00 > > 6 Linear |R| = 0.000000e+00 > > 1 Nonlinear |R| = 1.769225e-08 > > 0 Linear |R| = 0.000000e+00 > > 1 Linear |R| = 0.000000e+00 > > 2 Linear |R| = 0.000000e+00 > > 3 Linear |R| = 0.000000e+00 > > 4 Linear |R| = 0.000000e+00 > > 5 Linear |R| = 0.000000e+00 > > 6 Linear |R| = 0.000000e+00 > > 7 Linear |R| = 0.000000e+00 > > 8 Linear |R| = 0.000000e+00 > > 9 Linear |R| = 0.000000e+00 > > 10 Linear |R| = 0.000000e+00 > > 2 Nonlinear |R| = 0.000000e+00 > > SNES Object: 1 MPI processes > > type: newtonls > > maximum iterations=50, maximum function evaluations=10000 > > tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 > > total number of linear solver iterations=18 > > total number of function evaluations=23 > > norm schedule ALWAYS > > SNESLineSearch Object: 1 MPI processes > > type: bt > > interpolation: cubic > > alpha=1.000000e-04 > > maxstep=1.000000e+08, minlambda=1.000000e-12 > > tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 > > maximum iterations=40 > > KSP Object: 1 MPI processes > > type: cg > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > > right preconditioning > > using NONE norm type for convergence test > > PC Object: 1 MPI processes > > type: hypre > > HYPRE BoomerAMG preconditioning > > HYPRE BoomerAMG: Cycle type V > > HYPRE BoomerAMG: Maximum number of levels 25 > > HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 > > HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. > > HYPRE BoomerAMG: Threshold for strong coupling 0.25 > > HYPRE BoomerAMG: Interpolation truncation factor 0. > > HYPRE BoomerAMG: Interpolation: max elements per row 0 > > HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 > > HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 > > HYPRE BoomerAMG: Maximum row sums 0.9 > > HYPRE BoomerAMG: Sweeps down 1 > > HYPRE BoomerAMG: Sweeps up 1 > > HYPRE BoomerAMG: Sweeps on coarse 1 > > HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi > > HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi > > HYPRE BoomerAMG: Relax on coarse Gaussian-elimination > > HYPRE BoomerAMG: Relax weight (all) 1. > > HYPRE BoomerAMG: Outer relax weight (all) 1. > > HYPRE BoomerAMG: Using CF-relaxation > > HYPRE BoomerAMG: Not using more complex smoothers. > > HYPRE BoomerAMG: Measure type local > > HYPRE BoomerAMG: Coarsen type Falgout > > HYPRE BoomerAMG: Interpolation type classical > > linear system matrix followed by preconditioner matrix: > > Mat Object: 1 MPI processes > > type: mffd > > rows=9, cols=9 > > Matrix-free approximation: > > err=1.49012e-08 (relative error in function evaluation) > > Using wp compute h routine > > Does not compute normU > > Mat Object: () 1 MPI processes > > type: seqaij > > rows=9, cols=9 > > total: nonzeros=49, allocated nonzeros=49 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node routines > > > > Fande, > > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
> -- Norbert Wiener From bsmith at mcs.anl.gov Wed Mar 8 13:12:08 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 8 Mar 2017 13:12:08 -0600 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only In-Reply-To: References: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> Message-ID: > On Mar 8, 2017, at 10:47 AM, Kong, Fande wrote: > > Thanks Barry, > > We are using "KSPSetPCSide(ksp, pcside)" in the code. I just tried "-ksp_pc_side right", and petsc did not error out. > > I like to understand why CG does not work with right preconditioning? Mathematically, the right preconditioning does not make sense? No, mathematically it makes sense to do it on the right. It is just that the PETSc code was never written to support it on the right. One reason is that CG is interesting that you can run with the true residual or the preconditioned residual with left preconditioning, hence less incentive to ever bother writing it to support right preconditioning. For completeness we should support right as well as symmetric. Barry > > Fande, > > On Wed, Mar 8, 2017 at 9:33 AM, Barry Smith wrote: > > Please tell us how you got this output. > > PETSc CG doesn't even implement right preconditioning. If you ask for it it should error out. CG supports no norm computation with left preconditioning. > > Barry > > > On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: > > > > Hi All, > > > > The NONE norm type is supported only when CG is used with a right preconditioner. Any reason for this? > > > > > > > > 0 Nonlinear |R| = 1.732051e+00 > > 0 Linear |R| = 0.000000e+00 > > 1 Linear |R| = 0.000000e+00 > > 2 Linear |R| = 0.000000e+00 > > 3 Linear |R| = 0.000000e+00 > > 4 Linear |R| = 0.000000e+00 > > 5 Linear |R| = 0.000000e+00 > > 6 Linear |R| = 0.000000e+00 > > 1 Nonlinear |R| = 1.769225e-08 > > 0 Linear |R| = 0.000000e+00 > > 1 Linear |R| = 0.000000e+00 > > 2 Linear |R| = 0.000000e+00 > > 3 Linear |R| = 0.000000e+00 > > 4 Linear |R| = 0.000000e+00 > > 5 Linear |R| = 0.000000e+00 > > 6 Linear |R| = 0.000000e+00 > > 7 Linear |R| = 0.000000e+00 > > 8 Linear |R| = 0.000000e+00 > > 9 Linear |R| = 0.000000e+00 > > 10 Linear |R| = 0.000000e+00 > > 2 Nonlinear |R| = 0.000000e+00 > > SNES Object: 1 MPI processes > > type: newtonls > > maximum iterations=50, maximum function evaluations=10000 > > tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 > > total number of linear solver iterations=18 > > total number of function evaluations=23 > > norm schedule ALWAYS > > SNESLineSearch Object: 1 MPI processes > > type: bt > > interpolation: cubic > > alpha=1.000000e-04 > > maxstep=1.000000e+08, minlambda=1.000000e-12 > > tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 > > maximum iterations=40 > > KSP Object: 1 MPI processes > > type: cg > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > > right preconditioning > > using NONE norm type for convergence test > > PC Object: 1 MPI processes > > type: hypre > > HYPRE BoomerAMG preconditioning > > HYPRE BoomerAMG: Cycle type V > > HYPRE BoomerAMG: Maximum number of levels 25 > > HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 > > HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. > > HYPRE BoomerAMG: Threshold for strong coupling 0.25 > > HYPRE BoomerAMG: Interpolation truncation factor 0. 
> > HYPRE BoomerAMG: Interpolation: max elements per row 0 > > HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 > > HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 > > HYPRE BoomerAMG: Maximum row sums 0.9 > > HYPRE BoomerAMG: Sweeps down 1 > > HYPRE BoomerAMG: Sweeps up 1 > > HYPRE BoomerAMG: Sweeps on coarse 1 > > HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi > > HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi > > HYPRE BoomerAMG: Relax on coarse Gaussian-elimination > > HYPRE BoomerAMG: Relax weight (all) 1. > > HYPRE BoomerAMG: Outer relax weight (all) 1. > > HYPRE BoomerAMG: Using CF-relaxation > > HYPRE BoomerAMG: Not using more complex smoothers. > > HYPRE BoomerAMG: Measure type local > > HYPRE BoomerAMG: Coarsen type Falgout > > HYPRE BoomerAMG: Interpolation type classical > > linear system matrix followed by preconditioner matrix: > > Mat Object: 1 MPI processes > > type: mffd > > rows=9, cols=9 > > Matrix-free approximation: > > err=1.49012e-08 (relative error in function evaluation) > > Using wp compute h routine > > Does not compute normU > > Mat Object: () 1 MPI processes > > type: seqaij > > rows=9, cols=9 > > total: nonzeros=49, allocated nonzeros=49 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node routines > > > > Fande, > > > > From bsmith at mcs.anl.gov Wed Mar 8 16:12:16 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 8 Mar 2017 16:12:16 -0600 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only In-Reply-To: References: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> Message-ID: <2D3A900F-30E4-40C5-AF86-A3D124205A7E@mcs.anl.gov> Jed, This seems wrong. It is called in KSPCreate() and seems to imply all KSP methods support no-norm with right preconditioning (or left for that matter). But CG doesn't support right period and FGMES does not support left at all. Shouldn't those two lines be removed (and maybe they need to be added in the create for certain KSP methods). PetscErrorCode KSPNormSupportTableReset_Private(KSP ksp) { PetscErrorCode ierr; PetscFunctionBegin; ierr = PetscMemzero(ksp->normsupporttable,sizeof(ksp->normsupporttable));CHKERRQ(ierr); ierr = KSPSetSupportedNorm(ksp,KSP_NORM_NONE,PC_LEFT,1);CHKERRQ(ierr); ierr = KSPSetSupportedNorm(ksp,KSP_NORM_NONE,PC_RIGHT,1);CHKERRQ(ierr); ksp->pc_side = ksp->pc_side_set; ksp->normtype = ksp->normtype_set; PetscFunctionReturn(0); } > On Mar 8, 2017, at 11:19 AM, Matthew Knepley wrote: > > On Wed, Mar 8, 2017 at 10:47 AM, Kong, Fande wrote: > Thanks Barry, > > We are using "KSPSetPCSide(ksp, pcside)" in the code. I just tried "-ksp_pc_side right", and petsc did not error out. > > I like to understand why CG does not work with right preconditioning? Mathematically, the right preconditioning does not make sense? > > cd src/snes/examples/tutorials > knepley/feature-plasma-example $:/PETSc3/petsc/petsc-dev/src/snes/examples/tutorials$ ./ex5 -ksp_view -ksp_type cg -ksp_pc_side right -ksp_error_if_not_converged > KSP Object: 1 MPI processes > type: cg > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > right preconditioning > using NONE norm type for convergence test > PC Object: 1 MPI processes > type: ilu > out-of-place factorization > 0 levels of fill > tolerance for zero pivot 2.22045e-14 > matrix ordering: natural > factor fill ratio given 1., needed 1. 
> Factored matrix follows: > Mat Object: 1 MPI processes > type: seqaij > rows=16, cols=16 > package used to perform factorization: petsc > total: nonzeros=64, allocated nonzeros=64 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij > rows=16, cols=16 > total: nonzeros=64, allocated nonzeros=64 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: > [0]PETSC ERROR: KSPSolve has not converged > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Development GIT revision: v3.7.5-3127-ge9f6087 GIT Date: 2017-02-11 13:06:34 -0600 > [0]PETSC ERROR: ./ex5 on a arch-c-exodus-master named MATTHEW-KNEPLEYs-MacBook-Air-2.local by knepley Wed Mar 8 11:17:43 2017 > [0]PETSC ERROR: Configure options --PETSC_ARCH=arch-c-exodus-master --download-chaco --download-cmake --download-ctetgen --download-eigen --download-exodusii --download-gmp --download-hdf5 --download-metis --download-mpfr --download-mpich --download-netcdf --download-p4est --download-parmetis --download-pragmatic --download-triangle --useThreads=1 --with-cc="/Users/knepley/MacSoftware/bin/ccache gcc -Qunused-arguments" --with-cxx="/Users/knepley/MacSoftware/bin/ccache g++ -Qunused-arguments" --with-fc="/Users/knepley/MacSoftware/bin/ccache gfortran" --with-shared-libraries > [0]PETSC ERROR: #1 KSPSolve() line 847 in /PETSc3/petsc/petsc-dev/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #2 SNESSolve_NEWTONLS() line 224 in /PETSc3/petsc/petsc-dev/src/snes/impls/ls/ls.c > [0]PETSC ERROR: #3 SNESSolve() line 3967 in /PETSc3/petsc/petsc-dev/src/snes/interface/snes.c > [0]PETSC ERROR: #4 main() line 187 in /PETSc3/petsc/petsc-dev/src/snes/examples/tutorials/ex5.c > [0]PETSC ERROR: PETSc Option Table entries: > [0]PETSC ERROR: -ksp_error_if_not_converged > [0]PETSC ERROR: -ksp_pc_side right > [0]PETSC ERROR: -ksp_type cg > [0]PETSC ERROR: -ksp_view > [0]PETSC ERROR: -malloc_test > [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- > > So we are not getting an error > > Matt > > > Fande, > > On Wed, Mar 8, 2017 at 9:33 AM, Barry Smith wrote: > > Please tell us how you got this output. > > PETSc CG doesn't even implement right preconditioning. If you ask for it it should error out. CG supports no norm computation with left preconditioning. > > Barry > > > On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: > > > > Hi All, > > > > The NONE norm type is supported only when CG is used with a right preconditioner. Any reason for this? 
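Concretely, the alternative suggested above -- each implementation registering what it actually supports in its create routine -- might look something like the sketch below; the helper name, and the particular norms and priorities shown for CG, are illustrative rather than the definitive list:

#include <petsc/private/kspimpl.h>

/* Sketch only: declare the norm/side combinations a KSP implementation supports from
   inside its KSPCreate_XXX routine, instead of relying on the blanket defaults set
   in KSPCreate(). The norms and priorities listed here for CG are illustrative. */
static PetscErrorCode KSPDeclareNormSupport_CG(KSP ksp)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = KSPSetSupportedNorm(ksp,KSP_NORM_PRECONDITIONED,PC_LEFT,2);CHKERRQ(ierr);
  ierr = KSPSetSupportedNorm(ksp,KSP_NORM_UNPRECONDITIONED,PC_LEFT,2);CHKERRQ(ierr);
  ierr = KSPSetSupportedNorm(ksp,KSP_NORM_NATURAL,PC_LEFT,2);CHKERRQ(ierr);
  ierr = KSPSetSupportedNorm(ksp,KSP_NORM_NONE,PC_LEFT,1);CHKERRQ(ierr);
  /* nothing registered for PC_RIGHT, so requesting it can be rejected cleanly */
  PetscFunctionReturn(0);
}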
> > > > > > > > 0 Nonlinear |R| = 1.732051e+00 > > 0 Linear |R| = 0.000000e+00 > > 1 Linear |R| = 0.000000e+00 > > 2 Linear |R| = 0.000000e+00 > > 3 Linear |R| = 0.000000e+00 > > 4 Linear |R| = 0.000000e+00 > > 5 Linear |R| = 0.000000e+00 > > 6 Linear |R| = 0.000000e+00 > > 1 Nonlinear |R| = 1.769225e-08 > > 0 Linear |R| = 0.000000e+00 > > 1 Linear |R| = 0.000000e+00 > > 2 Linear |R| = 0.000000e+00 > > 3 Linear |R| = 0.000000e+00 > > 4 Linear |R| = 0.000000e+00 > > 5 Linear |R| = 0.000000e+00 > > 6 Linear |R| = 0.000000e+00 > > 7 Linear |R| = 0.000000e+00 > > 8 Linear |R| = 0.000000e+00 > > 9 Linear |R| = 0.000000e+00 > > 10 Linear |R| = 0.000000e+00 > > 2 Nonlinear |R| = 0.000000e+00 > > SNES Object: 1 MPI processes > > type: newtonls > > maximum iterations=50, maximum function evaluations=10000 > > tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 > > total number of linear solver iterations=18 > > total number of function evaluations=23 > > norm schedule ALWAYS > > SNESLineSearch Object: 1 MPI processes > > type: bt > > interpolation: cubic > > alpha=1.000000e-04 > > maxstep=1.000000e+08, minlambda=1.000000e-12 > > tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 > > maximum iterations=40 > > KSP Object: 1 MPI processes > > type: cg > > maximum iterations=10000, initial guess is zero > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > > right preconditioning > > using NONE norm type for convergence test > > PC Object: 1 MPI processes > > type: hypre > > HYPRE BoomerAMG preconditioning > > HYPRE BoomerAMG: Cycle type V > > HYPRE BoomerAMG: Maximum number of levels 25 > > HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 > > HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. > > HYPRE BoomerAMG: Threshold for strong coupling 0.25 > > HYPRE BoomerAMG: Interpolation truncation factor 0. > > HYPRE BoomerAMG: Interpolation: max elements per row 0 > > HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 > > HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 > > HYPRE BoomerAMG: Maximum row sums 0.9 > > HYPRE BoomerAMG: Sweeps down 1 > > HYPRE BoomerAMG: Sweeps up 1 > > HYPRE BoomerAMG: Sweeps on coarse 1 > > HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi > > HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi > > HYPRE BoomerAMG: Relax on coarse Gaussian-elimination > > HYPRE BoomerAMG: Relax weight (all) 1. > > HYPRE BoomerAMG: Outer relax weight (all) 1. > > HYPRE BoomerAMG: Using CF-relaxation > > HYPRE BoomerAMG: Not using more complex smoothers. > > HYPRE BoomerAMG: Measure type local > > HYPRE BoomerAMG: Coarsen type Falgout > > HYPRE BoomerAMG: Interpolation type classical > > linear system matrix followed by preconditioner matrix: > > Mat Object: 1 MPI processes > > type: mffd > > rows=9, cols=9 > > Matrix-free approximation: > > err=1.49012e-08 (relative error in function evaluation) > > Using wp compute h routine > > Does not compute normU > > Mat Object: () 1 MPI processes > > type: seqaij > > rows=9, cols=9 > > total: nonzeros=49, allocated nonzeros=49 > > total number of mallocs used during MatSetValues calls =0 > > not using I-node routines > > > > Fande, > > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
> -- Norbert Wiener From a.croucher at auckland.ac.nz Wed Mar 8 16:24:23 2017 From: a.croucher at auckland.ac.nz (Adrian Croucher) Date: Thu, 9 Mar 2017 11:24:23 +1300 Subject: [petsc-users] SNES for 1D problem? Message-ID: <308dc468-1b2c-533f-4800-37f6c08600e8@auckland.ac.nz> hi If part of my code needs a one-dimensional nonlinear solver (i.e. to solve for a single scalar variable), which for some problems will be called many times, does it still make sense to use SNES for that? or are there overheads from the vector machinery that make it less efficient for that case than rolling your own simple solver? - Adrian -- Dr Adrian Croucher Senior Research Fellow Department of Engineering Science University of Auckland, New Zealand email: a.croucher at auckland.ac.nz tel: +64 (0)9 923 4611 From bsmith at mcs.anl.gov Wed Mar 8 16:32:20 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 8 Mar 2017 16:32:20 -0600 Subject: [petsc-users] SNES for 1D problem? In-Reply-To: <308dc468-1b2c-533f-4800-37f6c08600e8@auckland.ac.nz> References: <308dc468-1b2c-533f-4800-37f6c08600e8@auckland.ac.nz> Message-ID: > On Mar 8, 2017, at 4:24 PM, Adrian Croucher wrote: > > hi > > If part of my code needs a one-dimensional nonlinear solver (i.e. to solve for a single scalar variable), which for some problems will be called many times, does it still make sense to use SNES for that? or are there overheads from the vector machinery that make it less efficient for that case than rolling your own simple solver? Should be ok if you are using the same SNES object. If you create the SNES object fresh for each one; no reason you would do this; that might be noticeable. Barry > > - Adrian > > -- > Dr Adrian Croucher > Senior Research Fellow > Department of Engineering Science > University of Auckland, New Zealand > email: a.croucher at auckland.ac.nz > tel: +64 (0)9 923 4611 > From bsmith at mcs.anl.gov Wed Mar 8 16:55:51 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 8 Mar 2017 16:55:51 -0600 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only In-Reply-To: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> References: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> Message-ID: <45638E32-8243-493E-9D92-5838DD9BE5EF@mcs.anl.gov> A proposed fix https://bitbucket.org/petsc/petsc/pull-requests/645/do-not-assume-that-all-ksp-methods-support Needs Jed's approval. Barry > On Mar 8, 2017, at 10:33 AM, Barry Smith wrote: > > > Please tell us how you got this output. > > PETSc CG doesn't even implement right preconditioning. If you ask for it it should error out. CG supports no norm computation with left preconditioning. > > Barry > >> On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: >> >> Hi All, >> >> The NONE norm type is supported only when CG is used with a right preconditioner. Any reason for this? 
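Returning to Adrian's scalar-solve question above, a minimal sketch of the reuse Barry describes -- one SNES created once and then solved many times -- might look like the following; the residual function, the parameter sweep, and the loop bounds are placeholders:

#include <petscsnes.h>

typedef struct { PetscReal param; } AppCtx;

/* Placeholder scalar residual: f(x) = x^2 - param */
static PetscErrorCode FormFunction1D(SNES snes,Vec x,Vec f,void *ctx)
{
  AppCtx            *user = (AppCtx*)ctx;
  const PetscScalar *xx;
  PetscScalar       *ff;
  PetscErrorCode    ierr;

  PetscFunctionBegin;
  ierr  = VecGetArrayRead(x,&xx);CHKERRQ(ierr);
  ierr  = VecGetArray(f,&ff);CHKERRQ(ierr);
  ff[0] = xx[0]*xx[0] - user->param;
  ierr  = VecRestoreArrayRead(x,&xx);CHKERRQ(ierr);
  ierr  = VecRestoreArray(f,&ff);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

int main(int argc,char **argv)
{
  SNES           snes;
  Vec            x,r;
  Mat            J;
  AppCtx         user;
  PetscInt       i;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
  ierr = VecCreateSeq(PETSC_COMM_SELF,1,&x);CHKERRQ(ierr);
  ierr = VecDuplicate(x,&r);CHKERRQ(ierr);
  ierr = MatCreateSeqDense(PETSC_COMM_SELF,1,1,NULL,&J);CHKERRQ(ierr);

  ierr = SNESCreate(PETSC_COMM_SELF,&snes);CHKERRQ(ierr);              /* created once */
  ierr = SNESSetFunction(snes,r,FormFunction1D,&user);CHKERRQ(ierr);
  ierr = SNESSetJacobian(snes,J,J,SNESComputeJacobianDefault,NULL);CHKERRQ(ierr);
  ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);

  for (i=0; i<1000; i++) {                                             /* solved many times */
    user.param = 1.0 + 0.001*i;
    ierr = VecSet(x,1.0);CHKERRQ(ierr);                                /* initial guess */
    ierr = SNESSolve(snes,NULL,x);CHKERRQ(ierr);
  }

  ierr = SNESDestroy(&snes);CHKERRQ(ierr);
  ierr = MatDestroy(&J);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&r);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Whether this beats a hand-rolled Newton iteration for a single scalar still depends on how tight the surrounding loop is, but at least the object-creation overhead is paid only once.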
>> >> >> >> 0 Nonlinear |R| = 1.732051e+00 >> 0 Linear |R| = 0.000000e+00 >> 1 Linear |R| = 0.000000e+00 >> 2 Linear |R| = 0.000000e+00 >> 3 Linear |R| = 0.000000e+00 >> 4 Linear |R| = 0.000000e+00 >> 5 Linear |R| = 0.000000e+00 >> 6 Linear |R| = 0.000000e+00 >> 1 Nonlinear |R| = 1.769225e-08 >> 0 Linear |R| = 0.000000e+00 >> 1 Linear |R| = 0.000000e+00 >> 2 Linear |R| = 0.000000e+00 >> 3 Linear |R| = 0.000000e+00 >> 4 Linear |R| = 0.000000e+00 >> 5 Linear |R| = 0.000000e+00 >> 6 Linear |R| = 0.000000e+00 >> 7 Linear |R| = 0.000000e+00 >> 8 Linear |R| = 0.000000e+00 >> 9 Linear |R| = 0.000000e+00 >> 10 Linear |R| = 0.000000e+00 >> 2 Nonlinear |R| = 0.000000e+00 >> SNES Object: 1 MPI processes >> type: newtonls >> maximum iterations=50, maximum function evaluations=10000 >> tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 >> total number of linear solver iterations=18 >> total number of function evaluations=23 >> norm schedule ALWAYS >> SNESLineSearch Object: 1 MPI processes >> type: bt >> interpolation: cubic >> alpha=1.000000e-04 >> maxstep=1.000000e+08, minlambda=1.000000e-12 >> tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 >> maximum iterations=40 >> KSP Object: 1 MPI processes >> type: cg >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >> right preconditioning >> using NONE norm type for convergence test >> PC Object: 1 MPI processes >> type: hypre >> HYPRE BoomerAMG preconditioning >> HYPRE BoomerAMG: Cycle type V >> HYPRE BoomerAMG: Maximum number of levels 25 >> HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 >> HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. >> HYPRE BoomerAMG: Threshold for strong coupling 0.25 >> HYPRE BoomerAMG: Interpolation truncation factor 0. >> HYPRE BoomerAMG: Interpolation: max elements per row 0 >> HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 >> HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 >> HYPRE BoomerAMG: Maximum row sums 0.9 >> HYPRE BoomerAMG: Sweeps down 1 >> HYPRE BoomerAMG: Sweeps up 1 >> HYPRE BoomerAMG: Sweeps on coarse 1 >> HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi >> HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi >> HYPRE BoomerAMG: Relax on coarse Gaussian-elimination >> HYPRE BoomerAMG: Relax weight (all) 1. >> HYPRE BoomerAMG: Outer relax weight (all) 1. >> HYPRE BoomerAMG: Using CF-relaxation >> HYPRE BoomerAMG: Not using more complex smoothers. >> HYPRE BoomerAMG: Measure type local >> HYPRE BoomerAMG: Coarsen type Falgout >> HYPRE BoomerAMG: Interpolation type classical >> linear system matrix followed by preconditioner matrix: >> Mat Object: 1 MPI processes >> type: mffd >> rows=9, cols=9 >> Matrix-free approximation: >> err=1.49012e-08 (relative error in function evaluation) >> Using wp compute h routine >> Does not compute normU >> Mat Object: () 1 MPI processes >> type: seqaij >> rows=9, cols=9 >> total: nonzeros=49, allocated nonzeros=49 >> total number of mallocs used during MatSetValues calls =0 >> not using I-node routines >> >> Fande, >> > From fande.kong at inl.gov Wed Mar 8 17:02:49 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Wed, 8 Mar 2017 16:02:49 -0700 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only In-Reply-To: <45638E32-8243-493E-9D92-5838DD9BE5EF@mcs.anl.gov> References: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> <45638E32-8243-493E-9D92-5838DD9BE5EF@mcs.anl.gov> Message-ID: Thanks, Barry. 
Fande, On Wed, Mar 8, 2017 at 3:55 PM, Barry Smith wrote: > > A proposed fix https://urldefense.proofpoint.com/v2/url?u=https-3A__ > bitbucket.org_petsc_petsc_pull-2Drequests_645_do-2Dnot- > 2Dassume-2Dthat-2Dall-2Dksp-2Dmethods-2Dsupport&d=DQIFAg&c= > 54IZrppPQZKX9mLzcGdPfFD1hxrcB__aEkJFOKJFd00&r=DUUt3SRGI0_ > JgtNaS3udV68GRkgV4ts7XKfj2opmiCY&m=RbF_pG6G05IcrxiELCCV36C6Cb_ > GqQZ7H84RH1hRQik&s=p1nuatzGn2KrF98argO7-qTt4U64Rzny3KoN-IJLOv4&e= > > Needs Jed's approval. > > Barry > > > > > On Mar 8, 2017, at 10:33 AM, Barry Smith wrote: > > > > > > Please tell us how you got this output. > > > > PETSc CG doesn't even implement right preconditioning. If you ask for > it it should error out. CG supports no norm computation with left > preconditioning. > > > > Barry > > > >> On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: > >> > >> Hi All, > >> > >> The NONE norm type is supported only when CG is used with a right > preconditioner. Any reason for this? > >> > >> > >> > >> 0 Nonlinear |R| = 1.732051e+00 > >> 0 Linear |R| = 0.000000e+00 > >> 1 Linear |R| = 0.000000e+00 > >> 2 Linear |R| = 0.000000e+00 > >> 3 Linear |R| = 0.000000e+00 > >> 4 Linear |R| = 0.000000e+00 > >> 5 Linear |R| = 0.000000e+00 > >> 6 Linear |R| = 0.000000e+00 > >> 1 Nonlinear |R| = 1.769225e-08 > >> 0 Linear |R| = 0.000000e+00 > >> 1 Linear |R| = 0.000000e+00 > >> 2 Linear |R| = 0.000000e+00 > >> 3 Linear |R| = 0.000000e+00 > >> 4 Linear |R| = 0.000000e+00 > >> 5 Linear |R| = 0.000000e+00 > >> 6 Linear |R| = 0.000000e+00 > >> 7 Linear |R| = 0.000000e+00 > >> 8 Linear |R| = 0.000000e+00 > >> 9 Linear |R| = 0.000000e+00 > >> 10 Linear |R| = 0.000000e+00 > >> 2 Nonlinear |R| = 0.000000e+00 > >> SNES Object: 1 MPI processes > >> type: newtonls > >> maximum iterations=50, maximum function evaluations=10000 > >> tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 > >> total number of linear solver iterations=18 > >> total number of function evaluations=23 > >> norm schedule ALWAYS > >> SNESLineSearch Object: 1 MPI processes > >> type: bt > >> interpolation: cubic > >> alpha=1.000000e-04 > >> maxstep=1.000000e+08, minlambda=1.000000e-12 > >> tolerances: relative=1.000000e-08, absolute=1.000000e-15, > lambda=1.000000e-08 > >> maximum iterations=40 > >> KSP Object: 1 MPI processes > >> type: cg > >> maximum iterations=10000, initial guess is zero > >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > >> right preconditioning > >> using NONE norm type for convergence test > >> PC Object: 1 MPI processes > >> type: hypre > >> HYPRE BoomerAMG preconditioning > >> HYPRE BoomerAMG: Cycle type V > >> HYPRE BoomerAMG: Maximum number of levels 25 > >> HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 > >> HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. > >> HYPRE BoomerAMG: Threshold for strong coupling 0.25 > >> HYPRE BoomerAMG: Interpolation truncation factor 0. > >> HYPRE BoomerAMG: Interpolation: max elements per row 0 > >> HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 > >> HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 > >> HYPRE BoomerAMG: Maximum row sums 0.9 > >> HYPRE BoomerAMG: Sweeps down 1 > >> HYPRE BoomerAMG: Sweeps up 1 > >> HYPRE BoomerAMG: Sweeps on coarse 1 > >> HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi > >> HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi > >> HYPRE BoomerAMG: Relax on coarse Gaussian-elimination > >> HYPRE BoomerAMG: Relax weight (all) 1. > >> HYPRE BoomerAMG: Outer relax weight (all) 1. 
> >> HYPRE BoomerAMG: Using CF-relaxation > >> HYPRE BoomerAMG: Not using more complex smoothers. > >> HYPRE BoomerAMG: Measure type local > >> HYPRE BoomerAMG: Coarsen type Falgout > >> HYPRE BoomerAMG: Interpolation type classical > >> linear system matrix followed by preconditioner matrix: > >> Mat Object: 1 MPI processes > >> type: mffd > >> rows=9, cols=9 > >> Matrix-free approximation: > >> err=1.49012e-08 (relative error in function evaluation) > >> Using wp compute h routine > >> Does not compute normU > >> Mat Object: () 1 MPI processes > >> type: seqaij > >> rows=9, cols=9 > >> total: nonzeros=49, allocated nonzeros=49 > >> total number of mallocs used during MatSetValues calls =0 > >> not using I-node routines > >> > >> Fande, > >> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Mar 8 18:12:36 2017 From: jed at jedbrown.org (Jed Brown) Date: Wed, 08 Mar 2017 17:12:36 -0700 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only In-Reply-To: <45638E32-8243-493E-9D92-5838DD9BE5EF@mcs.anl.gov> References: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> <45638E32-8243-493E-9D92-5838DD9BE5EF@mcs.anl.gov> Message-ID: <87shmn5mvf.fsf@jedbrown.org> Looks fine to me. Thanks. Barry Smith writes: > A proposed fix https://bitbucket.org/petsc/petsc/pull-requests/645/do-not-assume-that-all-ksp-methods-support > > Needs Jed's approval. > > Barry > > > >> On Mar 8, 2017, at 10:33 AM, Barry Smith wrote: >> >> >> Please tell us how you got this output. >> >> PETSc CG doesn't even implement right preconditioning. If you ask for it it should error out. CG supports no norm computation with left preconditioning. >> >> Barry >> >>> On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: >>> >>> Hi All, >>> >>> The NONE norm type is supported only when CG is used with a right preconditioner. Any reason for this? >>> >>> >>> >>> 0 Nonlinear |R| = 1.732051e+00 >>> 0 Linear |R| = 0.000000e+00 >>> 1 Linear |R| = 0.000000e+00 >>> 2 Linear |R| = 0.000000e+00 >>> 3 Linear |R| = 0.000000e+00 >>> 4 Linear |R| = 0.000000e+00 >>> 5 Linear |R| = 0.000000e+00 >>> 6 Linear |R| = 0.000000e+00 >>> 1 Nonlinear |R| = 1.769225e-08 >>> 0 Linear |R| = 0.000000e+00 >>> 1 Linear |R| = 0.000000e+00 >>> 2 Linear |R| = 0.000000e+00 >>> 3 Linear |R| = 0.000000e+00 >>> 4 Linear |R| = 0.000000e+00 >>> 5 Linear |R| = 0.000000e+00 >>> 6 Linear |R| = 0.000000e+00 >>> 7 Linear |R| = 0.000000e+00 >>> 8 Linear |R| = 0.000000e+00 >>> 9 Linear |R| = 0.000000e+00 >>> 10 Linear |R| = 0.000000e+00 >>> 2 Nonlinear |R| = 0.000000e+00 >>> SNES Object: 1 MPI processes >>> type: newtonls >>> maximum iterations=50, maximum function evaluations=10000 >>> tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 >>> total number of linear solver iterations=18 >>> total number of function evaluations=23 >>> norm schedule ALWAYS >>> SNESLineSearch Object: 1 MPI processes >>> type: bt >>> interpolation: cubic >>> alpha=1.000000e-04 >>> maxstep=1.000000e+08, minlambda=1.000000e-12 >>> tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 >>> maximum iterations=40 >>> KSP Object: 1 MPI processes >>> type: cg >>> maximum iterations=10000, initial guess is zero >>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
>>> right preconditioning >>> using NONE norm type for convergence test >>> PC Object: 1 MPI processes >>> type: hypre >>> HYPRE BoomerAMG preconditioning >>> HYPRE BoomerAMG: Cycle type V >>> HYPRE BoomerAMG: Maximum number of levels 25 >>> HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 >>> HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. >>> HYPRE BoomerAMG: Threshold for strong coupling 0.25 >>> HYPRE BoomerAMG: Interpolation truncation factor 0. >>> HYPRE BoomerAMG: Interpolation: max elements per row 0 >>> HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 >>> HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 >>> HYPRE BoomerAMG: Maximum row sums 0.9 >>> HYPRE BoomerAMG: Sweeps down 1 >>> HYPRE BoomerAMG: Sweeps up 1 >>> HYPRE BoomerAMG: Sweeps on coarse 1 >>> HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi >>> HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi >>> HYPRE BoomerAMG: Relax on coarse Gaussian-elimination >>> HYPRE BoomerAMG: Relax weight (all) 1. >>> HYPRE BoomerAMG: Outer relax weight (all) 1. >>> HYPRE BoomerAMG: Using CF-relaxation >>> HYPRE BoomerAMG: Not using more complex smoothers. >>> HYPRE BoomerAMG: Measure type local >>> HYPRE BoomerAMG: Coarsen type Falgout >>> HYPRE BoomerAMG: Interpolation type classical >>> linear system matrix followed by preconditioner matrix: >>> Mat Object: 1 MPI processes >>> type: mffd >>> rows=9, cols=9 >>> Matrix-free approximation: >>> err=1.49012e-08 (relative error in function evaluation) >>> Using wp compute h routine >>> Does not compute normU >>> Mat Object: () 1 MPI processes >>> type: seqaij >>> rows=9, cols=9 >>> total: nonzeros=49, allocated nonzeros=49 >>> total number of mallocs used during MatSetValues calls =0 >>> not using I-node routines >>> >>> Fande, >>> >> -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From patrick.sanan at gmail.com Wed Mar 8 19:37:26 2017 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Wed, 8 Mar 2017 17:37:26 -0800 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only In-Reply-To: References: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> Message-ID: On Wed, Mar 8, 2017 at 11:12 AM, Barry Smith wrote: > >> On Mar 8, 2017, at 10:47 AM, Kong, Fande wrote: >> >> Thanks Barry, >> >> We are using "KSPSetPCSide(ksp, pcside)" in the code. I just tried "-ksp_pc_side right", and petsc did not error out. >> >> I like to understand why CG does not work with right preconditioning? Mathematically, the right preconditioning does not make sense? > > No, mathematically it makes sense to do it on the right. It is just that the PETSc code was never written to support it on the right. One reason is that CG is interesting that you can run with the true residual or the preconditioned residual with left preconditioning, hence less incentive to ever bother writing it to support right preconditioning. For completeness we should support right as well as symmetric. For standard CG preconditioning, which PETSc calls left preconditioning, you use a s.p.d. preconditioner M to define an inner product in the algorithm, and end up finding iterates x_k in K_k(MA; Mb). That isn't quite the same as left-preconditioned GMRES, where you apply standard GMRES to the equivalent system MAx=Mb, and also end up finding iterates in K_k(MA,Mb). This wouldn't work for CG because MA isn't s.p.d. in general, even if M and A are. 
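(For concreteness, a sketch of the textbook preconditioned CG recurrence in
standard notation -- M s.p.d. and applied only to residuals; this is meant as
an illustration of the point above, not a transcription of PETSc's KSPCG
source:)

\begin{align*}
& r_0 = b - A x_0, \qquad z_0 = M r_0, \qquad p_0 = z_0 \\
& \text{for } k = 0, 1, 2, \ldots: \\
& \qquad \alpha_k = (r_k, z_k) / (p_k, A p_k) \\
& \qquad x_{k+1} = x_k + \alpha_k p_k \\
& \qquad r_{k+1} = r_k - \alpha_k A p_k \\
& \qquad z_{k+1} = M r_{k+1} \\
& \qquad \beta_k = (r_{k+1}, z_{k+1}) / (r_k, z_k) \\
& \qquad p_{k+1} = z_{k+1} + \beta_k p_k
\end{align*}

Note that M enters only through z_k = M r_k and the scalars (r_k, z_k), and
the iterates lie in x_0 + K_k(MA; M r_0) while the method still minimizes
|| x - x_k ||_A over that space -- i.e. the preconditioner changes the
subspace (via the inner product) rather than being composed with A the way a
left- or right-preconditioned GMRES would do.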
Standard CG preconditioning is often motivated as a clever way to do symmetric preconditioning, E^TAEy = E^Tb, x=Ey, without ever needing E explicitly, using only M=EE^T . y_k lives in K_k(E^TAE,E^Tb) and thus x_k again lives in K_k(MA;Mb). Thus, it's not clear that there is an candidate for a right-preconditioned CG variant, as what PETSc calls "left" preconditioning doesn't arise in the same way that it does for other Krylov methods, namely using the standard algorithm on MAx=Mb. For GMRES you would get a right-preconditioned variant by looking at the transformed system AMy=b, x = My. This means that y_k lives in K_k(AM,b), so x lives in K_k(MA,Mb), as before. For CG, AM wouldn't be spd in general so this approach wouldn't make sense. Another way to look at the difference in "left" preconditioning between GMRES and CG is that - introducing left preconditioning for GMRES alters both the Krylov subspaces *and* the optimality condition: you go from minimizing || b - Ax_k ||_2 over K_k(A;b) to minimizing || M (b-Ax_k) ||_2 over K_k(MA;Mb). - introducing "left" preconditioning for CG alters *only* the Krylov subspaces: you always minimize || x - x_k ||_A , but change the space from K_k(A;b) to K_k(MA;Mb). Thus, my impression is that it's misleading to call standard CG preconditioning "left" preconditioning in PETSc - someone might think of GMRES and naturally ask why there is no right preconditioning. One might define a new entry in PCSide to be used with CG and friends. I can't think of any slam dunk suggestions yet, but something in the genre of PC_INNERPRODUCT, PC_METRIC, PC_CG, or PC_IMPLICITSYMMETRIC, perhaps. > > Barry > >> >> Fande, >> >> On Wed, Mar 8, 2017 at 9:33 AM, Barry Smith wrote: >> >> Please tell us how you got this output. >> >> PETSc CG doesn't even implement right preconditioning. If you ask for it it should error out. CG supports no norm computation with left preconditioning. >> >> Barry >> >> > On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: >> > >> > Hi All, >> > >> > The NONE norm type is supported only when CG is used with a right preconditioner. Any reason for this? 
>> > >> > >> > >> > 0 Nonlinear |R| = 1.732051e+00 >> > 0 Linear |R| = 0.000000e+00 >> > 1 Linear |R| = 0.000000e+00 >> > 2 Linear |R| = 0.000000e+00 >> > 3 Linear |R| = 0.000000e+00 >> > 4 Linear |R| = 0.000000e+00 >> > 5 Linear |R| = 0.000000e+00 >> > 6 Linear |R| = 0.000000e+00 >> > 1 Nonlinear |R| = 1.769225e-08 >> > 0 Linear |R| = 0.000000e+00 >> > 1 Linear |R| = 0.000000e+00 >> > 2 Linear |R| = 0.000000e+00 >> > 3 Linear |R| = 0.000000e+00 >> > 4 Linear |R| = 0.000000e+00 >> > 5 Linear |R| = 0.000000e+00 >> > 6 Linear |R| = 0.000000e+00 >> > 7 Linear |R| = 0.000000e+00 >> > 8 Linear |R| = 0.000000e+00 >> > 9 Linear |R| = 0.000000e+00 >> > 10 Linear |R| = 0.000000e+00 >> > 2 Nonlinear |R| = 0.000000e+00 >> > SNES Object: 1 MPI processes >> > type: newtonls >> > maximum iterations=50, maximum function evaluations=10000 >> > tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 >> > total number of linear solver iterations=18 >> > total number of function evaluations=23 >> > norm schedule ALWAYS >> > SNESLineSearch Object: 1 MPI processes >> > type: bt >> > interpolation: cubic >> > alpha=1.000000e-04 >> > maxstep=1.000000e+08, minlambda=1.000000e-12 >> > tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 >> > maximum iterations=40 >> > KSP Object: 1 MPI processes >> > type: cg >> > maximum iterations=10000, initial guess is zero >> > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >> > right preconditioning >> > using NONE norm type for convergence test >> > PC Object: 1 MPI processes >> > type: hypre >> > HYPRE BoomerAMG preconditioning >> > HYPRE BoomerAMG: Cycle type V >> > HYPRE BoomerAMG: Maximum number of levels 25 >> > HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 >> > HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. >> > HYPRE BoomerAMG: Threshold for strong coupling 0.25 >> > HYPRE BoomerAMG: Interpolation truncation factor 0. >> > HYPRE BoomerAMG: Interpolation: max elements per row 0 >> > HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 >> > HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 >> > HYPRE BoomerAMG: Maximum row sums 0.9 >> > HYPRE BoomerAMG: Sweeps down 1 >> > HYPRE BoomerAMG: Sweeps up 1 >> > HYPRE BoomerAMG: Sweeps on coarse 1 >> > HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi >> > HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi >> > HYPRE BoomerAMG: Relax on coarse Gaussian-elimination >> > HYPRE BoomerAMG: Relax weight (all) 1. >> > HYPRE BoomerAMG: Outer relax weight (all) 1. >> > HYPRE BoomerAMG: Using CF-relaxation >> > HYPRE BoomerAMG: Not using more complex smoothers. 
>> > HYPRE BoomerAMG: Measure type local >> > HYPRE BoomerAMG: Coarsen type Falgout >> > HYPRE BoomerAMG: Interpolation type classical >> > linear system matrix followed by preconditioner matrix: >> > Mat Object: 1 MPI processes >> > type: mffd >> > rows=9, cols=9 >> > Matrix-free approximation: >> > err=1.49012e-08 (relative error in function evaluation) >> > Using wp compute h routine >> > Does not compute normU >> > Mat Object: () 1 MPI processes >> > type: seqaij >> > rows=9, cols=9 >> > total: nonzeros=49, allocated nonzeros=49 >> > total number of mallocs used during MatSetValues calls =0 >> > not using I-node routines >> > >> > Fande, >> > >> >> > From bsmith at mcs.anl.gov Wed Mar 8 20:57:22 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 8 Mar 2017 20:57:22 -0600 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only In-Reply-To: References: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> Message-ID: <6475B307-B5BC-493D-BF9F-FDA19C2BB932@mcs.anl.gov> Patrick, Thanks, this is interesting, we should try to add material to the KSPCG page to capture some of the subtleties that I admit after 28 years I still don't really understand. The paper A TAXONOMY FOR CONJUGATE GRADIENT METHODS (attached) has some interesting discussion (particularly page 1548 "Therefore, only left preconditioning need be considered: Right preconditioning may be effected by incorporating it into the left preconditioner and inner product." I don't know exactly what this means in practical terms in respect to code. (In PETSc KSP we don't explicitly talk about, or use a separate "inner product" in the Krylov methods, we only have the concept of operator and preconditioner operator.) Remembering vaguely, perhaps incorrectly, from a real long time ago "left preconditioning with a spd M is just the unpreconditioned cg in the M inner product" while "right preconditioning with M is unpreconditioned cg in a M^-1 inner product". If this is correct then it implies (I think) that right preconditioned CG would produce a different set of iterates than left preconditioning and hence is "missing" in terms of completeness from PETSc KSP. Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: 0727091.pdf Type: application/pdf Size: 3382522 bytes Desc: not available URL: -------------- next part -------------- > On Mar 8, 2017, at 7:37 PM, Patrick Sanan wrote: > > On Wed, Mar 8, 2017 at 11:12 AM, Barry Smith wrote: >> >>> On Mar 8, 2017, at 10:47 AM, Kong, Fande wrote: >>> >>> Thanks Barry, >>> >>> We are using "KSPSetPCSide(ksp, pcside)" in the code. I just tried "-ksp_pc_side right", and petsc did not error out. >>> >>> I like to understand why CG does not work with right preconditioning? Mathematically, the right preconditioning does not make sense? >> >> No, mathematically it makes sense to do it on the right. It is just that the PETSc code was never written to support it on the right. One reason is that CG is interesting that you can run with the true residual or the preconditioned residual with left preconditioning, hence less incentive to ever bother writing it to support right preconditioning. For completeness we should support right as well as symmetric. > > For standard CG preconditioning, which PETSc calls left > preconditioning, you use a s.p.d. preconditioner M to define an inner > product in the algorithm, and end up finding iterates x_k in K_k(MA; > Mb). 
That isn't quite the same as left-preconditioned GMRES, where you > apply standard GMRES to the equivalent system MAx=Mb, and also end up > finding iterates in K_k(MA,Mb). This wouldn't work for CG because MA > isn't s.p.d. in general, even if M and A are. > > Standard CG preconditioning is often motivated as a clever way to do > symmetric preconditioning, E^TAEy = E^Tb, x=Ey, without ever needing E > explicitly, using only M=EE^T . y_k lives in K_k(E^TAE,E^Tb) and thus > x_k again lives in K_k(MA;Mb). > > Thus, it's not clear that there is an candidate for a > right-preconditioned CG variant, as what PETSc calls "left" > preconditioning doesn't arise in the same way that it does for other > Krylov methods, namely using the standard algorithm on MAx=Mb. For > GMRES you would get a right-preconditioned variant by looking at the > transformed system AMy=b, x = My. This means that y_k lives in > K_k(AM,b), so x lives in K_k(MA,Mb), as before. For CG, AM wouldn't be > spd in general so this approach wouldn't make sense. > > Another way to look at the difference in "left" preconditioning > between GMRES and CG is that > > - introducing left preconditioning for GMRES alters both the Krylov > subspaces *and* the optimality condition: you go from minimizing || b > - Ax_k ||_2 over K_k(A;b) to minimizing || M (b-Ax_k) ||_2 over > K_k(MA;Mb). > > - introducing "left" preconditioning for CG alters *only* the Krylov > subspaces: you always minimize || x - x_k ||_A , but change the space > from K_k(A;b) to K_k(MA;Mb). > > Thus, my impression is that it's misleading to call standard CG > preconditioning "left" preconditioning in PETSc - someone might think > of GMRES and naturally ask why there is no right preconditioning. > > One might define a new entry in PCSide to be used with CG and friends. > I can't think of any slam dunk suggestions yet, but something in the > genre of PC_INNERPRODUCT, PC_METRIC, PC_CG, or PC_IMPLICITSYMMETRIC, > perhaps. > > >> >> Barry >> >>> >>> Fande, >>> >>> On Wed, Mar 8, 2017 at 9:33 AM, Barry Smith wrote: >>> >>> Please tell us how you got this output. >>> >>> PETSc CG doesn't even implement right preconditioning. If you ask for it it should error out. CG supports no norm computation with left preconditioning. >>> >>> Barry >>> >>>> On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: >>>> >>>> Hi All, >>>> >>>> The NONE norm type is supported only when CG is used with a right preconditioner. Any reason for this? 
>>>> >>>> >>>> >>>> 0 Nonlinear |R| = 1.732051e+00 >>>> 0 Linear |R| = 0.000000e+00 >>>> 1 Linear |R| = 0.000000e+00 >>>> 2 Linear |R| = 0.000000e+00 >>>> 3 Linear |R| = 0.000000e+00 >>>> 4 Linear |R| = 0.000000e+00 >>>> 5 Linear |R| = 0.000000e+00 >>>> 6 Linear |R| = 0.000000e+00 >>>> 1 Nonlinear |R| = 1.769225e-08 >>>> 0 Linear |R| = 0.000000e+00 >>>> 1 Linear |R| = 0.000000e+00 >>>> 2 Linear |R| = 0.000000e+00 >>>> 3 Linear |R| = 0.000000e+00 >>>> 4 Linear |R| = 0.000000e+00 >>>> 5 Linear |R| = 0.000000e+00 >>>> 6 Linear |R| = 0.000000e+00 >>>> 7 Linear |R| = 0.000000e+00 >>>> 8 Linear |R| = 0.000000e+00 >>>> 9 Linear |R| = 0.000000e+00 >>>> 10 Linear |R| = 0.000000e+00 >>>> 2 Nonlinear |R| = 0.000000e+00 >>>> SNES Object: 1 MPI processes >>>> type: newtonls >>>> maximum iterations=50, maximum function evaluations=10000 >>>> tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 >>>> total number of linear solver iterations=18 >>>> total number of function evaluations=23 >>>> norm schedule ALWAYS >>>> SNESLineSearch Object: 1 MPI processes >>>> type: bt >>>> interpolation: cubic >>>> alpha=1.000000e-04 >>>> maxstep=1.000000e+08, minlambda=1.000000e-12 >>>> tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 >>>> maximum iterations=40 >>>> KSP Object: 1 MPI processes >>>> type: cg >>>> maximum iterations=10000, initial guess is zero >>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >>>> right preconditioning >>>> using NONE norm type for convergence test >>>> PC Object: 1 MPI processes >>>> type: hypre >>>> HYPRE BoomerAMG preconditioning >>>> HYPRE BoomerAMG: Cycle type V >>>> HYPRE BoomerAMG: Maximum number of levels 25 >>>> HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 >>>> HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. >>>> HYPRE BoomerAMG: Threshold for strong coupling 0.25 >>>> HYPRE BoomerAMG: Interpolation truncation factor 0. >>>> HYPRE BoomerAMG: Interpolation: max elements per row 0 >>>> HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 >>>> HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 >>>> HYPRE BoomerAMG: Maximum row sums 0.9 >>>> HYPRE BoomerAMG: Sweeps down 1 >>>> HYPRE BoomerAMG: Sweeps up 1 >>>> HYPRE BoomerAMG: Sweeps on coarse 1 >>>> HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi >>>> HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi >>>> HYPRE BoomerAMG: Relax on coarse Gaussian-elimination >>>> HYPRE BoomerAMG: Relax weight (all) 1. >>>> HYPRE BoomerAMG: Outer relax weight (all) 1. >>>> HYPRE BoomerAMG: Using CF-relaxation >>>> HYPRE BoomerAMG: Not using more complex smoothers. 
>>>> HYPRE BoomerAMG: Measure type local >>>> HYPRE BoomerAMG: Coarsen type Falgout >>>> HYPRE BoomerAMG: Interpolation type classical >>>> linear system matrix followed by preconditioner matrix: >>>> Mat Object: 1 MPI processes >>>> type: mffd >>>> rows=9, cols=9 >>>> Matrix-free approximation: >>>> err=1.49012e-08 (relative error in function evaluation) >>>> Using wp compute h routine >>>> Does not compute normU >>>> Mat Object: () 1 MPI processes >>>> type: seqaij >>>> rows=9, cols=9 >>>> total: nonzeros=49, allocated nonzeros=49 >>>> total number of mallocs used during MatSetValues calls =0 >>>> not using I-node routines >>>> >>>> Fande, From C.Klaij at marin.nl Thu Mar 9 01:49:57 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Thu, 9 Mar 2017 07:49:57 +0000 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only Message-ID: <1489045797753.85105@marin.nl> Barry, I came across the same problem and decided to use KSPSetNormType instead of KSPSetPCSide. Do I understand correctly that CG with KSP_NORM_UNPRECONDITIONED would be as efficient as with KSP_NORM_PRECONDITIONED? Since PC_RIGHT is not supported, I was under the impression that the former would basically be the latter with an additional true residual evaluation for the convergence monitor, which would be less efficient. Chris > On Mar 8, 2017, at 10:47 AM, Kong, Fande wrote: > > Thanks Barry, > > We are using "KSPSetPCSide(ksp, pcside)" in the code. I just tried "-ksp_pc_side right", and petsc did not error out. > > I like to understand why CG does not work with right preconditioning? Mathematically, the right preconditioning does not make sense? No, mathematically it makes sense to do it on the right. It is just that the PETSc code was never written to support it on the right. One reason is that CG is interesting that you can run with the true residual or the preconditioned residual with left preconditioning, hence less incentive to ever bother writing it to support right preconditioning. For completeness we should support right as well as symmetric. Barry dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl MARIN news: http://www.marin.nl/web/News/News-items/Your-future-is-Blue-Blueweek-April-1013.htm From imilian.hartig at gmail.com Thu Mar 9 07:45:58 2017 From: imilian.hartig at gmail.com (Maximilian Hartig) Date: Thu, 9 Mar 2017 14:45:58 +0100 Subject: [petsc-users] Problems imposing boundary conditions In-Reply-To: References: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com> <69E95D27-72CD-4BFA-AC62-907727998DCB@gmail.com> <3383F285-34C3-48A2-BF7C-56ED80FDF236@gmail.com> Message-ID: Ok thank you, so can I just do something along the lines of: PetscSectionGetFieldConstraintDof(?) to find the overconstrained vertices and then correct them manually with PetscSectionSetFieldConstraintDof() PetscSectionSetFieldConstraintIndices() ? Or will this mess up the Jacobian and the iFunction? Thanks, Max > On 7 Mar 2017, at 18:21, Matthew Knepley wrote: > > On Tue, Mar 7, 2017 at 11:11 AM, Maximilian Hartig > wrote: > >> On 7 Mar 2017, at 16:29, Matthew Knepley > wrote: >> >> On Tue, Mar 7, 2017 at 3:28 AM, Maximilian Hartig > wrote: >> It seems you are correct. In theory, the problem should not be over constrained. It is 1/4 of a simple hollow cylinder geometry with rotational symmetry around the z-axis. 
I restrict movement completely on the upper and lower (z) end as well as movement in x- and y- direction respectively on the symmetry planes. >> I am not completely sure what I am looking at with the output of -dm_petscsection_view. But these lines struck me as odd: >> >> >> (5167) dim 3 offset 0 constrained 0 1 1 2 >> (5168) dim 3 offset 6 constrained 0 1 1 2 >> . >> . >> . >> (5262) dim 3 offset 0 constrained 0 0 1 2 >> (5263) dim 3 offset 6 constrained 0 0 1 2 >> >> >> It seems that vertices that are part of the closures of both Face Sets get restricted twice in their respective degree of freedom. >> >> Yes, that is exactly what happens. >> >> This does however also happen when restricting movement in x- direction only for upper and lower faces. In that case without the solver producing an error: >> (20770) dim 3 offset 24 constrained 0 0 >> (20771) dim 3 offset 30 constrained 0 0 >> (20772) dim 3 offset 36 constrained 0 0 >> (20773) dim 3 offset 42 constrained 0 0 >> >> The fact that this does not SEGV is just luck. >> >> Now, I did not put in any guard against this because I was not sure what should happen. We could error if a local index is repeated, or >> we could ignore it. This seems unsafe if you try to constrain it with two different values, but there is no way for me to tell if the values are >> compatible. Thus I just believed whatever the user told me. >> >> What is the intention here? It would be straightforward to ignore duplicates I guess. > > Yes, ignoring duplicates would solve the problem then. I can think of no example where imposing two different Dirichlet BC on the same DOF of the same vertex would make sense (I might be wrong of course). That means the only issue is to determine wether the first or the second BC is the correct one to be imposed. > I don?t know how I could filter out the vertices in question from the Label. I use GMSH to construct my meshes and could create a label for the edges without too much effort. But I cannot see an easy way to exclude them when imposing the BC. > I tried to figure out where PETSC actually imposes the BC but got lost a bit in the source. Could you kindly point me towards the location? > > It is in stages. > > 1) You make a structure with AddBoundary() that has a Label and function for boundary values > > 2) The PetscSection gets created with stores which points have constraints and which components they affect > > 3) When global Vecs are made, these constraints are left out > > 4) When local Vecs are made, they are left in > > 5) DMPlexInsertBoundaryValues() is called on local Vecs, and puts in the values from your functions. This usually happens > when you copy the solutions values from the global Vec to a local Vec to being assembly. > > Thanks, > > Matt > > Thanks, > Max > > >> >> Thanks, >> >> Matt >> >> Thanks, >> Max >> >>> On 6 Mar 2017, at 14:43, Matthew Knepley > wrote: >>> >>> On Mon, Mar 6, 2017 at 8:38 AM, Maximilian Hartig > wrote: >>> Of course, please find the source as well as the mesh attached below. I run with: >>> >>> -def_petscspace_order 2 -vel_petscspace_order 2 -snes_monitor -snes_converged_reason -ksp_converged_reason -ksp_monitor _true_residual -ksp_type fgmres -pc_type sor >>> >>> This sounds like over-constraining a point to me. I will try and run it soon, but I have a full schedule this week. 
The easiest >>> way to see if this is happening should be to print out the Section that gets made >>> >>> -dm_petscsection_view >>> >>> Thanks, >>> >>> Matt >>> >>> Thanks, >>> Max >>> >>> >>> >>> >>>> On 4 Mar 2017, at 11:34, Sander Arens > wrote: >>>> >>>> Hmm, strange you also get the error in serial. Can you maybe send a minimal working which demonstrates the error? >>>> >>>> Thanks, >>>> Sander >>>> >>>> On 3 March 2017 at 23:07, Maximilian Hartig > wrote: >>>> Yes Sander, your assessment is correct. I use DMPlex and specify the BC using DMLabel. I do however get this error also when running in serial. >>>> >>>> Thanks, >>>> Max >>>> >>>>> On 3 Mar 2017, at 22:14, Matthew Knepley > wrote: >>>>> >>>>> On Fri, Mar 3, 2017 at 12:56 PM, Sander Arens > wrote: >>>>> Max, >>>>> >>>>> I'm assuming you use DMPlex for your mesh? If so, did you only specify the faces in the DMLabel (and not vertices or edges). Do you get this error only in parallel? >>>>> >>>>> If so, I can confirm this bug. I submitted a pull request for this yesterday. >>>>> >>>>> Yep, I saw Sander's pull request. I will get in merged in tomorrow when I get home to Houston. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> On 3 March 2017 at 18:43, Lukas van de Wiel > wrote: >>>>> You have apparently preallocated the non-zeroes of you matrix, and the room was insufficient to accommodate all your equations. >>>>> >>>>> What happened after you tried: >>>>> >>>>> MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) >>>>> >>>>> >>>>> Cheers >>>>> Lukas >>>>> >>>>> On Fri, Mar 3, 2017 at 6:37 PM, Maximilian Hartig > wrote: >>>>> Hello, >>>>> >>>>> I am working on a transient structural FEM code with PETSc. I managed to create a slow but functioning program with the use of petscFE and a TS solver. The code runs fine until I try to restrict movement in all three spatial directions for one face. I then get the error which is attached below. >>>>> So apparently DMPlexMatSetClosure tries to write/read beyond what was priorly allocated. I do however not call MatSeqAIJSetPreallocation myself in the code. So I?m unsure where to start looking for the bug. In my understanding, PETSc should know from the DM how much space to allocate. >>>>> Could you kindly give me a hint? >>>>> >>>>> Thanks, >>>>> >>>>> Max >>>>> >>>>> 0 SNES Function norm 2.508668036663e-06 >>>>> [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- >>>>> [0]PETSC ERROR: Argument out of range >>>>> [0]PETSC ERROR: New nonzero at (41754,5) caused a malloc >>>>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn off this check >>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
>>>>> [0]PETSC ERROR: Petsc Development GIT revision: v3.7.5-3223-g99077fc GIT Date: 2017-02-28 13:41:43 -0600 >>>>> [0]PETSC ERROR: ./S3 on a arch-linux-gnu-intel named XXXXXXX by hartig Fri Mar 3 17:55:57 2017 >>>>> [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-gnu-intel --with-cc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicc --with-cxx=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicpc --with-fc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiifort --download-ml >>>>> [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 455 in /home/hartig/petsc/src/mat/impls/aij/seq/aij.c >>>>> [0]PETSC ERROR: #2 MatSetValues() line 1270 in /home/hartig/petsc/src/mat/interface/matrix.c >>>>> [0]PETSC ERROR: [0]ERROR in DMPlexMatSetClosure >>>>> [0]mat for sieve point 60 >>>>> [0]mat row indices[0] = 41754 >>>>> [0]mat row indices[1] = 41755 >>>>> [0]mat row indices[2] = 41756 >>>>> [0]mat row indices[3] = 41760 >>>>> [0]mat row indices[4] = 41761 >>>>> [0]mat row indices[5] = 41762 >>>>> [0]mat row indices[6] = 41766 >>>>> [0]mat row indices[7] = -41768 >>>>> [0]mat row indices[8] = 41767 >>>>> [0]mat row indices[9] = 41771 >>>>> [0]mat row indices[10] = -41773 >>>>> [0]mat row indices[11] = 41772 >>>>> [0]mat row indices[12] = 41776 >>>>> [0]mat row indices[13] = 41777 >>>>> [0]mat row indices[14] = 41778 >>>>> [0]mat row indices[15] = 41782 >>>>> [0]mat row indices[16] = -41784 >>>>> [0]mat row indices[17] = 41783 >>>>> [0]mat row indices[18] = 261 >>>>> [0]mat row indices[19] = -263 >>>>> [0]mat row indices[20] = 262 >>>>> [0]mat row indices[21] = 24318 >>>>> [0]mat row indices[22] = 24319 >>>>> [0]mat row indices[23] = 24320 >>>>> [0]mat row indices[24] = -7 >>>>> [0]mat row indices[25] = -8 >>>>> [0]mat row indices[26] = 6 >>>>> [0]mat row indices[27] = 1630 >>>>> [0]mat row indices[28] = -1632 >>>>> [0]mat row indices[29] = 1631 >>>>> [0]mat row indices[30] = 41757 >>>>> [0]mat row indices[31] = 41758 >>>>> [0]mat row indices[32] = 41759 >>>>> [0]mat row indices[33] = 41763 >>>>> [0]mat row indices[34] = 41764 >>>>> [0]mat row indices[35] = 41765 >>>>> [0]mat row indices[36] = 41768 >>>>> [0]mat row indices[37] = 41769 >>>>> [0]mat row indices[38] = 41770 >>>>> [0]mat row indices[39] = 41773 >>>>> [0]mat row indices[40] = 41774 >>>>> [0]mat row indices[41] = 41775 >>>>> [0]mat row indices[42] = 41779 >>>>> [0]mat row indices[43] = 41780 >>>>> [0]mat row indices[44] = 41781 >>>>> [0]mat row indices[45] = 41784 >>>>> [0]mat row indices[46] = 41785 >>>>> [0]mat row indices[47] = 41786 >>>>> [0]mat row indices[48] = 263 >>>>> [0]mat row indices[49] = 264 >>>>> [0]mat row indices[50] = 265 >>>>> [0]mat row indices[51] = 24321 >>>>> [0]mat row indices[52] = 24322 >>>>> [0]mat row indices[53] = 24323 >>>>> [0]mat row indices[54] = 5 >>>>> [0]mat row indices[55] = 6 >>>>> [0]mat row indices[56] = 7 >>>>> [0]mat row indices[57] = 1632 >>>>> [0]mat row indices[58] = 1633 >>>>> [0]mat row indices[59] = 1634 >>>>> [0] 1.29801 0.0998428 -0.275225 1.18171 -0.0093323 0.055045 1.18146 0.00525527 -0.11009 -0.588378 0.264666 -0.0275225 -2.39586 -0.210511 0.22018 -0.621071 0.0500786 0.137613 -0.180869 -0.0974804 0.0344031 -0.0302673 -0.09 0. -0.145175 -0.00383346 -0.00688063 0.300442 -0.00868618 -0.0275225 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 
>>>>> [0] 0.0998428 0.590663 -0.0320009 0.0270594 0.360043 0.0297282 -0.0965489 0.270936 -0.0652389 0.32647 -0.171351 -0.00845384 -0.206902 -0.657189 0.0279137 0.0500786 -0.197561 -0.0160508 -0.12748 -0.138996 0.0408591 -0.06 -0.105935 0.0192308 0.00161757 -0.0361182 0.0042968 -0.0141372 0.0855084 -0.000284088 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. >>>>> [0] -0.275225 -0.0320009 0.527521 -0.055045 0.0285918 0.234796 -0.165135 -0.0658071 0.322754 0.0275225 -0.0207062 -0.114921 0.33027 0.0418706 -0.678455 0.137613 -0.0160508 -0.235826 0.0344031 0.0312437 -0.0845583 0. 0.0288462 -0.0302673 0.00688063 0.00443884 -0.0268103 -0.0412838 -0.000426133 0.0857668 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 >>>>> [0] 1.18171 0.0270594 -0.055045 1.29651 -0.0821157 0.275225 1.1468 -0.0675282 0.11009 -0.637271 0.141058 -0.137613 -2.37741 -0.0649437 -0.22018 -0.588378 0.24647 0.0275225 -0.140937 -0.00838243 0.00688063 -0.0175533 -0.09 0. -0.16373 -0.0747355 -0.0344031 0.300255 -0.026882 0.0275225 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>> [0] -0.0093323 0.360043 0.0285918 -0.0821157 0.585404 -0.0263191 -0.205724 0.149643 0.0652389 0.141058 -0.254263 0.0452109 0.011448 -0.592598 -0.0279137 0.344666 -0.171351 -0.0207062 0.00616654 -0.0212853 -0.0115868 -0.06 -0.0614365 -0.0192308 -0.104736 -0.0790071 -0.0335691 -0.041431 0.084851 0.000284088 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>> [0] 0.055045 0.0297282 0.234796 0.275225 -0.0263191 0.526019 0.165135 0.0658071 0.288099 -0.137613 0.0452109 -0.252027 -0.33027 -0.0418706 -0.660001 -0.0275225 -0.00845384 -0.114921 -0.00688063 -0.0117288 -0.0225723 0. -0.0288462 -0.0175533 -0.0344031 -0.0239537 -0.0674185 0.0412838 0.000426133 0.085579 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 >>>>> [0] 1.18146 -0.0965489 -0.165135 1.1468 -0.205724 0.165135 3.70665 0.626591 3.1198e-14 -2.37741 0.0332522 -0.275225 -2.44501 -0.417727 4.66207e-17 -2.39586 -0.148706 0.275225 0.283843 0.0476669 0.0137613 0.00972927 0.06 0. 0.288268 0.0567649 -0.0137613 0.601523 0.0444318 -1.2387e-17 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>> [0] 0.00525527 0.270936 -0.0658071 -0.0675282 0.149643 0.0658071 0.626591 1.29259 -0.02916 -0.0867478 -0.592598 -0.0413024 -0.417727 -0.829208 6.46318e-18 -0.268706 -0.657189 0.0413024 0.03402 0.0715157 0.0179272 0.04 0.0340524 1.77708e-18 0.0704117 0.0870061 0.0112328 0.0644318 0.17325 -1.41666e-19 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 
>>>>> [0] -0.11009 -0.0652389 0.322754 0.11009 0.0652389 0.288099 3.12405e-14 -0.02916 1.21876 -0.275225 -0.0284819 -0.660001 9.50032e-17 9.55728e-18 -0.727604 0.275225 0.0284819 -0.678455 0.055045 0.0279687 0.0250605 0. 1.71863e-18 0.00972927 -0.055045 0.00119132 0.0294863 -1.47451e-17 -1.90582e-19 0.172172 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 >>>>> [0] -0.588378 0.32647 0.0275225 -0.637271 0.141058 -0.137613 -2.37741 -0.0867478 -0.275225 3.68138 0.0265907 3.13395e-14 1.1468 -0.107528 0.11009 1.18171 0.00886356 -3.13222e-14 -1.06248 -0.175069 0.158254 0.00657075 -0.03 0. 0.152747 -0.00526446 0.0344031 -1.50368 -0.0983732 0.0825675 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>> [0] 0.264666 -0.171351 -0.0207062 0.141058 -0.254263 0.0452109 0.0332522 -0.592598 -0.0284819 0.0265907 1.20415 -0.0932626 -0.165724 0.149643 0.0524184 0.00886356 0.360043 0.0419805 -0.131422 -0.326529 0.0132913 -0.02 0.0229976 -0.00641026 -0.0152645 0.0405681 -0.00489246 -0.14202 -0.432665 0.000852265 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >>>>> [0] -0.0275225 -0.00845384 -0.114921 -0.137613 0.0452109 -0.252027 -0.275225 -0.0413024 -0.660001 3.13785e-14 -0.0932626 1.19349 0.165135 0.0786276 0.288099 -3.12866e-14 0.0163395 0.234796 0.116971 0.0128652 -0.322147 0. -0.00961538 0.00657075 0.0344031 -0.00168733 0.0564359 0.123851 0.0012784 -0.430298 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>> [0] -2.39586 -0.206902 0.33027 -2.37741 0.011448 -0.33027 -2.44501 -0.417727 6.38053e-17 1.1468 -0.165724 0.165135 4.88671 0.435454 0. 1.18146 -0.0565489 -0.165135 0.307839 0.0658628 -0.0412838 -0.00807774 0.18 0. 0.303413 0.038569 0.0412838 -0.599871 0.115568 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>> [0] -0.210511 -0.657189 0.0418706 -0.0649437 -0.592598 -0.0418706 -0.417727 -0.829208 6.30468e-18 -0.107528 0.149643 0.0786276 0.435454 1.64686 0. -0.0347447 0.270936 -0.0786276 0.0613138 0.111396 -0.0100415 0.12 -0.0282721 0. 0.043118 0.0959058 0.0100415 0.175568 -0.167469 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >>>>> [0] 0.22018 0.0279137 -0.678455 -0.22018 -0.0279137 -0.660001 4.70408e-17 7.53383e-18 -0.727604 0.11009 0.0524184 0.288099 0. 0. 1.4519 -0.11009 -0.0524184 0.322754 -0.0275225 -0.00669434 0.0931634 0. 0. -0.00807774 0.0275225 0.00669434 0.0887375 0. 0. -0.17052 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. 
-1.04322e-11 >>>>> [0] -0.621071 0.0500786 0.137613 -0.588378 0.344666 -0.0275225 -2.39586 -0.268706 0.275225 1.18171 0.00886356 -3.12954e-14 1.18146 -0.0347447 -0.11009 3.64748 0.0265907 3.12693e-14 0.152935 0.0174804 -0.0344031 0.00233276 -0.03 0. -1.0575 -0.0704425 -0.158254 -1.50311 -0.0437857 -0.0825675 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>> [0] 0.0500786 -0.197561 -0.0160508 0.24647 -0.171351 -0.00845384 -0.148706 -0.657189 0.0284819 0.00886356 0.360043 0.0163395 -0.0565489 0.270936 -0.0524184 0.0265907 1.08549 0.0349425 0.00748035 0.0412255 -0.00239755 -0.02 0.00816465 0.00641026 -0.0540894 -0.309066 -0.00600133 -0.0601388 -0.430693 -0.000852265 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. >>>>> [0] 0.137613 -0.0160508 -0.235826 0.0275225 -0.0207062 -0.114921 0.275225 0.0413024 -0.678455 -3.13299e-14 0.0419805 0.234796 -0.165135 -0.0786276 0.322754 3.12753e-14 0.0349425 1.15959 -0.0344031 -0.00560268 0.0566238 0. 0.00961538 0.00233276 -0.116971 -0.00557519 -0.317157 -0.123851 -0.0012784 -0.429734 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 >>>>> [0] -0.180869 -0.12748 0.0344031 -0.140937 0.00616654 -0.00688063 0.283843 0.03402 0.055045 -1.06248 -0.131422 0.116971 0.307839 0.0613138 -0.0275225 0.152935 0.00748035 -0.0344031 0.479756 0.112441 -0.103209 0.00698363 0.03 0. -0.14792 -0.0238335 -0.00688063 0.300855 0.0313138 -0.0275225 -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>> [0] -0.0974804 -0.138996 0.0312437 -0.00838243 -0.0212853 -0.0117288 0.0476669 0.0715157 0.0279687 -0.175069 -0.326529 0.0128652 0.0658628 0.111396 -0.00669434 0.0174804 0.0412255 -0.00560268 0.112441 0.197005 -0.0360388 0.02 0.0244427 -0.00641026 -0.0283824 -0.045728 -0.00531859 0.0458628 0.0869535 -0.000284088 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. >>>>> [0] 0.0344031 0.0408591 -0.0845583 0.00688063 -0.0115868 -0.0225723 0.0137613 0.0179272 0.0250605 0.158254 0.0132913 -0.322147 -0.0412838 -0.0100415 0.0931634 -0.0344031 -0.00239755 0.0566238 -0.103209 -0.0360388 0.190822 0. -0.00961538 0.00698363 0.00688063 -0.00197142 -0.029556 -0.0412838 -0.000426133 0.0861797 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>>> [0] -0.0302673 -0.06 0. -0.0175533 -0.06 0. 0.00972927 0.04 0. 0.00657075 -0.02 0. -0.00807774 0.12 0. 0.00233276 -0.02 0. 0.00698363 0.02 0. 0.0279492 0. 0. 0.00274564 0.02 0. -0.000412882 -0.04 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>> [0] -0.09 -0.105935 0.0288462 -0.09 -0.0614365 -0.0288462 0.06 0.0340524 3.0201e-18 -0.03 0.0229976 -0.00961538 0.18 -0.0282721 0. 
-0.03 0.00816465 0.00961538 0.03 0.0244427 -0.00961538 0. 0.097822 0. 0.03 0.00960973 0.00961538 -0.06 -0.00144509 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. >>>>> [0] 0. 0.0192308 -0.0302673 0. -0.0192308 -0.0175533 0. 1.8315e-18 0.00972927 0. -0.00641026 0.00657075 0. 0. -0.00807774 0. 0.00641026 0.00233276 0. -0.00641026 0.00698363 0. 0. 0.0279492 0. 0.00641026 0.00274564 0. 0. -0.000412882 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>>> [0] -0.145175 0.00161757 0.00688063 -0.16373 -0.104736 -0.0344031 0.288268 0.0704117 -0.055045 0.152747 -0.0152645 0.0344031 0.303413 0.043118 0.0275225 -1.0575 -0.0540894 -0.116971 -0.14792 -0.0283824 0.00688063 0.00274564 0.03 0. 0.466478 0.0442066 0.103209 0.300667 0.013118 0.0275225 -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>> [0] -0.00383346 -0.0361182 0.00443884 -0.0747355 -0.0790071 -0.0239537 0.0567649 0.0870061 0.00119132 -0.00526446 0.0405681 -0.00168733 0.038569 0.0959058 0.00669434 -0.0704425 -0.309066 -0.00557519 -0.0238335 -0.045728 -0.00197142 0.02 0.00960973 0.00641026 0.0442066 0.150534 0.0141688 0.018569 0.0862961 0.000284088 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. >>>>> [0] -0.00688063 0.0042968 -0.0268103 -0.0344031 -0.0335691 -0.0674185 -0.0137613 0.0112328 0.0294863 0.0344031 -0.00489246 0.0564359 0.0412838 0.0100415 0.0887375 -0.158254 -0.00600133 -0.317157 -0.00688063 -0.00531859 -0.029556 0. 0.00961538 0.00274564 0.103209 0.0141688 0.177545 0.0412838 0.000426133 0.0859919 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 >>>>> [0] 0.300442 -0.0141372 -0.0412838 0.300255 -0.041431 0.0412838 0.601523 0.0644318 -1.72388e-17 -1.50368 -0.14202 0.123851 -0.599871 0.175568 0. -1.50311 -0.0601388 -0.123851 0.300855 0.0458628 -0.0412838 -0.000412882 -0.06 0. 0.300667 0.018569 0.0412838 1.80333 0.0132953 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. >>>>> [0] -0.00868618 0.0855084 -0.000426133 -0.026882 0.084851 0.000426133 0.0444318 0.17325 -1.17738e-19 -0.0983732 -0.432665 0.0012784 0.115568 -0.167469 0. -0.0437857 -0.430693 -0.0012784 0.0313138 0.0869535 -0.000426133 -0.04 -0.00144509 0. 0.013118 0.0862961 0.000426133 0.0132953 0.515413 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. >>>>> [0] -0.0275225 -0.000284088 0.0857668 0.0275225 0.000284088 0.085579 -1.41488e-17 -8.91502e-20 0.172172 0.0825675 0.000852265 -0.430298 0. 0. -0.17052 -0.0825675 -0.000852265 -0.429734 -0.0275225 -0.000284088 0.0861797 0. 0. -0.000412882 0.0275225 0.000284088 0.0859919 0. 0. 0.515276 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 
-1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 >>>>> [0] -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>> [0] 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. >>>>> [0] 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 >>>>> [0] -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>> [0] 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. >>>>> [0] 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 >>>>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. >>>>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 >>>>> [0] -5.31578e-06 0. 0. 
-2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>> [0] 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. >>>>> [0] 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 >>>>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. >>>>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 >>>>> [0] -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>> [0] 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. >>>>> [0] 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 >>>>> [0] 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. 
-3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>> [0] 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. >>>>> [0] 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 >>>>> [0] 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>> [0] 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. >>>>> [0] 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 >>>>> [0] 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>>>> [0] 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. >>>>> [0] 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 >>>>> [0] 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. 
-6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. >>>>> [0] 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. >>>>> [0] 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 >>>>> [0]PETSC ERROR: #3 DMPlexMatSetClosure() line 5480 in /home/hartig/petsc/src/dm/impls/plex/plex.c >>>>> [0]PETSC ERROR: #4 DMPlexComputeJacobian_Internal() line 2301 in /home/hartig/petsc/src/snes/utils/dmplexsnes.c >>>>> [0]PETSC ERROR: #5 DMPlexTSComputeIJacobianFEM() line 233 in /home/hartig/petsc/src/ts/utils/dmplexts.c >>>>> [0]PETSC ERROR: #6 TSComputeIJacobian_DMLocal() line 131 in /home/hartig/petsc/src/ts/utils/dmlocalts.c >>>>> [0]PETSC ERROR: #7 TSComputeIJacobian() line 882 in /home/hartig/petsc/src/ts/interface/ts.c >>>>> [0]PETSC ERROR: #8 SNESTSFormJacobian_Theta() line 515 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>> [0]PETSC ERROR: #9 SNESTSFormJacobian() line 5044 in /home/hartig/petsc/src/ts/interface/ts.c >>>>> [0]PETSC ERROR: #10 SNESComputeJacobian() line 2276 in /home/hartig/petsc/src/snes/interface/snes.c >>>>> [0]PETSC ERROR: #11 SNESSolve_NEWTONLS() line 222 in /home/hartig/petsc/src/snes/impls/ls/ls.c >>>>> [0]PETSC ERROR: #12 SNESSolve() line 3967 in /home/hartig/petsc/src/snes/interface/snes.c >>>>> [0]PETSC ERROR: #13 TS_SNESSolve() line 171 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>> [0]PETSC ERROR: #14 TSStep_Theta() line 211 in /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>> [0]PETSC ERROR: #15 TSStep() line 3809 in /home/hartig/petsc/src/ts/interface/ts.c >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>>>> -- Norbert Wiener >>>> >>>> >>> >>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >>> -- Norbert Wiener >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Thu Mar 9 08:00:30 2017 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 9 Mar 2017 08:00:30 -0600 Subject: [petsc-users] Problems imposing boundary conditions In-Reply-To: References: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com> <69E95D27-72CD-4BFA-AC62-907727998DCB@gmail.com> <3383F285-34C3-48A2-BF7C-56ED80FDF236@gmail.com> Message-ID: On Thu, Mar 9, 2017 at 7:45 AM, Maximilian Hartig wrote: > Ok thank you, so can I just do something along the lines of: > PetscSectionGetFieldConstraintDof(?) > to find the overconstrained vertices and then correct them manually with > PetscSectionSetFieldConstraintDof() > PetscSectionSetFieldConstraintIndices() > ? > You can do exactly that. And that is the same thing I would do, although I would probably go to the place where they are being input https://bitbucket.org/petsc/petsc/src/79f3641cdf8f54d0fc7a5ae1e04e0887d8c00e9b/src/dm/impls/plex/plex.c?at=master&fileviewer=file-view-default#plex.c-3164 and try to filter them out. However, its a little tricky since the space has already been allocated and will have to be adjusted. It will likely take a more thorough going rewrite to do it. Or will this mess up the Jacobian and the iFunction? > Section creation is independent of these. This allows a user to do whatever they want here instead of using my default mechanisms. Thanks, Matt > Thanks, > Max > > > On 7 Mar 2017, at 18:21, Matthew Knepley wrote: > > On Tue, Mar 7, 2017 at 11:11 AM, Maximilian Hartig com> wrote: > >> >> On 7 Mar 2017, at 16:29, Matthew Knepley wrote: >> >> On Tue, Mar 7, 2017 at 3:28 AM, Maximilian Hartig > om> wrote: >> >>> It seems you are correct. In theory, the problem should not be over >>> constrained. It is 1/4 of a simple hollow cylinder geometry with rotational >>> symmetry around the z-axis. I restrict movement completely on the upper and >>> lower (z) end as well as movement in x- and y- direction respectively on >>> the symmetry planes. >>> I am not completely sure what I am looking at with the output of >>> -dm_petscsection_view. But these lines struck me as odd: >>> >>> >>> (5167) dim 3 offset 0 constrained 0 1 1 2 >>> (5168) dim 3 offset 6 constrained 0 1 1 2 >>> . >>> . >>> . >>> (5262) dim 3 offset 0 constrained 0 0 1 2 >>> (5263) dim 3 offset 6 constrained 0 0 1 2 >>> >>> >>> It seems that vertices that are part of the closures of both Face Sets >>> get restricted twice in their respective degree of freedom. >>> >> >> Yes, that is exactly what happens. >> >> >>> This does however also happen when restricting movement in x- direction >>> only for upper and lower faces. In that case without the solver producing >>> an error: >>> (20770) dim 3 offset 24 constrained 0 0 >>> (20771) dim 3 offset 30 constrained 0 0 >>> (20772) dim 3 offset 36 constrained 0 0 >>> (20773) dim 3 offset 42 constrained 0 0 >>> >> >> The fact that this does not SEGV is just luck. >> >> Now, I did not put in any guard against this because I was not sure what >> should happen. We could error if a local index is repeated, or >> we could ignore it. This seems unsafe if you try to constrain it with two >> different values, but there is no way for me to tell if the values are >> compatible. Thus I just believed whatever the user told me. >> >> >> What is the intention here? It would be straightforward to ignore >> duplicates I guess. >> >> Yes, ignoring duplicates would solve the problem then. 
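For reference, locating the over-constrained points could look roughly like the sketch below. It is untested; it assumes dm is the problem DM, that DMGetDefaultSection() returns the section shown by -dm_petscsection_view, and that the constraint indices of a point are listed in nondecreasing order (as in the output above):

    PetscSection s;
    PetscInt     pStart, pEnd, p, f, numFields;

    DMGetDefaultSection(dm, &s);
    PetscSectionGetNumFields(s, &numFields);
    PetscSectionGetChart(s, &pStart, &pEnd);
    for (p = pStart; p < pEnd; ++p) {
      for (f = 0; f < numFields; ++f) {
        const PetscInt *ind;
        PetscInt        cdof, i;

        PetscSectionGetFieldConstraintDof(s, p, f, &cdof);
        if (cdof < 2) continue;
        PetscSectionGetFieldConstraintIndices(s, p, f, &ind);
        for (i = 1; i < cdof; ++i) {
          if (ind[i] == ind[i-1]) {
            /* field f of point p constrains local dof ind[i] twice; a candidate for
               manual correction with PetscSectionSetFieldConstraintDof() /
               PetscSectionSetFieldConstraintIndices() */
          }
        }
      }
    }

Correcting such points afterwards is the tricky part, since the section has already been set up.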
I can think of no >> example where imposing two different Dirichlet BC on the same DOF of the >> same vertex would make sense (I might be wrong of course). That means the >> only issue is to determine wether the first or the second BC is the correct >> one to be imposed. >> I don?t know how I could filter out the vertices in question from the >> Label. I use GMSH to construct my meshes and could create a label for the >> edges without too much effort. But I cannot see an easy way to exclude them >> when imposing the BC. >> I tried to figure out where PETSC actually imposes the BC but got lost a >> bit in the source. Could you kindly point me towards the location? >> > > It is in stages. > > 1) You make a structure with AddBoundary() that has a Label and function > for boundary values > > 2) The PetscSection gets created with stores which points have constraints > and which components they affect > > 3) When global Vecs are made, these constraints are left out > > 4) When local Vecs are made, they are left in > > 5) DMPlexInsertBoundaryValues() is called on local Vecs, and puts in the > values from your functions. This usually happens > when you copy the solutions values from the global Vec to a local Vec > to being assembly. > > Thanks, > > Matt > > >> Thanks, >> Max >> >> >> >> Thanks, >> >> Matt >> >> >>> Thanks, >>> Max >>> >>> On 6 Mar 2017, at 14:43, Matthew Knepley wrote: >>> >>> On Mon, Mar 6, 2017 at 8:38 AM, Maximilian Hartig < >>> imilian.hartig at gmail.com> wrote: >>> >>>> Of course, please find the source as well as the mesh attached below. I >>>> run with: >>>> >>>> -def_petscspace_order 2 -vel_petscspace_order 2 -snes_monitor >>>> -snes_converged_reason -ksp_converged_reason -ksp_monitor _true_residual >>>> -ksp_type fgmres -pc_type sor >>>> >>> >>> This sounds like over-constraining a point to me. I will try and run it >>> soon, but I have a full schedule this week. The easiest >>> way to see if this is happening should be to print out the Section that >>> gets made >>> >>> -dm_petscsection_view >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thanks, >>>> Max >>>> >>>> >>>> >>>> >>>> On 4 Mar 2017, at 11:34, Sander Arens wrote: >>>> >>>> Hmm, strange you also get the error in serial. Can you maybe send a >>>> minimal working which demonstrates the error? >>>> >>>> Thanks, >>>> Sander >>>> >>>> On 3 March 2017 at 23:07, Maximilian Hartig >>>> wrote: >>>> >>>>> Yes Sander, your assessment is correct. I use DMPlex and specify the >>>>> BC using DMLabel. I do however get this error also when running in serial. >>>>> >>>>> Thanks, >>>>> Max >>>>> >>>>> On 3 Mar 2017, at 22:14, Matthew Knepley wrote: >>>>> >>>>> On Fri, Mar 3, 2017 at 12:56 PM, Sander Arens >>>>> wrote: >>>>> >>>>>> Max, >>>>>> >>>>>> I'm assuming you use DMPlex for your mesh? If so, did you only >>>>>> specify the faces in the DMLabel (and not vertices or edges). Do you get >>>>>> this error only in parallel? >>>>>> >>>>>> If so, I can confirm this bug. I submitted a pull request for this >>>>>> yesterday. >>>>>> >>>>> >>>>> Yep, I saw Sander's pull request. I will get in merged in tomorrow >>>>> when I get home to Houston. >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> On 3 March 2017 at 18:43, Lukas van de Wiel >>>>> com> wrote: >>>>>> >>>>>>> You have apparently preallocated the non-zeroes of you matrix, and >>>>>>> the room was insufficient to accommodate all your equations. 
>>>>>>> >>>>>>> What happened after you tried: >>>>>>> >>>>>>> MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) >>>>>>> >>>>>>> >>>>>>> Cheers >>>>>>> Lukas >>>>>>> >>>>>>> On Fri, Mar 3, 2017 at 6:37 PM, Maximilian Hartig < >>>>>>> imilian.hartig at gmail.com> wrote: >>>>>>> >>>>>>>> Hello, >>>>>>>> >>>>>>>> I am working on a transient structural FEM code with PETSc. I >>>>>>>> managed to create a slow but functioning program with the use of petscFE >>>>>>>> and a TS solver. The code runs fine until I try to restrict movement in all >>>>>>>> three spatial directions for one face. I then get the error which is >>>>>>>> attached below. >>>>>>>> So apparently DMPlexMatSetClosure tries to write/read beyond what >>>>>>>> was priorly allocated. I do however not call MatSeqAIJSetPreallocation >>>>>>>> myself in the code. So I?m unsure where to start looking for the bug. In my >>>>>>>> understanding, PETSc should know from the DM how much space to allocate. >>>>>>>> Could you kindly give me a hint? >>>>>>>> >>>>>>>> Thanks, >>>>>>>> >>>>>>>> Max >>>>>>>> >>>>>>>> 0 SNES Function norm 2.508668036663e-06 >>>>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>>>> -------------------------------------------------------------- >>>>>>>> [0]PETSC ERROR: Argument out of range >>>>>>>> [0]PETSC ERROR: New nonzero at (41754,5) caused a malloc >>>>>>>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) >>>>>>>> to turn off this check >>>>>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/pet >>>>>>>> sc/documentation/faq.html for trouble shooting. >>>>>>>> [0]PETSC ERROR: Petsc Development GIT revision: >>>>>>>> v3.7.5-3223-g99077fc GIT Date: 2017-02-28 13:41:43 -0600 >>>>>>>> [0]PETSC ERROR: ./S3 on a arch-linux-gnu-intel named XXXXXXX by >>>>>>>> hartig Fri Mar 3 17:55:57 2017 >>>>>>>> [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-gnu-intel >>>>>>>> --with-cc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicc >>>>>>>> --with-cxx=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicpc >>>>>>>> --with-fc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiifort >>>>>>>> --download-ml >>>>>>>> [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 455 in >>>>>>>> /home/hartig/petsc/src/mat/impls/aij/seq/aij.c >>>>>>>> [0]PETSC ERROR: #2 MatSetValues() line 1270 in >>>>>>>> /home/hartig/petsc/src/mat/interface/matrix.c >>>>>>>> [0]PETSC ERROR: [0]ERROR in DMPlexMatSetClosure >>>>>>>> [0]mat for sieve point 60 >>>>>>>> [0]mat row indices[0] = 41754 >>>>>>>> [0]mat row indices[1] = 41755 >>>>>>>> [0]mat row indices[2] = 41756 >>>>>>>> [0]mat row indices[3] = 41760 >>>>>>>> [0]mat row indices[4] = 41761 >>>>>>>> [0]mat row indices[5] = 41762 >>>>>>>> [0]mat row indices[6] = 41766 >>>>>>>> [0]mat row indices[7] = -41768 >>>>>>>> [0]mat row indices[8] = 41767 >>>>>>>> [0]mat row indices[9] = 41771 >>>>>>>> [0]mat row indices[10] = -41773 >>>>>>>> [0]mat row indices[11] = 41772 >>>>>>>> [0]mat row indices[12] = 41776 >>>>>>>> [0]mat row indices[13] = 41777 >>>>>>>> [0]mat row indices[14] = 41778 >>>>>>>> [0]mat row indices[15] = 41782 >>>>>>>> [0]mat row indices[16] = -41784 >>>>>>>> [0]mat row indices[17] = 41783 >>>>>>>> [0]mat row indices[18] = 261 >>>>>>>> [0]mat row indices[19] = -263 >>>>>>>> [0]mat row indices[20] = 262 >>>>>>>> [0]mat row indices[21] = 24318 >>>>>>>> [0]mat row indices[22] = 24319 >>>>>>>> [0]mat row indices[23] = 24320 >>>>>>>> [0]mat row indices[24] = -7 >>>>>>>> [0]mat row indices[25] = -8 >>>>>>>> [0]mat row indices[26] = 
6 >>>>>>>> [0]mat row indices[27] = 1630 >>>>>>>> [0]mat row indices[28] = -1632 >>>>>>>> [0]mat row indices[29] = 1631 >>>>>>>> [0]mat row indices[30] = 41757 >>>>>>>> [0]mat row indices[31] = 41758 >>>>>>>> [0]mat row indices[32] = 41759 >>>>>>>> [0]mat row indices[33] = 41763 >>>>>>>> [0]mat row indices[34] = 41764 >>>>>>>> [0]mat row indices[35] = 41765 >>>>>>>> [0]mat row indices[36] = 41768 >>>>>>>> [0]mat row indices[37] = 41769 >>>>>>>> [0]mat row indices[38] = 41770 >>>>>>>> [0]mat row indices[39] = 41773 >>>>>>>> [0]mat row indices[40] = 41774 >>>>>>>> [0]mat row indices[41] = 41775 >>>>>>>> [0]mat row indices[42] = 41779 >>>>>>>> [0]mat row indices[43] = 41780 >>>>>>>> [0]mat row indices[44] = 41781 >>>>>>>> [0]mat row indices[45] = 41784 >>>>>>>> [0]mat row indices[46] = 41785 >>>>>>>> [0]mat row indices[47] = 41786 >>>>>>>> [0]mat row indices[48] = 263 >>>>>>>> [0]mat row indices[49] = 264 >>>>>>>> [0]mat row indices[50] = 265 >>>>>>>> [0]mat row indices[51] = 24321 >>>>>>>> [0]mat row indices[52] = 24322 >>>>>>>> [0]mat row indices[53] = 24323 >>>>>>>> [0]mat row indices[54] = 5 >>>>>>>> [0]mat row indices[55] = 6 >>>>>>>> [0]mat row indices[56] = 7 >>>>>>>> [0]mat row indices[57] = 1632 >>>>>>>> [0]mat row indices[58] = 1633 >>>>>>>> [0]mat row indices[59] = 1634 >>>>>>>> [0] 1.29801 0.0998428 -0.275225 1.18171 -0.0093323 0.055045 1.18146 >>>>>>>> 0.00525527 -0.11009 -0.588378 0.264666 -0.0275225 -2.39586 -0.210511 >>>>>>>> 0.22018 -0.621071 0.0500786 0.137613 -0.180869 -0.0974804 0.0344031 >>>>>>>> -0.0302673 -0.09 0. -0.145175 -0.00383346 -0.00688063 0.300442 -0.00868618 >>>>>>>> -0.0275225 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>> [0] 0.0998428 0.590663 -0.0320009 0.0270594 0.360043 0.0297282 >>>>>>>> -0.0965489 0.270936 -0.0652389 0.32647 -0.171351 -0.00845384 -0.206902 >>>>>>>> -0.657189 0.0279137 0.0500786 -0.197561 -0.0160508 -0.12748 -0.138996 >>>>>>>> 0.0408591 -0.06 -0.105935 0.0192308 0.00161757 -0.0361182 0.0042968 >>>>>>>> -0.0141372 0.0855084 -0.000284088 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. >>>>>>>> [0] -0.275225 -0.0320009 0.527521 -0.055045 0.0285918 0.234796 >>>>>>>> -0.165135 -0.0658071 0.322754 0.0275225 -0.0207062 -0.114921 0.33027 >>>>>>>> 0.0418706 -0.678455 0.137613 -0.0160508 -0.235826 0.0344031 0.0312437 >>>>>>>> -0.0845583 0. 0.0288462 -0.0302673 0.00688063 0.00443884 -0.0268103 >>>>>>>> -0.0412838 -0.000426133 0.0857668 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 >>>>>>>> [0] 1.18171 0.0270594 -0.055045 1.29651 -0.0821157 0.275225 1.1468 >>>>>>>> -0.0675282 0.11009 -0.637271 0.141058 -0.137613 -2.37741 -0.0649437 >>>>>>>> -0.22018 -0.588378 0.24647 0.0275225 -0.140937 -0.00838243 0.00688063 >>>>>>>> -0.0175533 -0.09 0. -0.16373 -0.0747355 -0.0344031 0.300255 -0.026882 >>>>>>>> 0.0275225 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 >>>>>>>> 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>>> 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. 
>>>>>>>> [0] -0.0093323 0.360043 0.0285918 -0.0821157 0.585404 -0.0263191 >>>>>>>> -0.205724 0.149643 0.0652389 0.141058 -0.254263 0.0452109 0.011448 >>>>>>>> -0.592598 -0.0279137 0.344666 -0.171351 -0.0207062 0.00616654 -0.0212853 >>>>>>>> -0.0115868 -0.06 -0.0614365 -0.0192308 -0.104736 -0.0790071 -0.0335691 >>>>>>>> -0.041431 0.084851 0.000284088 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>>>>> [0] 0.055045 0.0297282 0.234796 0.275225 -0.0263191 0.526019 >>>>>>>> 0.165135 0.0658071 0.288099 -0.137613 0.0452109 -0.252027 -0.33027 >>>>>>>> -0.0418706 -0.660001 -0.0275225 -0.00845384 -0.114921 -0.00688063 >>>>>>>> -0.0117288 -0.0225723 0. -0.0288462 -0.0175533 -0.0344031 -0.0239537 >>>>>>>> -0.0674185 0.0412838 0.000426133 0.085579 0. 0. 4.17288e-11 0. 0. >>>>>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>> 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>> -1.56483e-11 >>>>>>>> [0] 1.18146 -0.0965489 -0.165135 1.1468 -0.205724 0.165135 3.70665 >>>>>>>> 0.626591 3.1198e-14 -2.37741 0.0332522 -0.275225 -2.44501 -0.417727 >>>>>>>> 4.66207e-17 -2.39586 -0.148706 0.275225 0.283843 0.0476669 0.0137613 >>>>>>>> 0.00972927 0.06 0. 0.288268 0.0567649 -0.0137613 0.601523 0.0444318 >>>>>>>> -1.2387e-17 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>> [0] 0.00525527 0.270936 -0.0658071 -0.0675282 0.149643 0.0658071 >>>>>>>> 0.626591 1.29259 -0.02916 -0.0867478 -0.592598 -0.0413024 -0.417727 >>>>>>>> -0.829208 6.46318e-18 -0.268706 -0.657189 0.0413024 0.03402 0.0715157 >>>>>>>> 0.0179272 0.04 0.0340524 1.77708e-18 0.0704117 0.0870061 0.0112328 >>>>>>>> 0.0644318 0.17325 -1.41666e-19 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>>>>> [0] -0.11009 -0.0652389 0.322754 0.11009 0.0652389 0.288099 >>>>>>>> 3.12405e-14 -0.02916 1.21876 -0.275225 -0.0284819 -0.660001 9.50032e-17 >>>>>>>> 9.55728e-18 -0.727604 0.275225 0.0284819 -0.678455 0.055045 0.0279687 >>>>>>>> 0.0250605 0. 1.71863e-18 0.00972927 -0.055045 0.00119132 0.0294863 >>>>>>>> -1.47451e-17 -1.90582e-19 0.172172 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. >>>>>>>> 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 >>>>>>>> [0] -0.588378 0.32647 0.0275225 -0.637271 0.141058 -0.137613 >>>>>>>> -2.37741 -0.0867478 -0.275225 3.68138 0.0265907 3.13395e-14 1.1468 >>>>>>>> -0.107528 0.11009 1.18171 0.00886356 -3.13222e-14 -1.06248 -0.175069 >>>>>>>> 0.158254 0.00657075 -0.03 0. 0.152747 -0.00526446 0.0344031 -1.50368 >>>>>>>> -0.0983732 0.0825675 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 
>>>>>>>> [0] 0.264666 -0.171351 -0.0207062 0.141058 -0.254263 0.0452109 >>>>>>>> 0.0332522 -0.592598 -0.0284819 0.0265907 1.20415 -0.0932626 -0.165724 >>>>>>>> 0.149643 0.0524184 0.00886356 0.360043 0.0419805 -0.131422 -0.326529 >>>>>>>> 0.0132913 -0.02 0.0229976 -0.00641026 -0.0152645 0.0405681 -0.00489246 >>>>>>>> -0.14202 -0.432665 0.000852265 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>>>> 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >>>>>>>> [0] -0.0275225 -0.00845384 -0.114921 -0.137613 0.0452109 -0.252027 >>>>>>>> -0.275225 -0.0413024 -0.660001 3.13785e-14 -0.0932626 1.19349 0.165135 >>>>>>>> 0.0786276 0.288099 -3.12866e-14 0.0163395 0.234796 0.116971 0.0128652 >>>>>>>> -0.322147 0. -0.00961538 0.00657075 0.0344031 -0.00168733 0.0564359 >>>>>>>> 0.123851 0.0012784 -0.430298 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>>>> 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>>> [0] -2.39586 -0.206902 0.33027 -2.37741 0.011448 -0.33027 -2.44501 >>>>>>>> -0.417727 6.38053e-17 1.1468 -0.165724 0.165135 4.88671 0.435454 0. 1.18146 >>>>>>>> -0.0565489 -0.165135 0.307839 0.0658628 -0.0412838 -0.00807774 0.18 0. >>>>>>>> 0.303413 0.038569 0.0412838 -0.599871 0.115568 0. 4.17288e-11 0. 0. >>>>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>>> 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>> -1.04322e-11 0. 0. >>>>>>>> [0] -0.210511 -0.657189 0.0418706 -0.0649437 -0.592598 -0.0418706 >>>>>>>> -0.417727 -0.829208 6.30468e-18 -0.107528 0.149643 0.0786276 0.435454 >>>>>>>> 1.64686 0. -0.0347447 0.270936 -0.0786276 0.0613138 0.111396 -0.0100415 >>>>>>>> 0.12 -0.0282721 0. 0.043118 0.0959058 0.0100415 0.175568 -0.167469 0. 0. >>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. >>>>>>>> [0] 0.22018 0.0279137 -0.678455 -0.22018 -0.0279137 -0.660001 >>>>>>>> 4.70408e-17 7.53383e-18 -0.727604 0.11009 0.0524184 0.288099 0. 0. 1.4519 >>>>>>>> -0.11009 -0.0524184 0.322754 -0.0275225 -0.00669434 0.0931634 0. 0. >>>>>>>> -0.00807774 0.0275225 0.00669434 0.0887375 0. 0. -0.17052 0. 0. 4.17288e-11 >>>>>>>> 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. >>>>>>>> 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>>>>> 0. -1.04322e-11 >>>>>>>> [0] -0.621071 0.0500786 0.137613 -0.588378 0.344666 -0.0275225 >>>>>>>> -2.39586 -0.268706 0.275225 1.18171 0.00886356 -3.12954e-14 1.18146 >>>>>>>> -0.0347447 -0.11009 3.64748 0.0265907 3.12693e-14 0.152935 0.0174804 >>>>>>>> -0.0344031 0.00233276 -0.03 0. -1.0575 -0.0704425 -0.158254 -1.50311 >>>>>>>> -0.0437857 -0.0825675 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>> [0] 0.0500786 -0.197561 -0.0160508 0.24647 -0.171351 -0.00845384 >>>>>>>> -0.148706 -0.657189 0.0284819 0.00886356 0.360043 0.0163395 -0.0565489 >>>>>>>> 0.270936 -0.0524184 0.0265907 1.08549 0.0349425 0.00748035 0.0412255 >>>>>>>> -0.00239755 -0.02 0.00816465 0.00641026 -0.0540894 -0.309066 -0.00600133 >>>>>>>> -0.0601388 -0.430693 -0.000852265 0. 
2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. >>>>>>>> [0] 0.137613 -0.0160508 -0.235826 0.0275225 -0.0207062 -0.114921 >>>>>>>> 0.275225 0.0413024 -0.678455 -3.13299e-14 0.0419805 0.234796 -0.165135 >>>>>>>> -0.0786276 0.322754 3.12753e-14 0.0349425 1.15959 -0.0344031 -0.00560268 >>>>>>>> 0.0566238 0. 0.00961538 0.00233276 -0.116971 -0.00557519 -0.317157 >>>>>>>> -0.123851 -0.0012784 -0.429734 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 >>>>>>>> [0] -0.180869 -0.12748 0.0344031 -0.140937 0.00616654 -0.00688063 >>>>>>>> 0.283843 0.03402 0.055045 -1.06248 -0.131422 0.116971 0.307839 0.0613138 >>>>>>>> -0.0275225 0.152935 0.00748035 -0.0344031 0.479756 0.112441 -0.103209 >>>>>>>> 0.00698363 0.03 0. -0.14792 -0.0238335 -0.00688063 0.300855 0.0313138 >>>>>>>> -0.0275225 -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 1.56483e-11 0. 0. >>>>>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>>>>> [0] -0.0974804 -0.138996 0.0312437 -0.00838243 -0.0212853 >>>>>>>> -0.0117288 0.0476669 0.0715157 0.0279687 -0.175069 -0.326529 0.0128652 >>>>>>>> 0.0658628 0.111396 -0.00669434 0.0174804 0.0412255 -0.00560268 0.112441 >>>>>>>> 0.197005 -0.0360388 0.02 0.0244427 -0.00641026 -0.0283824 -0.045728 >>>>>>>> -0.00531859 0.0458628 0.0869535 -0.000284088 0. -1.04322e-11 0. 0. >>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>> -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>>>>> 2.60805e-12 0. >>>>>>>> [0] 0.0344031 0.0408591 -0.0845583 0.00688063 -0.0115868 -0.0225723 >>>>>>>> 0.0137613 0.0179272 0.0250605 0.158254 0.0132913 -0.322147 -0.0412838 >>>>>>>> -0.0100415 0.0931634 -0.0344031 -0.00239755 0.0566238 -0.103209 -0.0360388 >>>>>>>> 0.190822 0. -0.00961538 0.00698363 0.00688063 -0.00197142 -0.029556 >>>>>>>> -0.0412838 -0.000426133 0.0861797 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>>>>> 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. >>>>>>>> 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>>>>>> [0] -0.0302673 -0.06 0. -0.0175533 -0.06 0. 0.00972927 0.04 0. >>>>>>>> 0.00657075 -0.02 0. -0.00807774 0.12 0. 0.00233276 -0.02 0. 0.00698363 0.02 >>>>>>>> 0. 0.0279492 0. 0. 0.00274564 0.02 0. -0.000412882 -0.04 0. -1.04322e-11 0. >>>>>>>> 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >>>>>>>> 0. -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>>>>> 2.60805e-12 0. 0. >>>>>>>> [0] -0.09 -0.105935 0.0288462 -0.09 -0.0614365 -0.0288462 0.06 >>>>>>>> 0.0340524 3.0201e-18 -0.03 0.0229976 -0.00961538 0.18 -0.0282721 0. -0.03 >>>>>>>> 0.00816465 0.00961538 0.03 0.0244427 -0.00961538 0. 0.097822 0. 0.03 >>>>>>>> 0.00960973 0.00961538 -0.06 -0.00144509 0. 0. -1.04322e-11 0. 0. >>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>> -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>>>>> 2.60805e-12 0. >>>>>>>> [0] 0. 0.0192308 -0.0302673 0. -0.0192308 -0.0175533 0. 1.8315e-18 >>>>>>>> 0.00972927 0. -0.00641026 0.00657075 0. 0. -0.00807774 0. 0.00641026 >>>>>>>> 0.00233276 0. 
-0.00641026 0.00698363 0. 0. 0.0279492 0. 0.00641026 >>>>>>>> 0.00274564 0. 0. -0.000412882 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>> 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>>>>>> [0] -0.145175 0.00161757 0.00688063 -0.16373 -0.104736 -0.0344031 >>>>>>>> 0.288268 0.0704117 -0.055045 0.152747 -0.0152645 0.0344031 0.303413 >>>>>>>> 0.043118 0.0275225 -1.0575 -0.0540894 -0.116971 -0.14792 -0.0283824 >>>>>>>> 0.00688063 0.00274564 0.03 0. 0.466478 0.0442066 0.103209 0.300667 0.013118 >>>>>>>> 0.0275225 -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. >>>>>>>> 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>>>>> [0] -0.00383346 -0.0361182 0.00443884 -0.0747355 -0.0790071 >>>>>>>> -0.0239537 0.0567649 0.0870061 0.00119132 -0.00526446 0.0405681 -0.00168733 >>>>>>>> 0.038569 0.0959058 0.00669434 -0.0704425 -0.309066 -0.00557519 -0.0238335 >>>>>>>> -0.045728 -0.00197142 0.02 0.00960973 0.00641026 0.0442066 0.150534 >>>>>>>> 0.0141688 0.018569 0.0862961 0.000284088 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>>> 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>>> 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. >>>>>>>> [0] -0.00688063 0.0042968 -0.0268103 -0.0344031 -0.0335691 >>>>>>>> -0.0674185 -0.0137613 0.0112328 0.0294863 0.0344031 -0.00489246 0.0564359 >>>>>>>> 0.0412838 0.0100415 0.0887375 -0.158254 -0.00600133 -0.317157 -0.00688063 >>>>>>>> -0.00531859 -0.029556 0. 0.00961538 0.00274564 0.103209 0.0141688 0.177545 >>>>>>>> 0.0412838 0.000426133 0.0859919 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 >>>>>>>> [0] 0.300442 -0.0141372 -0.0412838 0.300255 -0.041431 0.0412838 >>>>>>>> 0.601523 0.0644318 -1.72388e-17 -1.50368 -0.14202 0.123851 -0.599871 >>>>>>>> 0.175568 0. -1.50311 -0.0601388 -0.123851 0.300855 0.0458628 -0.0412838 >>>>>>>> -0.000412882 -0.06 0. 0.300667 0.018569 0.0412838 1.80333 0.0132953 0. >>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>>>>> 2.60805e-12 0. 0. 1.56483e-11 0. 0. >>>>>>>> [0] -0.00868618 0.0855084 -0.000426133 -0.026882 0.084851 >>>>>>>> 0.000426133 0.0444318 0.17325 -1.17738e-19 -0.0983732 -0.432665 0.0012784 >>>>>>>> 0.115568 -0.167469 0. -0.0437857 -0.430693 -0.0012784 0.0313138 0.0869535 >>>>>>>> -0.000426133 -0.04 -0.00144509 0. 0.013118 0.0862961 0.000426133 0.0132953 >>>>>>>> 0.515413 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. >>>>>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. >>>>>>>> [0] -0.0275225 -0.000284088 0.0857668 0.0275225 0.000284088 >>>>>>>> 0.085579 -1.41488e-17 -8.91502e-20 0.172172 0.0825675 0.000852265 -0.430298 >>>>>>>> 0. 0. -0.17052 -0.0825675 -0.000852265 -0.429734 -0.0275225 -0.000284088 >>>>>>>> 0.0861797 0. 0. -0.000412882 0.0275225 0.000284088 0.0859919 0. 0. 0.515276 >>>>>>>> 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>>> 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>>>>>> 0. 0. 2.60805e-12 0. 0. 
1.56483e-11 >>>>>>>> [0] -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>> 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>> -9.96708e-10 0. 0. >>>>>>>> [0] 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>> 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>> -9.96708e-10 0. >>>>>>>> [0] 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>> 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>> -9.96708e-10 >>>>>>>> [0] -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. >>>>>>>> -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>> 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>> -9.96708e-10 0. 0. >>>>>>>> [0] 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. >>>>>>>> -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>> 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>> -9.96708e-10 0. >>>>>>>> [0] 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. >>>>>>>> -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>> 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>> -9.96708e-10 >>>>>>>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. >>>>>>>> -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>> -9.96708e-10 0. 0. >>>>>>>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. >>>>>>>> -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>> -9.96708e-10 0. >>>>>>>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. >>>>>>>> -5.31578e-06 0. 0. 
-2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>> -9.96708e-10 >>>>>>>> [0] -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>> -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>> 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>> -6.64472e-10 0. 0. >>>>>>>> [0] 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>> -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>> 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>> -6.64472e-10 0. >>>>>>>> [0] 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>> -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>> 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>> -6.64472e-10 >>>>>>>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. >>>>>>>> -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>> -6.64472e-10 0. 0. >>>>>>>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. >>>>>>>> -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>> -6.64472e-10 0. >>>>>>>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. >>>>>>>> -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>> -6.64472e-10 >>>>>>>> [0] -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. >>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>> 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>> -6.64472e-10 0. 0. >>>>>>>> [0] 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. >>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 
0. >>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>> 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>> -6.64472e-10 0. >>>>>>>> [0] 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. >>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. >>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>> 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>> -6.64472e-10 >>>>>>>> [0] 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. >>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>> -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>> -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>> 1.66118e-10 0. 0. >>>>>>>> [0] 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. >>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>> -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>> -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>> 1.66118e-10 0. >>>>>>>> [0] 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. >>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>> -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>> -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>> 1.66118e-10 >>>>>>>> [0] 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>> -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>> -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>> -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>> 1.66118e-10 0. 0. >>>>>>>> [0] 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>> -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>> -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>> -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>> 1.66118e-10 0. >>>>>>>> [0] 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>> -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>> -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>> -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>> 1.66118e-10 >>>>>>>> [0] 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>> -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. >>>>>>>> -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 
1.66118e-10 0. 0. 9.96708e-10 0. 0. >>>>>>>> 1.66118e-10 0. 0. >>>>>>>> [0] 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>> -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. >>>>>>>> -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. >>>>>>>> 1.66118e-10 0. >>>>>>>> [0] 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>> -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. >>>>>>>> -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. >>>>>>>> 1.66118e-10 >>>>>>>> [0] 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. >>>>>>>> -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>> 9.96708e-10 0. 0. >>>>>>>> [0] 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. >>>>>>>> -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>> 9.96708e-10 0. >>>>>>>> [0] 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. >>>>>>>> -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 
>>>>>>>> 9.96708e-10 >>>>>>>> [0]PETSC ERROR: #3 DMPlexMatSetClosure() line 5480 in >>>>>>>> /home/hartig/petsc/src/dm/impls/plex/plex.c >>>>>>>> [0]PETSC ERROR: #4 DMPlexComputeJacobian_Internal() line 2301 in >>>>>>>> /home/hartig/petsc/src/snes/utils/dmplexsnes.c >>>>>>>> [0]PETSC ERROR: #5 DMPlexTSComputeIJacobianFEM() line 233 in >>>>>>>> /home/hartig/petsc/src/ts/utils/dmplexts.c >>>>>>>> [0]PETSC ERROR: #6 TSComputeIJacobian_DMLocal() line 131 in >>>>>>>> /home/hartig/petsc/src/ts/utils/dmlocalts.c >>>>>>>> [0]PETSC ERROR: #7 TSComputeIJacobian() line 882 in >>>>>>>> /home/hartig/petsc/src/ts/interface/ts.c >>>>>>>> [0]PETSC ERROR: #8 SNESTSFormJacobian_Theta() line 515 in >>>>>>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>>>>> [0]PETSC ERROR: #9 SNESTSFormJacobian() line 5044 in >>>>>>>> /home/hartig/petsc/src/ts/interface/ts.c >>>>>>>> [0]PETSC ERROR: #10 SNESComputeJacobian() line 2276 in >>>>>>>> /home/hartig/petsc/src/snes/interface/snes.c >>>>>>>> [0]PETSC ERROR: #11 SNESSolve_NEWTONLS() line 222 in >>>>>>>> /home/hartig/petsc/src/snes/impls/ls/ls.c >>>>>>>> [0]PETSC ERROR: #12 SNESSolve() line 3967 in >>>>>>>> /home/hartig/petsc/src/snes/interface/snes.c >>>>>>>> [0]PETSC ERROR: #13 TS_SNESSolve() line 171 in >>>>>>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>>>>> [0]PETSC ERROR: #14 TSStep_Theta() line 211 in >>>>>>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>>>>> [0]PETSC ERROR: #15 TSStep() line 3809 in >>>>>>>> /home/hartig/petsc/src/ts/interface/ts.c >>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> >>>>> >>>> >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Mar 9 11:01:11 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 9 Mar 2017 11:01:11 -0600 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only In-Reply-To: <1489045797753.85105@marin.nl> References: <1489045797753.85105@marin.nl> Message-ID: <76CC8BF6-873D-4162-A4B0-CAD61DE594C4@mcs.anl.gov> > On Mar 9, 2017, at 1:49 AM, Klaij, Christiaan wrote: > > Barry, > > I came across the same problem and decided to use KSPSetNormType > instead of KSPSetPCSide. Do I understand correctly that CG with > KSP_NORM_UNPRECONDITIONED would be as efficient as with > KSP_NORM_PRECONDITIONED? Yes, it as efficient. 
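For example, a minimal sketch (assuming ksp is an already-created KSP whose operators have been set):

    KSPSetType(ksp,KSPCG);
    KSPSetNormType(ksp,KSP_NORM_UNPRECONDITIONED);  /* convergence test uses || b - A x || */

or equivalently the runtime options -ksp_type cg -ksp_norm_type unpreconditioned.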
> Since PC_RIGHT is not supported, I was > under the impression that the former would basically be the > latter with an additional true residual evaluation for the > convergence monitor, which would be less efficient. It is true for GMRES with left preconditioning (but not right) that computing the unpreconditioned residual norm adds a good amount of additional work but it is not true for CG, this is because CG, by its nature tracks both y = A x and B y so either is equally available to compute its norm. GMRES with left preconditioner computes B A something at each iteration but doesn't directly compute the residual (either preconditioned or not) it uses a recursive formula for the residual norm. > > Chris > >> On Mar 8, 2017, at 10:47 AM, Kong, Fande wrote: >> >> Thanks Barry, >> >> We are using "KSPSetPCSide(ksp, pcside)" in the code. I just tried "-ksp_pc_side right", and petsc did not error out. >> >> I like to understand why CG does not work with right preconditioning? Mathematically, the right preconditioning does not make sense? > > No, mathematically it makes sense to do it on the right. It is just that the PETSc code was never written to support it on the right. One reason is that CG is interesting that you can run with the true residual or the preconditioned residual with left preconditioning, hence less incentive to ever bother writing it to support right preconditioning. For completeness we should support right as well as symmetric. > > Barry > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl > > MARIN news: http://www.marin.nl/web/News/News-items/Your-future-is-Blue-Blueweek-April-1013.htm > From jed at jedbrown.org Sat Mar 11 18:28:57 2017 From: jed at jedbrown.org (Jed Brown) Date: Sat, 11 Mar 2017 17:28:57 -0700 Subject: [petsc-users] PETSc User Meeting 2017, June 14-16 in Boulder, Colorado Message-ID: <87y3wbtk1i.fsf@jedbrown.org> We'd like to invite you to join us at the 2017 PETSc User Meeting held at the University of Colorado Boulder on June 14-16, 2017. http://www.mcs.anl.gov/petsc/meetings/2017/ The first day consists of tutorials on various aspects and features of PETSc. The second and third days will be devoted to exchange, discussions, and a refinement of strategies for the future with our users. We encourage you to present work illustrating your own use of PETSc, for example in applications or in libraries built on top of PETSc. Registration for the PETSc User Meeting 2017 is free for students and $75 for non-students. We can host a maximum of 150 participants, so register soon (and by May 15). http://www.eventzilla.net/web/e/petsc-user-meeting-2017-2138890185 We are also offering low-cost lodging on campus. A lodging registration site will be available soon and announced here and on the website. Thanks to the generosity of Intel, we will be able to offer a limited number of student travel grants. We are also soliciting additional sponsors -- please contact us if you are interested. We are looking forward to seeing you in Boulder! Please contact us at petsc2017 at mcs.anl.gov if you have any questions or comments. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From ztdepyahoo at 163.com Sun Mar 12 13:31:28 2017 From: ztdepyahoo at 163.com (=?GBK?B?tqHAz8qm?=) Date: Mon, 13 Mar 2017 02:31:28 +0800 (CST) Subject: [petsc-users] How to solve the Ax=b with parallel SOR or parallel GS only. Message-ID: <5c6bf9a.171.15ac3ca21af.Coremail.ztdepyahoo@163.com> How to solve the Ax=b with parallel SOR or parallel GS only. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Mar 12 14:21:39 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 12 Mar 2017 14:21:39 -0500 Subject: [petsc-users] How to solve the Ax=b with parallel SOR or parallel GS only. In-Reply-To: <5c6bf9a.171.15ac3ca21af.Coremail.ztdepyahoo@163.com> References: <5c6bf9a.171.15ac3ca21af.Coremail.ztdepyahoo@163.com> Message-ID: <6BC4CB50-7B27-4B7D-BA9C-518991EE7BA2@mcs.anl.gov> We don't have a "true" parallel SOR or GS that uses coloring. We only support a domain decomposition type SOR/GS where each process updates its local values, then they exchange ghost points and repeat. You can us -ksp_type richardson -pc_type sor for this. http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCSOR.html lists all the options. > On Mar 12, 2017, at 1:31 PM, ??? wrote: > > How to solve the Ax=b with parallel SOR or parallel GS only. > > > From fande.kong at inl.gov Mon Mar 13 15:16:49 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Mon, 13 Mar 2017 14:16:49 -0600 Subject: [petsc-users] KSPNormType natural Message-ID: Hi All, What is the definition of KSPNormType natural? It is easy to understand none, preconditioned, and unpreconditioned, but not natural. Fande Kong, -------------- next part -------------- An HTML attachment was scrubbed... URL: From patrick.sanan at gmail.com Mon Mar 13 17:28:58 2017 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Mon, 13 Mar 2017 23:28:58 +0100 Subject: [petsc-users] KSPNormType natural In-Reply-To: References: Message-ID: This is something that arises with CG and related methods. As in the previous thread, the "left preconditioner" in CG can be though of as an inner product B which allows you to use CG with the Krylov spaces K_k(BA;Bb) instead of K_k(A;b), while retaining the property that the A-norm of the error is minimal over each successive space. The natural residual norm is the residual norm with respect to this inner product, ||r||_M = , which you might end up coding up as (b-Ax)^TB(b-Ax). Noting r = Ae, where e is the error, you could also write this as e^TABAe, as in the notes in the KSPCG implementation. In the standard preconditioned CG algorithm, you compute the natural residual norm in the process, so you can monitor convergence in this norm without computing any additional reductions/inner products/norms. On Mon, Mar 13, 2017 at 9:16 PM, Kong, Fande wrote: > Hi All, > > What is the definition of KSPNormType natural? It is easy to understand > none, preconditioned, and unpreconditioned, but not natural. > > Fande Kong, From jed at jedbrown.org Mon Mar 13 18:55:54 2017 From: jed at jedbrown.org (Jed Brown) Date: Mon, 13 Mar 2017 17:55:54 -0600 Subject: [petsc-users] KSPNormType natural In-Reply-To: References: Message-ID: <87k27swx2t.fsf@jedbrown.org> "Kong, Fande" writes: > Hi All, > > What is the definition of KSPNormType natural? It is easy to understand > none, preconditioned, and unpreconditioned, but not natural. It is the energy norm. 
It only makes sense for an SPD operator and SPD preconditioner. ||x||_{P^{-1/2} A P^{-T/2}} though of course it is not computed this way. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From Sander.Arens at ugent.be Tue Mar 14 10:17:25 2017 From: Sander.Arens at ugent.be (Sander Arens) Date: Tue, 14 Mar 2017 16:17:25 +0100 Subject: [petsc-users] Problems imposing boundary conditions In-Reply-To: References: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com> <69E95D27-72CD-4BFA-AC62-907727998DCB@gmail.com> <3383F285-34C3-48A2-BF7C-56ED80FDF236@gmail.com> Message-ID: I updated my pull request with a fix. If Matt approves and merges to master, the problem should be solved. Thanks, Sander On 9 March 2017 at 15:00, Matthew Knepley wrote: > On Thu, Mar 9, 2017 at 7:45 AM, Maximilian Hartig < > imilian.hartig at gmail.com> wrote: > >> Ok thank you, so can I just do something along the lines of: >> PetscSectionGetFieldConstraintDof(?) >> to find the overconstrained vertices and then correct them manually with >> PetscSectionSetFieldConstraintDof() >> PetscSectionSetFieldConstraintIndices() >> ? >> > > You can do exactly that. And that is the same thing I would do, although I > would probably go to > the place where they are being input > > https://bitbucket.org/petsc/petsc/src/79f3641cdf8f54d0fc7a5ae1e04e08 > 87d8c00e9b/src/dm/impls/plex/plex.c?at=master&fileviewer= > file-view-default#plex.c-3164 > > and try to filter them out. However, its a little tricky since the space > has already been allocated and will > have to be adjusted. It will likely take a more thorough going rewrite to > do it. > > Or will this mess up the Jacobian and the iFunction? >> > > Section creation is independent of these. This allows a user to do > whatever they want here instead of > using my default mechanisms. > > Thanks, > > Matt > > >> Thanks, >> Max >> >> >> On 7 Mar 2017, at 18:21, Matthew Knepley wrote: >> >> On Tue, Mar 7, 2017 at 11:11 AM, Maximilian Hartig < >> imilian.hartig at gmail.com> wrote: >> >>> >>> On 7 Mar 2017, at 16:29, Matthew Knepley wrote: >>> >>> On Tue, Mar 7, 2017 at 3:28 AM, Maximilian Hartig < >>> imilian.hartig at gmail.com> wrote: >>> >>>> It seems you are correct. In theory, the problem should not be over >>>> constrained. It is 1/4 of a simple hollow cylinder geometry with rotational >>>> symmetry around the z-axis. I restrict movement completely on the upper and >>>> lower (z) end as well as movement in x- and y- direction respectively on >>>> the symmetry planes. >>>> I am not completely sure what I am looking at with the output of >>>> -dm_petscsection_view. But these lines struck me as odd: >>>> >>>> >>>> (5167) dim 3 offset 0 constrained 0 1 1 2 >>>> (5168) dim 3 offset 6 constrained 0 1 1 2 >>>> . >>>> . >>>> . >>>> (5262) dim 3 offset 0 constrained 0 0 1 2 >>>> (5263) dim 3 offset 6 constrained 0 0 1 2 >>>> >>>> >>>> It seems that vertices that are part of the closures of both Face Sets >>>> get restricted twice in their respective degree of freedom. >>>> >>> >>> Yes, that is exactly what happens. >>> >>> >>>> This does however also happen when restricting movement in x- direction >>>> only for upper and lower faces. 
In that case without the solver producing >>>> an error: >>>> (20770) dim 3 offset 24 constrained 0 0 >>>> (20771) dim 3 offset 30 constrained 0 0 >>>> (20772) dim 3 offset 36 constrained 0 0 >>>> (20773) dim 3 offset 42 constrained 0 0 >>>> >>> >>> The fact that this does not SEGV is just luck. >>> >>> Now, I did not put in any guard against this because I was not sure what >>> should happen. We could error if a local index is repeated, or >>> we could ignore it. This seems unsafe if you try to constrain it with >>> two different values, but there is no way for me to tell if the values are >>> compatible. Thus I just believed whatever the user told me. >>> >>> >>> What is the intention here? It would be straightforward to ignore >>> duplicates I guess. >>> >>> Yes, ignoring duplicates would solve the problem then. I can think of no >>> example where imposing two different Dirichlet BC on the same DOF of the >>> same vertex would make sense (I might be wrong of course). That means the >>> only issue is to determine wether the first or the second BC is the correct >>> one to be imposed. >>> I don?t know how I could filter out the vertices in question from the >>> Label. I use GMSH to construct my meshes and could create a label for the >>> edges without too much effort. But I cannot see an easy way to exclude them >>> when imposing the BC. >>> I tried to figure out where PETSC actually imposes the BC but got lost a >>> bit in the source. Could you kindly point me towards the location? >>> >> >> It is in stages. >> >> 1) You make a structure with AddBoundary() that has a Label and function >> for boundary values >> >> 2) The PetscSection gets created with stores which points have >> constraints and which components they affect >> >> 3) When global Vecs are made, these constraints are left out >> >> 4) When local Vecs are made, they are left in >> >> 5) DMPlexInsertBoundaryValues() is called on local Vecs, and puts in the >> values from your functions. This usually happens >> when you copy the solutions values from the global Vec to a local Vec >> to being assembly. >> >> Thanks, >> >> Matt >> >> >>> Thanks, >>> Max >>> >>> >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thanks, >>>> Max >>>> >>>> On 6 Mar 2017, at 14:43, Matthew Knepley wrote: >>>> >>>> On Mon, Mar 6, 2017 at 8:38 AM, Maximilian Hartig < >>>> imilian.hartig at gmail.com> wrote: >>>> >>>>> Of course, please find the source as well as the mesh attached below. >>>>> I run with: >>>>> >>>>> -def_petscspace_order 2 -vel_petscspace_order 2 -snes_monitor >>>>> -snes_converged_reason -ksp_converged_reason -ksp_monitor _true_residual >>>>> -ksp_type fgmres -pc_type sor >>>>> >>>> >>>> This sounds like over-constraining a point to me. I will try and run it >>>> soon, but I have a full schedule this week. The easiest >>>> way to see if this is happening should be to print out the Section that >>>> gets made >>>> >>>> -dm_petscsection_view >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Thanks, >>>>> Max >>>>> >>>>> >>>>> >>>>> >>>>> On 4 Mar 2017, at 11:34, Sander Arens wrote: >>>>> >>>>> Hmm, strange you also get the error in serial. Can you maybe send a >>>>> minimal working which demonstrates the error? >>>>> >>>>> Thanks, >>>>> Sander >>>>> >>>>> On 3 March 2017 at 23:07, Maximilian Hartig >>>>> wrote: >>>>> >>>>>> Yes Sander, your assessment is correct. I use DMPlex and specify the >>>>>> BC using DMLabel. I do however get this error also when running in serial. 
>>>>>> >>>>>> Thanks, >>>>>> Max >>>>>> >>>>>> On 3 Mar 2017, at 22:14, Matthew Knepley wrote: >>>>>> >>>>>> On Fri, Mar 3, 2017 at 12:56 PM, Sander Arens >>>>>> wrote: >>>>>> >>>>>>> Max, >>>>>>> >>>>>>> I'm assuming you use DMPlex for your mesh? If so, did you only >>>>>>> specify the faces in the DMLabel (and not vertices or edges). Do you get >>>>>>> this error only in parallel? >>>>>>> >>>>>>> If so, I can confirm this bug. I submitted a pull request for this >>>>>>> yesterday. >>>>>>> >>>>>> >>>>>> Yep, I saw Sander's pull request. I will get in merged in tomorrow >>>>>> when I get home to Houston. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> On 3 March 2017 at 18:43, Lukas van de Wiel < >>>>>>> lukas.drinkt.thee at gmail.com> wrote: >>>>>>> >>>>>>>> You have apparently preallocated the non-zeroes of you matrix, and >>>>>>>> the room was insufficient to accommodate all your equations. >>>>>>>> >>>>>>>> What happened after you tried: >>>>>>>> >>>>>>>> MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) >>>>>>>> >>>>>>>> >>>>>>>> Cheers >>>>>>>> Lukas >>>>>>>> >>>>>>>> On Fri, Mar 3, 2017 at 6:37 PM, Maximilian Hartig < >>>>>>>> imilian.hartig at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hello, >>>>>>>>> >>>>>>>>> I am working on a transient structural FEM code with PETSc. I >>>>>>>>> managed to create a slow but functioning program with the use of petscFE >>>>>>>>> and a TS solver. The code runs fine until I try to restrict movement in all >>>>>>>>> three spatial directions for one face. I then get the error which is >>>>>>>>> attached below. >>>>>>>>> So apparently DMPlexMatSetClosure tries to write/read beyond what >>>>>>>>> was priorly allocated. I do however not call MatSeqAIJSetPreallocation >>>>>>>>> myself in the code. So I?m unsure where to start looking for the bug. In my >>>>>>>>> understanding, PETSc should know from the DM how much space to allocate. >>>>>>>>> Could you kindly give me a hint? >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> >>>>>>>>> Max >>>>>>>>> >>>>>>>>> 0 SNES Function norm 2.508668036663e-06 >>>>>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>>>>> -------------------------------------------------------------- >>>>>>>>> [0]PETSC ERROR: Argument out of range >>>>>>>>> [0]PETSC ERROR: New nonzero at (41754,5) caused a malloc >>>>>>>>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) >>>>>>>>> to turn off this check >>>>>>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/pet >>>>>>>>> sc/documentation/faq.html for trouble shooting. 
>>>>>>>>> [0]PETSC ERROR: Petsc Development GIT revision: >>>>>>>>> v3.7.5-3223-g99077fc GIT Date: 2017-02-28 13:41:43 -0600 >>>>>>>>> [0]PETSC ERROR: ./S3 on a arch-linux-gnu-intel named XXXXXXX by >>>>>>>>> hartig Fri Mar 3 17:55:57 2017 >>>>>>>>> [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-gnu-intel >>>>>>>>> --with-cc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicc >>>>>>>>> --with-cxx=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicpc >>>>>>>>> --with-fc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiifort >>>>>>>>> --download-ml >>>>>>>>> [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 455 in >>>>>>>>> /home/hartig/petsc/src/mat/impls/aij/seq/aij.c >>>>>>>>> [0]PETSC ERROR: #2 MatSetValues() line 1270 in >>>>>>>>> /home/hartig/petsc/src/mat/interface/matrix.c >>>>>>>>> [0]PETSC ERROR: [0]ERROR in DMPlexMatSetClosure >>>>>>>>> [0]mat for sieve point 60 >>>>>>>>> [0]mat row indices[0] = 41754 >>>>>>>>> [0]mat row indices[1] = 41755 >>>>>>>>> [0]mat row indices[2] = 41756 >>>>>>>>> [0]mat row indices[3] = 41760 >>>>>>>>> [0]mat row indices[4] = 41761 >>>>>>>>> [0]mat row indices[5] = 41762 >>>>>>>>> [0]mat row indices[6] = 41766 >>>>>>>>> [0]mat row indices[7] = -41768 >>>>>>>>> [0]mat row indices[8] = 41767 >>>>>>>>> [0]mat row indices[9] = 41771 >>>>>>>>> [0]mat row indices[10] = -41773 >>>>>>>>> [0]mat row indices[11] = 41772 >>>>>>>>> [0]mat row indices[12] = 41776 >>>>>>>>> [0]mat row indices[13] = 41777 >>>>>>>>> [0]mat row indices[14] = 41778 >>>>>>>>> [0]mat row indices[15] = 41782 >>>>>>>>> [0]mat row indices[16] = -41784 >>>>>>>>> [0]mat row indices[17] = 41783 >>>>>>>>> [0]mat row indices[18] = 261 >>>>>>>>> [0]mat row indices[19] = -263 >>>>>>>>> [0]mat row indices[20] = 262 >>>>>>>>> [0]mat row indices[21] = 24318 >>>>>>>>> [0]mat row indices[22] = 24319 >>>>>>>>> [0]mat row indices[23] = 24320 >>>>>>>>> [0]mat row indices[24] = -7 >>>>>>>>> [0]mat row indices[25] = -8 >>>>>>>>> [0]mat row indices[26] = 6 >>>>>>>>> [0]mat row indices[27] = 1630 >>>>>>>>> [0]mat row indices[28] = -1632 >>>>>>>>> [0]mat row indices[29] = 1631 >>>>>>>>> [0]mat row indices[30] = 41757 >>>>>>>>> [0]mat row indices[31] = 41758 >>>>>>>>> [0]mat row indices[32] = 41759 >>>>>>>>> [0]mat row indices[33] = 41763 >>>>>>>>> [0]mat row indices[34] = 41764 >>>>>>>>> [0]mat row indices[35] = 41765 >>>>>>>>> [0]mat row indices[36] = 41768 >>>>>>>>> [0]mat row indices[37] = 41769 >>>>>>>>> [0]mat row indices[38] = 41770 >>>>>>>>> [0]mat row indices[39] = 41773 >>>>>>>>> [0]mat row indices[40] = 41774 >>>>>>>>> [0]mat row indices[41] = 41775 >>>>>>>>> [0]mat row indices[42] = 41779 >>>>>>>>> [0]mat row indices[43] = 41780 >>>>>>>>> [0]mat row indices[44] = 41781 >>>>>>>>> [0]mat row indices[45] = 41784 >>>>>>>>> [0]mat row indices[46] = 41785 >>>>>>>>> [0]mat row indices[47] = 41786 >>>>>>>>> [0]mat row indices[48] = 263 >>>>>>>>> [0]mat row indices[49] = 264 >>>>>>>>> [0]mat row indices[50] = 265 >>>>>>>>> [0]mat row indices[51] = 24321 >>>>>>>>> [0]mat row indices[52] = 24322 >>>>>>>>> [0]mat row indices[53] = 24323 >>>>>>>>> [0]mat row indices[54] = 5 >>>>>>>>> [0]mat row indices[55] = 6 >>>>>>>>> [0]mat row indices[56] = 7 >>>>>>>>> [0]mat row indices[57] = 1632 >>>>>>>>> [0]mat row indices[58] = 1633 >>>>>>>>> [0]mat row indices[59] = 1634 >>>>>>>>> [0] 1.29801 0.0998428 -0.275225 1.18171 -0.0093323 0.055045 >>>>>>>>> 1.18146 0.00525527 -0.11009 -0.588378 0.264666 -0.0275225 -2.39586 >>>>>>>>> -0.210511 0.22018 -0.621071 0.0500786 0.137613 -0.180869 
-0.0974804 >>>>>>>>> 0.0344031 -0.0302673 -0.09 0. -0.145175 -0.00383346 -0.00688063 0.300442 >>>>>>>>> -0.00868618 -0.0275225 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. >>>>>>>>> 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>> [0] 0.0998428 0.590663 -0.0320009 0.0270594 0.360043 0.0297282 >>>>>>>>> -0.0965489 0.270936 -0.0652389 0.32647 -0.171351 -0.00845384 -0.206902 >>>>>>>>> -0.657189 0.0279137 0.0500786 -0.197561 -0.0160508 -0.12748 -0.138996 >>>>>>>>> 0.0408591 -0.06 -0.105935 0.0192308 0.00161757 -0.0361182 0.0042968 >>>>>>>>> -0.0141372 0.0855084 -0.000284088 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. >>>>>>>>> [0] -0.275225 -0.0320009 0.527521 -0.055045 0.0285918 0.234796 >>>>>>>>> -0.165135 -0.0658071 0.322754 0.0275225 -0.0207062 -0.114921 0.33027 >>>>>>>>> 0.0418706 -0.678455 0.137613 -0.0160508 -0.235826 0.0344031 0.0312437 >>>>>>>>> -0.0845583 0. 0.0288462 -0.0302673 0.00688063 0.00443884 -0.0268103 >>>>>>>>> -0.0412838 -0.000426133 0.0857668 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 >>>>>>>>> [0] 1.18171 0.0270594 -0.055045 1.29651 -0.0821157 0.275225 1.1468 >>>>>>>>> -0.0675282 0.11009 -0.637271 0.141058 -0.137613 -2.37741 -0.0649437 >>>>>>>>> -0.22018 -0.588378 0.24647 0.0275225 -0.140937 -0.00838243 0.00688063 >>>>>>>>> -0.0175533 -0.09 0. -0.16373 -0.0747355 -0.0344031 0.300255 -0.026882 >>>>>>>>> 0.0275225 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 >>>>>>>>> 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>>>> 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>> [0] -0.0093323 0.360043 0.0285918 -0.0821157 0.585404 -0.0263191 >>>>>>>>> -0.205724 0.149643 0.0652389 0.141058 -0.254263 0.0452109 0.011448 >>>>>>>>> -0.592598 -0.0279137 0.344666 -0.171351 -0.0207062 0.00616654 -0.0212853 >>>>>>>>> -0.0115868 -0.06 -0.0614365 -0.0192308 -0.104736 -0.0790071 -0.0335691 >>>>>>>>> -0.041431 0.084851 0.000284088 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>>>>>> [0] 0.055045 0.0297282 0.234796 0.275225 -0.0263191 0.526019 >>>>>>>>> 0.165135 0.0658071 0.288099 -0.137613 0.0452109 -0.252027 -0.33027 >>>>>>>>> -0.0418706 -0.660001 -0.0275225 -0.00845384 -0.114921 -0.00688063 >>>>>>>>> -0.0117288 -0.0225723 0. -0.0288462 -0.0175533 -0.0344031 -0.0239537 >>>>>>>>> -0.0674185 0.0412838 0.000426133 0.085579 0. 0. 4.17288e-11 0. 0. >>>>>>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>> 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>> -1.56483e-11 >>>>>>>>> [0] 1.18146 -0.0965489 -0.165135 1.1468 -0.205724 0.165135 3.70665 >>>>>>>>> 0.626591 3.1198e-14 -2.37741 0.0332522 -0.275225 -2.44501 -0.417727 >>>>>>>>> 4.66207e-17 -2.39586 -0.148706 0.275225 0.283843 0.0476669 0.0137613 >>>>>>>>> 0.00972927 0.06 0. 0.288268 0.0567649 -0.0137613 0.601523 0.0444318 >>>>>>>>> -1.2387e-17 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 
4.17288e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>> [0] 0.00525527 0.270936 -0.0658071 -0.0675282 0.149643 0.0658071 >>>>>>>>> 0.626591 1.29259 -0.02916 -0.0867478 -0.592598 -0.0413024 -0.417727 >>>>>>>>> -0.829208 6.46318e-18 -0.268706 -0.657189 0.0413024 0.03402 0.0715157 >>>>>>>>> 0.0179272 0.04 0.0340524 1.77708e-18 0.0704117 0.0870061 0.0112328 >>>>>>>>> 0.0644318 0.17325 -1.41666e-19 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>>>>>> [0] -0.11009 -0.0652389 0.322754 0.11009 0.0652389 0.288099 >>>>>>>>> 3.12405e-14 -0.02916 1.21876 -0.275225 -0.0284819 -0.660001 9.50032e-17 >>>>>>>>> 9.55728e-18 -0.727604 0.275225 0.0284819 -0.678455 0.055045 0.0279687 >>>>>>>>> 0.0250605 0. 1.71863e-18 0.00972927 -0.055045 0.00119132 0.0294863 >>>>>>>>> -1.47451e-17 -1.90582e-19 0.172172 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. >>>>>>>>> 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 >>>>>>>>> [0] -0.588378 0.32647 0.0275225 -0.637271 0.141058 -0.137613 >>>>>>>>> -2.37741 -0.0867478 -0.275225 3.68138 0.0265907 3.13395e-14 1.1468 >>>>>>>>> -0.107528 0.11009 1.18171 0.00886356 -3.13222e-14 -1.06248 -0.175069 >>>>>>>>> 0.158254 0.00657075 -0.03 0. 0.152747 -0.00526446 0.0344031 -1.50368 >>>>>>>>> -0.0983732 0.0825675 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>> [0] 0.264666 -0.171351 -0.0207062 0.141058 -0.254263 0.0452109 >>>>>>>>> 0.0332522 -0.592598 -0.0284819 0.0265907 1.20415 -0.0932626 -0.165724 >>>>>>>>> 0.149643 0.0524184 0.00886356 0.360043 0.0419805 -0.131422 -0.326529 >>>>>>>>> 0.0132913 -0.02 0.0229976 -0.00641026 -0.0152645 0.0405681 -0.00489246 >>>>>>>>> -0.14202 -0.432665 0.000852265 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>>>>> 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >>>>>>>>> [0] -0.0275225 -0.00845384 -0.114921 -0.137613 0.0452109 -0.252027 >>>>>>>>> -0.275225 -0.0413024 -0.660001 3.13785e-14 -0.0932626 1.19349 0.165135 >>>>>>>>> 0.0786276 0.288099 -3.12866e-14 0.0163395 0.234796 0.116971 0.0128652 >>>>>>>>> -0.322147 0. -0.00961538 0.00657075 0.0344031 -0.00168733 0.0564359 >>>>>>>>> 0.123851 0.0012784 -0.430298 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>>>>> 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>>>> [0] -2.39586 -0.206902 0.33027 -2.37741 0.011448 -0.33027 -2.44501 >>>>>>>>> -0.417727 6.38053e-17 1.1468 -0.165724 0.165135 4.88671 0.435454 0. 1.18146 >>>>>>>>> -0.0565489 -0.165135 0.307839 0.0658628 -0.0412838 -0.00807774 0.18 0. >>>>>>>>> 0.303413 0.038569 0.0412838 -0.599871 0.115568 0. 4.17288e-11 0. 0. >>>>>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>>>> 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>> -1.04322e-11 0. 0. 
>>>>>>>>> [0] -0.210511 -0.657189 0.0418706 -0.0649437 -0.592598 -0.0418706 >>>>>>>>> -0.417727 -0.829208 6.30468e-18 -0.107528 0.149643 0.0786276 0.435454 >>>>>>>>> 1.64686 0. -0.0347447 0.270936 -0.0786276 0.0613138 0.111396 -0.0100415 >>>>>>>>> 0.12 -0.0282721 0. 0.043118 0.0959058 0.0100415 0.175568 -0.167469 0. 0. >>>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. >>>>>>>>> [0] 0.22018 0.0279137 -0.678455 -0.22018 -0.0279137 -0.660001 >>>>>>>>> 4.70408e-17 7.53383e-18 -0.727604 0.11009 0.0524184 0.288099 0. 0. 1.4519 >>>>>>>>> -0.11009 -0.0524184 0.322754 -0.0275225 -0.00669434 0.0931634 0. 0. >>>>>>>>> -0.00807774 0.0275225 0.00669434 0.0887375 0. 0. -0.17052 0. 0. 4.17288e-11 >>>>>>>>> 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. >>>>>>>>> 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>>>>>> 0. -1.04322e-11 >>>>>>>>> [0] -0.621071 0.0500786 0.137613 -0.588378 0.344666 -0.0275225 >>>>>>>>> -2.39586 -0.268706 0.275225 1.18171 0.00886356 -3.12954e-14 1.18146 >>>>>>>>> -0.0347447 -0.11009 3.64748 0.0265907 3.12693e-14 0.152935 0.0174804 >>>>>>>>> -0.0344031 0.00233276 -0.03 0. -1.0575 -0.0704425 -0.158254 -1.50311 >>>>>>>>> -0.0437857 -0.0825675 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>> [0] 0.0500786 -0.197561 -0.0160508 0.24647 -0.171351 -0.00845384 >>>>>>>>> -0.148706 -0.657189 0.0284819 0.00886356 0.360043 0.0163395 -0.0565489 >>>>>>>>> 0.270936 -0.0524184 0.0265907 1.08549 0.0349425 0.00748035 0.0412255 >>>>>>>>> -0.00239755 -0.02 0.00816465 0.00641026 -0.0540894 -0.309066 -0.00600133 >>>>>>>>> -0.0601388 -0.430693 -0.000852265 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. >>>>>>>>> [0] 0.137613 -0.0160508 -0.235826 0.0275225 -0.0207062 -0.114921 >>>>>>>>> 0.275225 0.0413024 -0.678455 -3.13299e-14 0.0419805 0.234796 -0.165135 >>>>>>>>> -0.0786276 0.322754 3.12753e-14 0.0349425 1.15959 -0.0344031 -0.00560268 >>>>>>>>> 0.0566238 0. 0.00961538 0.00233276 -0.116971 -0.00557519 -0.317157 >>>>>>>>> -0.123851 -0.0012784 -0.429734 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 >>>>>>>>> [0] -0.180869 -0.12748 0.0344031 -0.140937 0.00616654 -0.00688063 >>>>>>>>> 0.283843 0.03402 0.055045 -1.06248 -0.131422 0.116971 0.307839 0.0613138 >>>>>>>>> -0.0275225 0.152935 0.00748035 -0.0344031 0.479756 0.112441 -0.103209 >>>>>>>>> 0.00698363 0.03 0. -0.14792 -0.0238335 -0.00688063 0.300855 0.0313138 >>>>>>>>> -0.0275225 -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 1.56483e-11 0. 0. >>>>>>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 
>>>>>>>>> [0] -0.0974804 -0.138996 0.0312437 -0.00838243 -0.0212853 >>>>>>>>> -0.0117288 0.0476669 0.0715157 0.0279687 -0.175069 -0.326529 0.0128652 >>>>>>>>> 0.0658628 0.111396 -0.00669434 0.0174804 0.0412255 -0.00560268 0.112441 >>>>>>>>> 0.197005 -0.0360388 0.02 0.0244427 -0.00641026 -0.0283824 -0.045728 >>>>>>>>> -0.00531859 0.0458628 0.0869535 -0.000284088 0. -1.04322e-11 0. 0. >>>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>> -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>>>>>> 2.60805e-12 0. >>>>>>>>> [0] 0.0344031 0.0408591 -0.0845583 0.00688063 -0.0115868 >>>>>>>>> -0.0225723 0.0137613 0.0179272 0.0250605 0.158254 0.0132913 -0.322147 >>>>>>>>> -0.0412838 -0.0100415 0.0931634 -0.0344031 -0.00239755 0.0566238 -0.103209 >>>>>>>>> -0.0360388 0.190822 0. -0.00961538 0.00698363 0.00688063 -0.00197142 >>>>>>>>> -0.029556 -0.0412838 -0.000426133 0.0861797 0. 0. -1.04322e-11 0. 0. >>>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>> -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>>>>>> 2.60805e-12 >>>>>>>>> [0] -0.0302673 -0.06 0. -0.0175533 -0.06 0. 0.00972927 0.04 0. >>>>>>>>> 0.00657075 -0.02 0. -0.00807774 0.12 0. 0.00233276 -0.02 0. 0.00698363 0.02 >>>>>>>>> 0. 0.0279492 0. 0. 0.00274564 0.02 0. -0.000412882 -0.04 0. -1.04322e-11 0. >>>>>>>>> 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >>>>>>>>> 0. -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>>>>>> 2.60805e-12 0. 0. >>>>>>>>> [0] -0.09 -0.105935 0.0288462 -0.09 -0.0614365 -0.0288462 0.06 >>>>>>>>> 0.0340524 3.0201e-18 -0.03 0.0229976 -0.00961538 0.18 -0.0282721 0. -0.03 >>>>>>>>> 0.00816465 0.00961538 0.03 0.0244427 -0.00961538 0. 0.097822 0. 0.03 >>>>>>>>> 0.00960973 0.00961538 -0.06 -0.00144509 0. 0. -1.04322e-11 0. 0. >>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>> -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>>>>>> 2.60805e-12 0. >>>>>>>>> [0] 0. 0.0192308 -0.0302673 0. -0.0192308 -0.0175533 0. 1.8315e-18 >>>>>>>>> 0.00972927 0. -0.00641026 0.00657075 0. 0. -0.00807774 0. 0.00641026 >>>>>>>>> 0.00233276 0. -0.00641026 0.00698363 0. 0. 0.0279492 0. 0.00641026 >>>>>>>>> 0.00274564 0. 0. -0.000412882 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>> 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>>>>>>> [0] -0.145175 0.00161757 0.00688063 -0.16373 -0.104736 -0.0344031 >>>>>>>>> 0.288268 0.0704117 -0.055045 0.152747 -0.0152645 0.0344031 0.303413 >>>>>>>>> 0.043118 0.0275225 -1.0575 -0.0540894 -0.116971 -0.14792 -0.0283824 >>>>>>>>> 0.00688063 0.00274564 0.03 0. 0.466478 0.0442066 0.103209 0.300667 0.013118 >>>>>>>>> 0.0275225 -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. >>>>>>>>> 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>>>>>> [0] -0.00383346 -0.0361182 0.00443884 -0.0747355 -0.0790071 >>>>>>>>> -0.0239537 0.0567649 0.0870061 0.00119132 -0.00526446 0.0405681 -0.00168733 >>>>>>>>> 0.038569 0.0959058 0.00669434 -0.0704425 -0.309066 -0.00557519 -0.0238335 >>>>>>>>> -0.045728 -0.00197142 0.02 0.00960973 0.00641026 0.0442066 0.150534 >>>>>>>>> 0.0141688 0.018569 0.0862961 0.000284088 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>>>> 0. 0. 
-1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>>>> 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. >>>>>>>>> [0] -0.00688063 0.0042968 -0.0268103 -0.0344031 -0.0335691 >>>>>>>>> -0.0674185 -0.0137613 0.0112328 0.0294863 0.0344031 -0.00489246 0.0564359 >>>>>>>>> 0.0412838 0.0100415 0.0887375 -0.158254 -0.00600133 -0.317157 -0.00688063 >>>>>>>>> -0.00531859 -0.029556 0. 0.00961538 0.00274564 0.103209 0.0141688 0.177545 >>>>>>>>> 0.0412838 0.000426133 0.0859919 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 >>>>>>>>> [0] 0.300442 -0.0141372 -0.0412838 0.300255 -0.041431 0.0412838 >>>>>>>>> 0.601523 0.0644318 -1.72388e-17 -1.50368 -0.14202 0.123851 -0.599871 >>>>>>>>> 0.175568 0. -1.50311 -0.0601388 -0.123851 0.300855 0.0458628 -0.0412838 >>>>>>>>> -0.000412882 -0.06 0. 0.300667 0.018569 0.0412838 1.80333 0.0132953 0. >>>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>>>>>> 2.60805e-12 0. 0. 1.56483e-11 0. 0. >>>>>>>>> [0] -0.00868618 0.0855084 -0.000426133 -0.026882 0.084851 >>>>>>>>> 0.000426133 0.0444318 0.17325 -1.17738e-19 -0.0983732 -0.432665 0.0012784 >>>>>>>>> 0.115568 -0.167469 0. -0.0437857 -0.430693 -0.0012784 0.0313138 0.0869535 >>>>>>>>> -0.000426133 -0.04 -0.00144509 0. 0.013118 0.0862961 0.000426133 0.0132953 >>>>>>>>> 0.515413 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. >>>>>>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. >>>>>>>>> [0] -0.0275225 -0.000284088 0.0857668 0.0275225 0.000284088 >>>>>>>>> 0.085579 -1.41488e-17 -8.91502e-20 0.172172 0.0825675 0.000852265 -0.430298 >>>>>>>>> 0. 0. -0.17052 -0.0825675 -0.000852265 -0.429734 -0.0275225 -0.000284088 >>>>>>>>> 0.0861797 0. 0. -0.000412882 0.0275225 0.000284088 0.0859919 0. 0. 0.515276 >>>>>>>>> 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>>>> 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>>>>>>> 0. 0. 2.60805e-12 0. 0. 1.56483e-11 >>>>>>>>> [0] -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -9.96708e-10 0. 0. >>>>>>>>> [0] 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -9.96708e-10 0. >>>>>>>>> [0] 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 
>>>>>>>>> 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -9.96708e-10 >>>>>>>>> [0] -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. >>>>>>>>> -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -9.96708e-10 0. 0. >>>>>>>>> [0] 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. >>>>>>>>> -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -9.96708e-10 0. >>>>>>>>> [0] 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. >>>>>>>>> -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -9.96708e-10 >>>>>>>>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. >>>>>>>>> -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -9.96708e-10 0. 0. >>>>>>>>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. >>>>>>>>> -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -9.96708e-10 0. >>>>>>>>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. >>>>>>>>> -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -9.96708e-10 >>>>>>>>> [0] -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>>> -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -6.64472e-10 0. 0. >>>>>>>>> [0] 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>>> -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. 
>>>>>>>>> -6.64472e-10 0. >>>>>>>>> [0] 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>>> -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -6.64472e-10 >>>>>>>>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. >>>>>>>>> -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -6.64472e-10 0. 0. >>>>>>>>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. >>>>>>>>> -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -6.64472e-10 0. >>>>>>>>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. >>>>>>>>> -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -6.64472e-10 >>>>>>>>> [0] -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. >>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -6.64472e-10 0. 0. >>>>>>>>> [0] 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. >>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -6.64472e-10 0. >>>>>>>>> [0] 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. >>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. >>>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>> 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -6.64472e-10 >>>>>>>>> [0] 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. >>>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>> 1.66118e-10 0. 0. >>>>>>>>> [0] 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 
1.32894e-06 0. 0. >>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. >>>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>> 1.66118e-10 0. >>>>>>>>> [0] 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. >>>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>> 1.66118e-10 >>>>>>>>> [0] 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>> -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>> 1.66118e-10 0. 0. >>>>>>>>> [0] 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>> -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>> 1.66118e-10 0. >>>>>>>>> [0] 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>> -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>> 1.66118e-10 >>>>>>>>> [0] 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>> -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. >>>>>>>>> 1.66118e-10 0. 0. >>>>>>>>> [0] 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>> -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. >>>>>>>>> 1.66118e-10 0. >>>>>>>>> [0] 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>> -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. >>>>>>>>> 1.66118e-10 >>>>>>>>> [0] 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 
-3.32236e-07 0. 0. >>>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>> 9.96708e-10 0. 0. >>>>>>>>> [0] 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>> 9.96708e-10 0. >>>>>>>>> [0] 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. >>>>>>>>> -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>> 9.96708e-10 >>>>>>>>> [0]PETSC ERROR: #3 DMPlexMatSetClosure() line 5480 in >>>>>>>>> /home/hartig/petsc/src/dm/impls/plex/plex.c >>>>>>>>> [0]PETSC ERROR: #4 DMPlexComputeJacobian_Internal() line 2301 in >>>>>>>>> /home/hartig/petsc/src/snes/utils/dmplexsnes.c >>>>>>>>> [0]PETSC ERROR: #5 DMPlexTSComputeIJacobianFEM() line 233 in >>>>>>>>> /home/hartig/petsc/src/ts/utils/dmplexts.c >>>>>>>>> [0]PETSC ERROR: #6 TSComputeIJacobian_DMLocal() line 131 in >>>>>>>>> /home/hartig/petsc/src/ts/utils/dmlocalts.c >>>>>>>>> [0]PETSC ERROR: #7 TSComputeIJacobian() line 882 in >>>>>>>>> /home/hartig/petsc/src/ts/interface/ts.c >>>>>>>>> [0]PETSC ERROR: #8 SNESTSFormJacobian_Theta() line 515 in >>>>>>>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>>>>>> [0]PETSC ERROR: #9 SNESTSFormJacobian() line 5044 in >>>>>>>>> /home/hartig/petsc/src/ts/interface/ts.c >>>>>>>>> [0]PETSC ERROR: #10 SNESComputeJacobian() line 2276 in >>>>>>>>> /home/hartig/petsc/src/snes/interface/snes.c >>>>>>>>> [0]PETSC ERROR: #11 SNESSolve_NEWTONLS() line 222 in >>>>>>>>> /home/hartig/petsc/src/snes/impls/ls/ls.c >>>>>>>>> [0]PETSC ERROR: #12 SNESSolve() line 3967 in >>>>>>>>> /home/hartig/petsc/src/snes/interface/snes.c >>>>>>>>> [0]PETSC ERROR: #13 TS_SNESSolve() line 171 in >>>>>>>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>>>>>> [0]PETSC ERROR: #14 TSStep_Theta() line 211 in >>>>>>>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>>>>>> [0]PETSC ERROR: #15 TSStep() line 3809 in >>>>>>>>> /home/hartig/petsc/src/ts/interface/ts.c >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. 
>>> -- Norbert Wiener >>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fangbowa at buffalo.edu Tue Mar 14 17:32:22 2017 From: fangbowa at buffalo.edu (Fangbo Wang) Date: Tue, 14 Mar 2017 18:32:22 -0400 Subject: [petsc-users] How to determine a reasonable relative tolerance to iteratively solve a linear system of equations? Message-ID: Hi, I know this is not a problem specific to PETSc, but I have this doubt for a long time and want to ask the experts here. Suppose I have a very large linear system of equations with 1.5 million unkowns. It is common to use relative tolerance as a stopping criteria. For a small linear system, I usually use 1e-6, or 1e-8, or 1e-10, etc. But for a very large linear system, do I need to use a relative tolerance much smaller than the previous I use? (Theoretically I think the relative tolerance has nothing related to system size). However, something very weird happens. I used 1e-7 as my relative tolerance for my linear system with 1.5 million unknows using conjugate gradient method with jacobi preconditioner, the solver can not converge to 1e-7 with 10,000 iterations. I can use a larger tolerance but the solution is not good. Any one have some advices? Thank you very much! Best regards, Fangbo Wang -- Fangbo Wang, PhD student Stochastic Geomechanics Research Group Department of Civil, Structural and Environmental Engineering University at Buffalo Email: *fangbowa at buffalo.edu * -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 14 17:42:27 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 14 Mar 2017 17:42:27 -0500 Subject: [petsc-users] How to determine a reasonable relative tolerance to iteratively solve a linear system of equations? In-Reply-To: References: Message-ID: <746BD51B-0986-4F29-8601-BBAA18637074@mcs.anl.gov> > On Mar 14, 2017, at 5:32 PM, Fangbo Wang wrote: > > Hi, > > I know this is not a problem specific to PETSc, but I have this doubt for a long time and want to ask the experts here. > > Suppose I have a very large linear system of equations with 1.5 million unkowns. It is common to use relative tolerance as a stopping criteria. > > For a small linear system, I usually use 1e-6, or 1e-8, or 1e-10, etc. But for a very large linear system, do I need to use a relative tolerance much smaller than the previous I use? (Theoretically I think the relative tolerance has nothing related to system size). > > However, something very weird happens. I used 1e-7 as my relative tolerance for my linear system with 1.5 million unknows using conjugate gradient method with jacobi preconditioner, the solver can not converge to 1e-7 with 10,000 iterations. I can use a larger tolerance but the solution is not good. This is not particularly weird. Jacobi preconditioning can perform very poorly depending on the structure of your matrix. So first you need a better preconditioner. Where does the matrix come from? This helps determine what preconditioner to use. 
For example, is it a pressure solve, a structural mechanics problem, a Stokes-like problem, a fully implicit cdf problem. Barry > > Any one have some advices? Thank you very much! > > Best regards, > > Fangbo Wang > > -- > Fangbo Wang, PhD student > Stochastic Geomechanics Research Group > Department of Civil, Structural and Environmental Engineering > University at Buffalo > Email: fangbowa at buffalo.edu From fangbowa at buffalo.edu Tue Mar 14 20:46:58 2017 From: fangbowa at buffalo.edu (Fangbo Wang) Date: Tue, 14 Mar 2017 21:46:58 -0400 Subject: [petsc-users] How to determine a reasonable relative tolerance to iteratively solve a linear system of equations? In-Reply-To: <746BD51B-0986-4F29-8601-BBAA18637074@mcs.anl.gov> References: <746BD51B-0986-4F29-8601-BBAA18637074@mcs.anl.gov> Message-ID: it is a solid mechanics problem. On Tue, Mar 14, 2017 at 6:42 PM, Barry Smith wrote: > > > On Mar 14, 2017, at 5:32 PM, Fangbo Wang wrote: > > > > Hi, > > > > I know this is not a problem specific to PETSc, but I have this doubt > for a long time and want to ask the experts here. > > > > Suppose I have a very large linear system of equations with 1.5 million > unkowns. It is common to use relative tolerance as a stopping criteria. > > > > For a small linear system, I usually use 1e-6, or 1e-8, or 1e-10, etc. > But for a very large linear system, do I need to use a relative tolerance > much smaller than the previous I use? (Theoretically I think the relative > tolerance has nothing related to system size). > > > > However, something very weird happens. I used 1e-7 as my relative > tolerance for my linear system with 1.5 million unknows using conjugate > gradient method with jacobi preconditioner, the solver can not converge to > 1e-7 with 10,000 iterations. I can use a larger tolerance but the solution > is not good. > > This is not particularly weird. Jacobi preconditioning can perform very > poorly depending on the structure of your matrix. > > So first you need a better preconditioner. Where does the matrix come > from? This helps determine what preconditioner to use. For example, is it a > pressure solve, a structural mechanics problem, a Stokes-like problem, a > fully implicit cdf problem. > > Barry > > > > > Any one have some advices? Thank you very much! > > > > Best regards, > > > > Fangbo Wang > > > > -- > > Fangbo Wang, PhD student > > Stochastic Geomechanics Research Group > > Department of Civil, Structural and Environmental Engineering > > University at Buffalo > > Email: fangbowa at buffalo.edu > > -- Fangbo Wang, PhD student Stochastic Geomechanics Research Group Department of Civil, Structural and Environmental Engineering University at Buffalo Email: *fangbowa at buffalo.edu * -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 14 20:52:37 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 14 Mar 2017 20:52:37 -0500 Subject: [petsc-users] How to determine a reasonable relative tolerance to iteratively solve a linear system of equations? In-Reply-To: References: <746BD51B-0986-4F29-8601-BBAA18637074@mcs.anl.gov> Message-ID: <70DF2E66-E91D-425D-8AEF-49F0F9D56421@mcs.anl.gov> Ok, you should investigate the -pc_type gamg preconditioner and using MatSetNearNullSpace() after creating MatNullSpaceCreateRigidBody(). If everything is working well this should lead to a scalable efficient solver that requires less than, say, 30 iterations. If this doesn't help a great deal send us more email. 
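For instance, here is a minimal sketch of that setup in C (only an outline, not your exact code: it assumes it sits inside your existing program after PetscInitialize(), after the stiffness matrix A, right hand side b and solution x have been created, and that the Vec coords holds the nodal coordinates interlaced with block size 3 in the same ordering as your displacement unknowns; adapt the names to your code):

    Mat            A;        /* assembled stiffness matrix */
    Vec            b, x;     /* right hand side, solution */
    Vec            coords;   /* nodal coordinates, block size 3 */
    KSP            ksp;
    MatNullSpace   nearnull;
    PetscErrorCode ierr;

    /* Build the 6 rigid body modes (3 translations + 3 rotations) from the
       coordinates and attach them to the matrix so GAMG can use them when
       constructing its coarse spaces. */
    ierr = MatNullSpaceCreateRigidBody(coords, &nearnull);CHKERRQ(ierr);
    ierr = MatSetNearNullSpace(A, nearnull);CHKERRQ(ierr);
    ierr = MatNullSpaceDestroy(&nearnull);CHKERRQ(ierr);

    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
    ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);   /* SPD system, so CG */
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);   /* picks up -pc_type gamg etc. */
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

Then run with -pc_type gamg -ksp_converged_reason -ksp_monitor_true_residual and compare the iteration counts as you refine the mesh; with the near null space attached they should stay roughly constant instead of growing.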
Barry > On Mar 14, 2017, at 8:46 PM, Fangbo Wang wrote: > > it is a solid mechanics problem. > > On Tue, Mar 14, 2017 at 6:42 PM, Barry Smith wrote: > > > On Mar 14, 2017, at 5:32 PM, Fangbo Wang wrote: > > > > Hi, > > > > I know this is not a problem specific to PETSc, but I have this doubt for a long time and want to ask the experts here. > > > > Suppose I have a very large linear system of equations with 1.5 million unkowns. It is common to use relative tolerance as a stopping criteria. > > > > For a small linear system, I usually use 1e-6, or 1e-8, or 1e-10, etc. But for a very large linear system, do I need to use a relative tolerance much smaller than the previous I use? (Theoretically I think the relative tolerance has nothing related to system size). > > > > However, something very weird happens. I used 1e-7 as my relative tolerance for my linear system with 1.5 million unknows using conjugate gradient method with jacobi preconditioner, the solver can not converge to 1e-7 with 10,000 iterations. I can use a larger tolerance but the solution is not good. > > This is not particularly weird. Jacobi preconditioning can perform very poorly depending on the structure of your matrix. > > So first you need a better preconditioner. Where does the matrix come from? This helps determine what preconditioner to use. For example, is it a pressure solve, a structural mechanics problem, a Stokes-like problem, a fully implicit cdf problem. > > Barry > > > > > Any one have some advices? Thank you very much! > > > > Best regards, > > > > Fangbo Wang > > > > -- > > Fangbo Wang, PhD student > > Stochastic Geomechanics Research Group > > Department of Civil, Structural and Environmental Engineering > > University at Buffalo > > Email: fangbowa at buffalo.edu > > > > > -- > Fangbo Wang, PhD student > Stochastic Geomechanics Research Group > Department of Civil, Structural and Environmental Engineering > University at Buffalo > Email: fangbowa at buffalo.edu From knepley at gmail.com Wed Mar 15 01:09:59 2017 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 15 Mar 2017 01:09:59 -0500 Subject: [petsc-users] How to determine a reasonable relative tolerance to iteratively solve a linear system of equations? In-Reply-To: <746BD51B-0986-4F29-8601-BBAA18637074@mcs.anl.gov> References: <746BD51B-0986-4F29-8601-BBAA18637074@mcs.anl.gov> Message-ID: On Tue, Mar 14, 2017 at 5:42 PM, Barry Smith wrote: > > > On Mar 14, 2017, at 5:32 PM, Fangbo Wang wrote: > > > > Hi, > > > > I know this is not a problem specific to PETSc, but I have this doubt > for a long time and want to ask the experts here. > > > > Suppose I have a very large linear system of equations with 1.5 million > unkowns. It is common to use relative tolerance as a stopping criteria. > > > > For a small linear system, I usually use 1e-6, or 1e-8, or 1e-10, etc. > But for a very large linear system, do I need to use a relative tolerance > much smaller than the previous I use? (Theoretically I think the relative > tolerance has nothing related to system size). > > > > However, something very weird happens. I used 1e-7 as my relative > tolerance for my linear system with 1.5 million unknows using conjugate > gradient method with jacobi preconditioner, the solver can not converge to > 1e-7 with 10,000 iterations. I can use a larger tolerance but the solution > is not good. > > This is not particularly weird. Jacobi preconditioning can perform very > poorly depending on the structure of your matrix. > > So first you need a better preconditioner. 
Where does the matrix come > from? This helps determine what preconditioner to use. For example, is it a > pressure solve, a structural mechanics problem, a Stokes-like problem, a > fully implicit cdf problem. Simple estimates can be useful for thinking about solvers: 1) Lets say the conditioning of your problem looks like the Laplacian, since I know what that is and since I believe elasticity does look like this. The condition number grows as h^{-2} kappa = C h^{-2} 2) Using CG to solve a system to a given relative tolerance takes about sqrt(kappa) iterations 3) I do not think Jacobi contributes to asymptotics, just the constant 4) Overall, that means it will take C h^{-1} iterations to solve your system. As you refine, the number of iterations goes up until you jsut cannot solve it anymore. This is exactly what is meant by a non-scalable solver, and as Barry point out, for this problem MG provides a scalable solver. A lot of times, MG is hard to tune correctly so we use something to handle a few problematic modes in the problem for which MG as not tuned correctly, and that would be the Krylov method. Thanks, Matt > > Barry > > > > > Any one have some advices? Thank you very much! > > > > Best regards, > > > > Fangbo Wang > > > > -- > > Fangbo Wang, PhD student > > Stochastic Geomechanics Research Group > > Department of Civil, Structural and Environmental Engineering > > University at Buffalo > > Email: fangbowa at buffalo.edu > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Mar 15 08:19:35 2017 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 15 Mar 2017 09:19:35 -0400 Subject: [petsc-users] How to determine a reasonable relative tolerance to iteratively solve a linear system of equations? In-Reply-To: References: Message-ID: > For a small linear system, I usually use 1e-6, or 1e-8, or 1e-10, etc. But > for a very large linear system, do I need to use a relative tolerance much > smaller than the previous I use? (Theoretically I think the relative > tolerance has nothing related to system size). > I agree with the other responses and let me add that 1) you kinda do want to reduce your tolerance as you refine, but nobody does, and 2) a properly working MG reduces the entire spectrum of the error pretty uniformly, whereas (damped) Jacobi reduces the high frequency error fast and is slow on the low frequency error. So you could find that, with a constant rtol, your solution goes bad as you refine, whereas MG will stay good. From egorow.walentin at gmail.com Thu Mar 16 00:37:41 2017 From: egorow.walentin at gmail.com (=?UTF-8?B?0JLQsNC70LXQvdGC0LjQvSDQldCz0L7RgNC+0LI=?=) Date: Thu, 16 Mar 2017 08:37:41 +0300 Subject: [petsc-users] Question about PETSC Message-ID: Hello! My name is Valentin Egorov. I am from Russia. And I have a question for you about PETSC. I would like to make a programm on Fortran with PETSC, but I can't. I have a matrix 400*400. I have also vector B with 400 elements. I need to solve linear equations. Could you help me to do it. I can't understand how use PETSC in Fortran? In fortran programm? And where to put matrix elements? Sincerely, Valentin Egorov! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Thu Mar 16 02:16:04 2017 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 16 Mar 2017 02:16:04 -0500 Subject: [petsc-users] Question about PETSC In-Reply-To: References: Message-ID: Hi Valentin, Have you seen this example: https://bitbucket.org/petsc/petsc/src/1830d94e4628b31f970259df1d58bc250c9af32a/src/ksp/ksp/examples/tutorials/ex2f.F?at=master&fileviewer=file-view-default Would that be enough to get started? Thanks, Matt On Thu, Mar 16, 2017 at 12:37 AM, ???????? ?????? wrote: > Hello! > My name is Valentin Egorov. I am from Russia. And I have a question for > you about PETSC. I would like to make a programm on Fortran with PETSC, but > I can't. I have a matrix 400*400. I have also vector B with 400 elements. I > need to solve linear equations. Could you help me to do it. I can't > understand how use PETSC in Fortran? In fortran programm? And where to put > matrix elements? > > Sincerely, Valentin Egorov! > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Thu Mar 16 02:37:01 2017 From: dave.mayhem23 at gmail.com (Dave May) Date: Thu, 16 Mar 2017 07:37:01 +0000 Subject: [petsc-users] Question about PETSC In-Reply-To: References: Message-ID: On Thu, 16 Mar 2017 at 07:16, Matthew Knepley wrote: > Hi Valentin, > > Have you seen this example: > https://bitbucket.org/petsc/petsc/src/1830d94e4628b31f970259df1d58bc250c9af32a/src/ksp/ksp/examples/tutorials/ex2f.F?at=master&fileviewer=file-view-default > > Would that be enough to get started? > Matt is correct. The best way to get into PETSc is by studying the example codes provided in the source tree. As a precursor to studying fortran examples, you should take a look at this page and decided which PETSc-fortran approach you wish to use: http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/UsingFortran.html Thanks, Dave > Thanks, > > Matt > > On Thu, Mar 16, 2017 at 12:37 AM, ???????? ?????? < > egorow.walentin at gmail.com> wrote: > > Hello! > My name is Valentin Egorov. I am from Russia. And I have a question for > you about PETSC. I would like to make a programm on Fortran with PETSC, but > I can't. I have a matrix 400*400. I have also vector B with 400 elements. I > need to solve linear equations. Could you help me to do it. I can't > understand how use PETSC in Fortran? In fortran programm? And where to put > matrix elements? > > Sincerely, Valentin Egorov! > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From egorow.walentin at gmail.com Thu Mar 16 04:56:51 2017 From: egorow.walentin at gmail.com (=?UTF-8?B?0JLQsNC70LXQvdGC0LjQvSDQldCz0L7RgNC+0LI=?=) Date: Thu, 16 Mar 2017 12:56:51 +0300 Subject: [petsc-users] Question about PETSC In-Reply-To: References: Message-ID: Thank you for answering, I will try. 
Sincerely, Valentin 2017-03-16 10:37 GMT+03:00 Dave May : > > > On Thu, 16 Mar 2017 at 07:16, Matthew Knepley wrote: > >> Hi Valentin, >> >> Have you seen this example: https://bitbucket.org/petsc/petsc/src/ >> 1830d94e4628b31f970259df1d58bc250c9af32a/src/ksp/ksp/ >> examples/tutorials/ex2f.F?at=master&fileviewer=file-view-default >> >> Would that be enough to get started? >> > > Matt is correct. The best way to get into PETSc is by studying the example > codes provided in the source tree. > > As a precursor to studying fortran examples, you should take a look at > this page and decided which PETSc-fortran approach you wish to use: > > http://www.mcs.anl.gov/petsc/petsc-current/docs/ > manualpages/Sys/UsingFortran.html > > Thanks, > Dave > > >> Thanks, >> >> Matt >> >> On Thu, Mar 16, 2017 at 12:37 AM, ???????? ?????? < >> egorow.walentin at gmail.com> wrote: >> >> Hello! >> My name is Valentin Egorov. I am from Russia. And I have a question for >> you about PETSC. I would like to make a programm on Fortran with PETSC, but >> I can't. I have a matrix 400*400. I have also vector B with 400 elements. I >> need to solve linear equations. Could you help me to do it. I can't >> understand how use PETSC in Fortran? In fortran programm? And where to put >> matrix elements? >> >> Sincerely, Valentin Egorov! >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aero.aju at gmail.com Thu Mar 16 06:27:45 2017 From: aero.aju at gmail.com (Ajit Desai) Date: Thu, 16 Mar 2017 07:27:45 -0400 Subject: [petsc-users] Understanding PETSc -log_summary Message-ID: Hello Everyone, A couple of questions on the *-log_summary* provided by PETSc. 1. *Avg-Flops & Avg-Flops/sec* are averaged among the participating cores or averaged over the simulation time or both? 2. *Max/Min Flops & Flops/sec* is an indication of load balancing? In simulation-2 these ratios are high compared to simulation-1. Does that mean simulation-2 is not well balanced? Please follow the outputs from two different simulations (Note: the problem size and the number of processors used are different). *Simulation-1* Max Max/Min Avg Total Time (sec): 4.208e+02 1.00005 4.208e+02 Objects: 7.100e+01 1.00000 7.100e+01 Flops: 3.326e+11 1.31175 3.017e+11 4.826e+13 Flops/sec: 7.904e+08 1.31175 7.169e+08 1.147e+11 MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00 *SImulation-2:* Max Max/Min Avg Total Time (sec): 8.434e+02 1.00000 8.434e+02 Objects: 7.300e+01 1.02817 7.102e+01 Flops: 6.555e+11 1.85115 5.798e+11 3.711e+14 Flops/sec: 7.772e+08 1.85115 6.874e+08 4.400e+11 MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00 We're trying to understand the performance of our solver using these outputs. Any comments in relation to that will be helpful. Thanks & Regards, *Ajit Desai* PhD Scholar, Carleton University, Canada -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Thu Mar 16 07:00:25 2017 From: jed at jedbrown.org (Jed Brown) Date: Thu, 16 Mar 2017 06:00:25 -0600 Subject: [petsc-users] Understanding PETSc -log_summary In-Reply-To: References: Message-ID: <87pohho2hy.fsf@jedbrown.org> Ajit Desai writes: > Hello Everyone, > A couple of questions on the *-log_summary* provided by PETSc. -log_view is the preferred name. > 1. 
*Avg-Flops & Avg-Flops/sec* are averaged among the participating cores > or averaged over the simulation time or both? Flops on each process is just the count (as logged by PetscLogFlops, which PETSc numerical functions call and you can too) over the time between PetscInitialize until the log is printed (usually PetscFinalize). The time on each process is the wall clock time from PetscInitialize until the log is printed. The Average, Max, Min, and total are over processes. > 2. *Max/Min Flops & Flops/sec* is an indication of load balancing? > In simulation-2 these ratios are high compared to simulation-1. Does that > mean simulation-2 is not well balanced? Yes, not balanced in terms of flops (correlated with time, but not a measure of time). Look at the events to help see where. > Please follow the outputs from two different simulations > (Note: the problem size and the number of processors used are different). > > *Simulation-1* > Max Max/Min Avg > Total > Time (sec): 4.208e+02 1.00005 4.208e+02 > Objects: 7.100e+01 1.00000 7.100e+01 > Flops: 3.326e+11 1.31175 3.017e+11 4.826e+13 > Flops/sec: 7.904e+08 1.31175 7.169e+08 1.147e+11 > MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00 > > > *SImulation-2:* > Max Max/Min Avg > Total > Time (sec): 8.434e+02 1.00000 8.434e+02 > Objects: 7.300e+01 1.02817 7.102e+01 > Flops: 6.555e+11 1.85115 5.798e+11 3.711e+14 > Flops/sec: 7.772e+08 1.85115 6.874e+08 4.400e+11 > MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00 > > We're trying to understand the performance of our solver using these > outputs. > Any comments in relation to that will be helpful. > > Thanks & Regards, > *Ajit Desai* > PhD Scholar, Carleton University, Canada -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From aero.aju at gmail.com Thu Mar 16 10:11:32 2017 From: aero.aju at gmail.com (Ajit Desai) Date: Thu, 16 Mar 2017 11:11:32 -0400 Subject: [petsc-users] Understanding PETSc -log_summary In-Reply-To: <87pohho2hy.fsf@jedbrown.org> References: <87pohho2hy.fsf@jedbrown.org> Message-ID: Thanks, Jed. That is helpful. *Ajit Desai* PhD Scholar, Carleton University, Canada On Thu, Mar 16, 2017 at 8:00 AM, Jed Brown wrote: > Ajit Desai writes: > > > Hello Everyone, > > A couple of questions on the *-log_summary* provided by PETSc. > > -log_view is the preferred name. > > > 1. *Avg-Flops & Avg-Flops/sec* are averaged among the participating cores > > or averaged over the simulation time or both? > > Flops on each process is just the count (as logged by PetscLogFlops, > which PETSc numerical functions call and you can too) over the time > between PetscInitialize until the log is printed (usually > PetscFinalize). The time on each process is the wall clock time from > PetscInitialize until the log is printed. The Average, Max, Min, and > total are over processes. > > > 2. *Max/Min Flops & Flops/sec* is an indication of load balancing? > > In simulation-2 these ratios are high compared to simulation-1. Does that > > mean simulation-2 is not well balanced? > > Yes, not balanced in terms of flops (correlated with time, but not a > measure of time). Look at the events to help see where. > > > Please follow the outputs from two different simulations > > (Note: the problem size and the number of processors used are different). 
> > > > *Simulation-1* > > Max Max/Min Avg > > Total > > Time (sec): 4.208e+02 1.00005 4.208e+02 > > Objects: 7.100e+01 1.00000 7.100e+01 > > Flops: 3.326e+11 1.31175 3.017e+11 4.826e+13 > > Flops/sec: 7.904e+08 1.31175 7.169e+08 1.147e+11 > > MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00 > > > > > > *SImulation-2:* > > Max Max/Min Avg > > Total > > Time (sec): 8.434e+02 1.00000 8.434e+02 > > Objects: 7.300e+01 1.02817 7.102e+01 > > Flops: 6.555e+11 1.85115 5.798e+11 3.711e+14 > > Flops/sec: 7.772e+08 1.85115 6.874e+08 4.400e+11 > > MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00 > > > > We're trying to understand the performance of our solver using these > > outputs. > > Any comments in relation to that will be helpful. > > > > Thanks & Regards, > > *Ajit Desai* > > PhD Scholar, Carleton University, Canada > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Mar 16 16:45:31 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 16 Mar 2017 16:45:31 -0500 Subject: [petsc-users] Question about PETSC In-Reply-To: References: Message-ID: <36D7062D-172E-49DD-B2A6-B0D04E70C9B7@mcs.anl.gov> > On Mar 16, 2017, at 4:56 AM, ???????? ?????? wrote: > > Thank you for answering, I will try. > Sincerely, > Valentin > > Matt is correct. The best way to get into PETSc is by studying the example codes provided in the source tree. > > As a precursor to studying fortran examples, you should take a look at this page and decided which PETSc-fortran approach you wish to use: > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/UsingFortran.html Since you are just starting with PETSc I highly recommend you DO NOT read the above since we have vastly simplified the usage of PETSc from Fortran. You should read http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Sys/UsingFortran.html and download PETSc via git clone -b master https://bitbucket.org/petsc/petsc petsc and look at the documentation at http://www.mcs.anl.gov/petsc/petsc-master/docs/index.html > > Thanks, > Dave > > > Thanks, > > Matt > > On Thu, Mar 16, 2017 at 12:37 AM, ???????? ?????? wrote: > Hello! > My name is Valentin Egorov. I am from Russia. And I have a question for you about PETSC. I would like to make a programm on Fortran with PETSC, but I can't. I have a matrix 400*400. I have also vector B with 400 elements. I need to solve linear equations. Could you help me to do it. I can't understand how use PETSC in Fortran? In fortran programm? And where to put matrix elements? > > Sincerely, Valentin Egorov! > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > From bsmith at mcs.anl.gov Thu Mar 16 19:49:50 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 16 Mar 2017 19:49:50 -0500 Subject: [petsc-users] Understanding PETSc -log_summary In-Reply-To: References: <87pohho2hy.fsf@jedbrown.org> Message-ID: Also this information at the top is often not particularly useful because it includes start up time and any IO etc. We recommend looking that the events below. You can start with the line for TSSolve, or SNESSolve, or KSPSolve depending on if you are using TS, SNES, or KSP. There are similar "load balance" information in each row. Barry > On Mar 16, 2017, at 10:11 AM, Ajit Desai wrote: > > Thanks, Jed. > That is helpful. 
> > Ajit Desai > PhD Scholar, Carleton University, Canada > > On Thu, Mar 16, 2017 at 8:00 AM, Jed Brown wrote: > Ajit Desai writes: > > > Hello Everyone, > > A couple of questions on the *-log_summary* provided by PETSc. > > -log_view is the preferred name. > > > 1. *Avg-Flops & Avg-Flops/sec* are averaged among the participating cores > > or averaged over the simulation time or both? > > Flops on each process is just the count (as logged by PetscLogFlops, > which PETSc numerical functions call and you can too) over the time > between PetscInitialize until the log is printed (usually > PetscFinalize). The time on each process is the wall clock time from > PetscInitialize until the log is printed. The Average, Max, Min, and > total are over processes. > > > 2. *Max/Min Flops & Flops/sec* is an indication of load balancing? > > In simulation-2 these ratios are high compared to simulation-1. Does that > > mean simulation-2 is not well balanced? > > Yes, not balanced in terms of flops (correlated with time, but not a > measure of time). Look at the events to help see where. > > > Please follow the outputs from two different simulations > > (Note: the problem size and the number of processors used are different). > > > > *Simulation-1* > > Max Max/Min Avg > > Total > > Time (sec): 4.208e+02 1.00005 4.208e+02 > > Objects: 7.100e+01 1.00000 7.100e+01 > > Flops: 3.326e+11 1.31175 3.017e+11 4.826e+13 > > Flops/sec: 7.904e+08 1.31175 7.169e+08 1.147e+11 > > MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00 > > > > > > *SImulation-2:* > > Max Max/Min Avg > > Total > > Time (sec): 8.434e+02 1.00000 8.434e+02 > > Objects: 7.300e+01 1.02817 7.102e+01 > > Flops: 6.555e+11 1.85115 5.798e+11 3.711e+14 > > Flops/sec: 7.772e+08 1.85115 6.874e+08 4.400e+11 > > MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00 > > > > We're trying to understand the performance of our solver using these > > outputs. > > Any comments in relation to that will be helpful. > > > > Thanks & Regards, > > *Ajit Desai* > > PhD Scholar, Carleton University, Canada > From cpraveen at gmail.com Fri Mar 17 00:02:23 2017 From: cpraveen at gmail.com (Praveen C) Date: Fri, 17 Mar 2017 10:32:23 +0530 Subject: [petsc-users] Fortran application context, passing a module Message-ID: Dear all I want to pass my own module as an application context in fortran. So I am trying something like this type(mgrid),target :: g PetscFortranAddr :: ctx(6) ctx(1) => g But this gives an error *Error:* Non-POINTER in pointer association context (pointer assignment) at (1) Could you tell me how I can make this work ? Thanks praveen -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Mar 17 00:15:50 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 17 Mar 2017 00:15:50 -0500 Subject: [petsc-users] Fortran application context, passing a module In-Reply-To: References: Message-ID: It looks to me that you are trying to pass a Fortran 90 derived type as one entry in an array of addresses as an application context (perhaps from some PETSc example). This will not work as written because Fortran doesn't know that PetscFortranAddr is an array of addresses. The conventional way to do this is to pass a derived type directly as the context. 
So you would just have type(mgrid) mygrid then when you call SNESSetComputeFunction() or whatever routine you are calling that requires a context SNESSetComputeFunction(snes,v,computefunction,mygrid,ierr) and inside your computefunction the ctx argument is declared as type(grid) mygrid See, for example, src/snes/examples/tutorials/ex5f90.F90 Barry > type(mgrid) > On Mar 17, 2017, at 12:02 AM, Praveen C wrote: > > Dear all > > I want to pass my own module as an application context in fortran. So I am trying something like this > > type(mgrid),target :: g > PetscFortranAddr :: ctx(6) > ctx(1) => g > > But this gives an error > Error: Non-POINTER in pointer association context (pointer assignment) at (1) > > Could you tell me how I can make this work ? > > Thanks > praveen From Sander.Arens at ugent.be Fri Mar 17 06:02:57 2017 From: Sander.Arens at ugent.be (Sander Arens) Date: Fri, 17 Mar 2017 12:02:57 +0100 Subject: [petsc-users] Problems imposing boundary conditions In-Reply-To: References: <5F56CB15-5C1E-4C8F-83A5-F0BA7B9F4D13@gmail.com> <69E95D27-72CD-4BFA-AC62-907727998DCB@gmail.com> <3383F285-34C3-48A2-BF7C-56ED80FDF236@gmail.com> Message-ID: I think there might still be another problem, this time it has to do with DMPlexInsertBoundaryValues. Say we have a simple mesh like this: 4 ---- 5 | \ 1 | | \ | | 0 \ | 2 ---- 3 and we constrain the left face to be 0 and the right face to be 1. The way it is now, DMPlexInsertBoundaryValues would first insert a 0 at point 2, 3 and 4 and then insert a 1 at point 3, 4 and 5. This is obviously not what we want, because we don't want these boundary conditions to overwrite each other. I think the easiest fix for this would be to make the following changes here: remove the calls to DMPlexLabelAdd/RemoveCells and set the maximum projection height to dim. But maybe someone (Matt) has a better solution? Thanks, Sander On 14 March 2017 at 16:17, Sander Arens wrote: > I updated my pull request with a fix. If Matt approves and merges to > master, the problem should be solved. > > Thanks, > Sander > > On 9 March 2017 at 15:00, Matthew Knepley wrote: > >> On Thu, Mar 9, 2017 at 7:45 AM, Maximilian Hartig < >> imilian.hartig at gmail.com> wrote: >> >>> Ok thank you, so can I just do something along the lines of: >>> PetscSectionGetFieldConstraintDof(?) >>> to find the overconstrained vertices and then correct them manually with >>> PetscSectionSetFieldConstraintDof() >>> PetscSectionSetFieldConstraintIndices() >>> ? >>> >> >> You can do exactly that. And that is the same thing I would do, although >> I would probably go to >> the place where they are being input >> >> https://bitbucket.org/petsc/petsc/src/79f3641cdf8f54d0fc7a >> 5ae1e04e0887d8c00e9b/src/dm/impls/plex/plex.c?at=master& >> fileviewer=file-view-default#plex.c-3164 >> >> and try to filter them out. However, its a little tricky since the space >> has already been allocated and will >> have to be adjusted. It will likely take a more thorough going rewrite to >> do it. >> >> Or will this mess up the Jacobian and the iFunction? >>> >> >> Section creation is independent of these. This allows a user to do >> whatever they want here instead of >> using my default mechanisms. 
>> >> Thanks, >> >> Matt >> >> >>> Thanks, >>> Max >>> >>> >>> On 7 Mar 2017, at 18:21, Matthew Knepley wrote: >>> >>> On Tue, Mar 7, 2017 at 11:11 AM, Maximilian Hartig < >>> imilian.hartig at gmail.com> wrote: >>> >>>> >>>> On 7 Mar 2017, at 16:29, Matthew Knepley wrote: >>>> >>>> On Tue, Mar 7, 2017 at 3:28 AM, Maximilian Hartig < >>>> imilian.hartig at gmail.com> wrote: >>>> >>>>> It seems you are correct. In theory, the problem should not be over >>>>> constrained. It is 1/4 of a simple hollow cylinder geometry with rotational >>>>> symmetry around the z-axis. I restrict movement completely on the upper and >>>>> lower (z) end as well as movement in x- and y- direction respectively on >>>>> the symmetry planes. >>>>> I am not completely sure what I am looking at with the output of >>>>> -dm_petscsection_view. But these lines struck me as odd: >>>>> >>>>> >>>>> (5167) dim 3 offset 0 constrained 0 1 1 2 >>>>> (5168) dim 3 offset 6 constrained 0 1 1 2 >>>>> . >>>>> . >>>>> . >>>>> (5262) dim 3 offset 0 constrained 0 0 1 2 >>>>> (5263) dim 3 offset 6 constrained 0 0 1 2 >>>>> >>>>> >>>>> It seems that vertices that are part of the closures of both Face Sets >>>>> get restricted twice in their respective degree of freedom. >>>>> >>>> >>>> Yes, that is exactly what happens. >>>> >>>> >>>>> This does however also happen when restricting movement in x- >>>>> direction only for upper and lower faces. In that case without the solver >>>>> producing an error: >>>>> (20770) dim 3 offset 24 constrained 0 0 >>>>> (20771) dim 3 offset 30 constrained 0 0 >>>>> (20772) dim 3 offset 36 constrained 0 0 >>>>> (20773) dim 3 offset 42 constrained 0 0 >>>>> >>>> >>>> The fact that this does not SEGV is just luck. >>>> >>>> Now, I did not put in any guard against this because I was not sure >>>> what should happen. We could error if a local index is repeated, or >>>> we could ignore it. This seems unsafe if you try to constrain it with >>>> two different values, but there is no way for me to tell if the values are >>>> compatible. Thus I just believed whatever the user told me. >>>> >>>> >>>> What is the intention here? It would be straightforward to ignore >>>> duplicates I guess. >>>> >>>> Yes, ignoring duplicates would solve the problem then. I can think of >>>> no example where imposing two different Dirichlet BC on the same DOF of the >>>> same vertex would make sense (I might be wrong of course). That means the >>>> only issue is to determine wether the first or the second BC is the correct >>>> one to be imposed. >>>> I don?t know how I could filter out the vertices in question from the >>>> Label. I use GMSH to construct my meshes and could create a label for the >>>> edges without too much effort. But I cannot see an easy way to exclude them >>>> when imposing the BC. >>>> I tried to figure out where PETSC actually imposes the BC but got lost >>>> a bit in the source. Could you kindly point me towards the location? >>>> >>> >>> It is in stages. >>> >>> 1) You make a structure with AddBoundary() that has a Label and function >>> for boundary values >>> >>> 2) The PetscSection gets created with stores which points have >>> constraints and which components they affect >>> >>> 3) When global Vecs are made, these constraints are left out >>> >>> 4) When local Vecs are made, they are left in >>> >>> 5) DMPlexInsertBoundaryValues() is called on local Vecs, and puts in the >>> values from your functions. 
This usually happens >>> when you copy the solutions values from the global Vec to a local >>> Vec to being assembly. >>> >>> Thanks, >>> >>> Matt >>> >>> >>>> Thanks, >>>> Max >>>> >>>> >>>> >>>> Thanks, >>>> >>>> Matt >>>> >>>> >>>>> Thanks, >>>>> Max >>>>> >>>>> On 6 Mar 2017, at 14:43, Matthew Knepley wrote: >>>>> >>>>> On Mon, Mar 6, 2017 at 8:38 AM, Maximilian Hartig < >>>>> imilian.hartig at gmail.com> wrote: >>>>> >>>>>> Of course, please find the source as well as the mesh attached below. >>>>>> I run with: >>>>>> >>>>>> -def_petscspace_order 2 -vel_petscspace_order 2 -snes_monitor >>>>>> -snes_converged_reason -ksp_converged_reason -ksp_monitor _true_residual >>>>>> -ksp_type fgmres -pc_type sor >>>>>> >>>>> >>>>> This sounds like over-constraining a point to me. I will try and run >>>>> it soon, but I have a full schedule this week. The easiest >>>>> way to see if this is happening should be to print out the Section >>>>> that gets made >>>>> >>>>> -dm_petscsection_view >>>>> >>>>> Thanks, >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Thanks, >>>>>> Max >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> On 4 Mar 2017, at 11:34, Sander Arens wrote: >>>>>> >>>>>> Hmm, strange you also get the error in serial. Can you maybe send a >>>>>> minimal working which demonstrates the error? >>>>>> >>>>>> Thanks, >>>>>> Sander >>>>>> >>>>>> On 3 March 2017 at 23:07, Maximilian Hartig >>>>> > wrote: >>>>>> >>>>>>> Yes Sander, your assessment is correct. I use DMPlex and specify the >>>>>>> BC using DMLabel. I do however get this error also when running in serial. >>>>>>> >>>>>>> Thanks, >>>>>>> Max >>>>>>> >>>>>>> On 3 Mar 2017, at 22:14, Matthew Knepley wrote: >>>>>>> >>>>>>> On Fri, Mar 3, 2017 at 12:56 PM, Sander Arens >>>>>> > wrote: >>>>>>> >>>>>>>> Max, >>>>>>>> >>>>>>>> I'm assuming you use DMPlex for your mesh? If so, did you only >>>>>>>> specify the faces in the DMLabel (and not vertices or edges). Do you get >>>>>>>> this error only in parallel? >>>>>>>> >>>>>>>> If so, I can confirm this bug. I submitted a pull request for this >>>>>>>> yesterday. >>>>>>>> >>>>>>> >>>>>>> Yep, I saw Sander's pull request. I will get in merged in tomorrow >>>>>>> when I get home to Houston. >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Matt >>>>>>> >>>>>>> >>>>>>>> On 3 March 2017 at 18:43, Lukas van de Wiel < >>>>>>>> lukas.drinkt.thee at gmail.com> wrote: >>>>>>>> >>>>>>>>> You have apparently preallocated the non-zeroes of you matrix, and >>>>>>>>> the room was insufficient to accommodate all your equations. >>>>>>>>> >>>>>>>>> What happened after you tried: >>>>>>>>> >>>>>>>>> MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) >>>>>>>>> >>>>>>>>> >>>>>>>>> Cheers >>>>>>>>> Lukas >>>>>>>>> >>>>>>>>> On Fri, Mar 3, 2017 at 6:37 PM, Maximilian Hartig < >>>>>>>>> imilian.hartig at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hello, >>>>>>>>>> >>>>>>>>>> I am working on a transient structural FEM code with PETSc. I >>>>>>>>>> managed to create a slow but functioning program with the use of petscFE >>>>>>>>>> and a TS solver. The code runs fine until I try to restrict movement in all >>>>>>>>>> three spatial directions for one face. I then get the error which is >>>>>>>>>> attached below. >>>>>>>>>> So apparently DMPlexMatSetClosure tries to write/read beyond what >>>>>>>>>> was priorly allocated. I do however not call MatSeqAIJSetPreallocation >>>>>>>>>> myself in the code. So I?m unsure where to start looking for the bug. In my >>>>>>>>>> understanding, PETSc should know from the DM how much space to allocate. 
>>>>>>>>>> Could you kindly give me a hint? >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> >>>>>>>>>> Max >>>>>>>>>> >>>>>>>>>> 0 SNES Function norm 2.508668036663e-06 >>>>>>>>>> [0]PETSC ERROR: --------------------- Error Message >>>>>>>>>> -------------------------------------------------------------- >>>>>>>>>> [0]PETSC ERROR: Argument out of range >>>>>>>>>> [0]PETSC ERROR: New nonzero at (41754,5) caused a malloc >>>>>>>>>> Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) >>>>>>>>>> to turn off this check >>>>>>>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/pet >>>>>>>>>> sc/documentation/faq.html for trouble shooting. >>>>>>>>>> [0]PETSC ERROR: Petsc Development GIT revision: >>>>>>>>>> v3.7.5-3223-g99077fc GIT Date: 2017-02-28 13:41:43 -0600 >>>>>>>>>> [0]PETSC ERROR: ./S3 on a arch-linux-gnu-intel named XXXXXXX by >>>>>>>>>> hartig Fri Mar 3 17:55:57 2017 >>>>>>>>>> [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-gnu-intel >>>>>>>>>> --with-cc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicc >>>>>>>>>> --with-cxx=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiicpc >>>>>>>>>> --with-fc=/opt/intel/compilers_and_libraries/linux/mpi/intel64/bin/mpiifort >>>>>>>>>> --download-ml >>>>>>>>>> [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 455 in >>>>>>>>>> /home/hartig/petsc/src/mat/impls/aij/seq/aij.c >>>>>>>>>> [0]PETSC ERROR: #2 MatSetValues() line 1270 in >>>>>>>>>> /home/hartig/petsc/src/mat/interface/matrix.c >>>>>>>>>> [0]PETSC ERROR: [0]ERROR in DMPlexMatSetClosure >>>>>>>>>> [0]mat for sieve point 60 >>>>>>>>>> [0]mat row indices[0] = 41754 >>>>>>>>>> [0]mat row indices[1] = 41755 >>>>>>>>>> [0]mat row indices[2] = 41756 >>>>>>>>>> [0]mat row indices[3] = 41760 >>>>>>>>>> [0]mat row indices[4] = 41761 >>>>>>>>>> [0]mat row indices[5] = 41762 >>>>>>>>>> [0]mat row indices[6] = 41766 >>>>>>>>>> [0]mat row indices[7] = -41768 >>>>>>>>>> [0]mat row indices[8] = 41767 >>>>>>>>>> [0]mat row indices[9] = 41771 >>>>>>>>>> [0]mat row indices[10] = -41773 >>>>>>>>>> [0]mat row indices[11] = 41772 >>>>>>>>>> [0]mat row indices[12] = 41776 >>>>>>>>>> [0]mat row indices[13] = 41777 >>>>>>>>>> [0]mat row indices[14] = 41778 >>>>>>>>>> [0]mat row indices[15] = 41782 >>>>>>>>>> [0]mat row indices[16] = -41784 >>>>>>>>>> [0]mat row indices[17] = 41783 >>>>>>>>>> [0]mat row indices[18] = 261 >>>>>>>>>> [0]mat row indices[19] = -263 >>>>>>>>>> [0]mat row indices[20] = 262 >>>>>>>>>> [0]mat row indices[21] = 24318 >>>>>>>>>> [0]mat row indices[22] = 24319 >>>>>>>>>> [0]mat row indices[23] = 24320 >>>>>>>>>> [0]mat row indices[24] = -7 >>>>>>>>>> [0]mat row indices[25] = -8 >>>>>>>>>> [0]mat row indices[26] = 6 >>>>>>>>>> [0]mat row indices[27] = 1630 >>>>>>>>>> [0]mat row indices[28] = -1632 >>>>>>>>>> [0]mat row indices[29] = 1631 >>>>>>>>>> [0]mat row indices[30] = 41757 >>>>>>>>>> [0]mat row indices[31] = 41758 >>>>>>>>>> [0]mat row indices[32] = 41759 >>>>>>>>>> [0]mat row indices[33] = 41763 >>>>>>>>>> [0]mat row indices[34] = 41764 >>>>>>>>>> [0]mat row indices[35] = 41765 >>>>>>>>>> [0]mat row indices[36] = 41768 >>>>>>>>>> [0]mat row indices[37] = 41769 >>>>>>>>>> [0]mat row indices[38] = 41770 >>>>>>>>>> [0]mat row indices[39] = 41773 >>>>>>>>>> [0]mat row indices[40] = 41774 >>>>>>>>>> [0]mat row indices[41] = 41775 >>>>>>>>>> [0]mat row indices[42] = 41779 >>>>>>>>>> [0]mat row indices[43] = 41780 >>>>>>>>>> [0]mat row indices[44] = 41781 >>>>>>>>>> [0]mat row indices[45] = 41784 >>>>>>>>>> [0]mat row indices[46] = 41785 >>>>>>>>>> [0]mat row 
indices[47] = 41786 >>>>>>>>>> [0]mat row indices[48] = 263 >>>>>>>>>> [0]mat row indices[49] = 264 >>>>>>>>>> [0]mat row indices[50] = 265 >>>>>>>>>> [0]mat row indices[51] = 24321 >>>>>>>>>> [0]mat row indices[52] = 24322 >>>>>>>>>> [0]mat row indices[53] = 24323 >>>>>>>>>> [0]mat row indices[54] = 5 >>>>>>>>>> [0]mat row indices[55] = 6 >>>>>>>>>> [0]mat row indices[56] = 7 >>>>>>>>>> [0]mat row indices[57] = 1632 >>>>>>>>>> [0]mat row indices[58] = 1633 >>>>>>>>>> [0]mat row indices[59] = 1634 >>>>>>>>>> [0] 1.29801 0.0998428 -0.275225 1.18171 -0.0093323 0.055045 >>>>>>>>>> 1.18146 0.00525527 -0.11009 -0.588378 0.264666 -0.0275225 -2.39586 >>>>>>>>>> -0.210511 0.22018 -0.621071 0.0500786 0.137613 -0.180869 -0.0974804 >>>>>>>>>> 0.0344031 -0.0302673 -0.09 0. -0.145175 -0.00383346 -0.00688063 0.300442 >>>>>>>>>> -0.00868618 -0.0275225 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. >>>>>>>>>> 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>>> [0] 0.0998428 0.590663 -0.0320009 0.0270594 0.360043 0.0297282 >>>>>>>>>> -0.0965489 0.270936 -0.0652389 0.32647 -0.171351 -0.00845384 -0.206902 >>>>>>>>>> -0.657189 0.0279137 0.0500786 -0.197561 -0.0160508 -0.12748 -0.138996 >>>>>>>>>> 0.0408591 -0.06 -0.105935 0.0192308 0.00161757 -0.0361182 0.0042968 >>>>>>>>>> -0.0141372 0.0855084 -0.000284088 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. >>>>>>>>>> [0] -0.275225 -0.0320009 0.527521 -0.055045 0.0285918 0.234796 >>>>>>>>>> -0.165135 -0.0658071 0.322754 0.0275225 -0.0207062 -0.114921 0.33027 >>>>>>>>>> 0.0418706 -0.678455 0.137613 -0.0160508 -0.235826 0.0344031 0.0312437 >>>>>>>>>> -0.0845583 0. 0.0288462 -0.0302673 0.00688063 0.00443884 -0.0268103 >>>>>>>>>> -0.0412838 -0.000426133 0.0857668 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 >>>>>>>>>> [0] 1.18171 0.0270594 -0.055045 1.29651 -0.0821157 0.275225 >>>>>>>>>> 1.1468 -0.0675282 0.11009 -0.637271 0.141058 -0.137613 -2.37741 -0.0649437 >>>>>>>>>> -0.22018 -0.588378 0.24647 0.0275225 -0.140937 -0.00838243 0.00688063 >>>>>>>>>> -0.0175533 -0.09 0. -0.16373 -0.0747355 -0.0344031 0.300255 -0.026882 >>>>>>>>>> 0.0275225 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 >>>>>>>>>> 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>>>>> 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>>> [0] -0.0093323 0.360043 0.0285918 -0.0821157 0.585404 -0.0263191 >>>>>>>>>> -0.205724 0.149643 0.0652389 0.141058 -0.254263 0.0452109 0.011448 >>>>>>>>>> -0.592598 -0.0279137 0.344666 -0.171351 -0.0207062 0.00616654 -0.0212853 >>>>>>>>>> -0.0115868 -0.06 -0.0614365 -0.0192308 -0.104736 -0.0790071 -0.0335691 >>>>>>>>>> -0.041431 0.084851 0.000284088 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 
>>>>>>>>>> [0] 0.055045 0.0297282 0.234796 0.275225 -0.0263191 0.526019 >>>>>>>>>> 0.165135 0.0658071 0.288099 -0.137613 0.0452109 -0.252027 -0.33027 >>>>>>>>>> -0.0418706 -0.660001 -0.0275225 -0.00845384 -0.114921 -0.00688063 >>>>>>>>>> -0.0117288 -0.0225723 0. -0.0288462 -0.0175533 -0.0344031 -0.0239537 >>>>>>>>>> -0.0674185 0.0412838 0.000426133 0.085579 0. 0. 4.17288e-11 0. 0. >>>>>>>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>>> 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> -1.56483e-11 >>>>>>>>>> [0] 1.18146 -0.0965489 -0.165135 1.1468 -0.205724 0.165135 >>>>>>>>>> 3.70665 0.626591 3.1198e-14 -2.37741 0.0332522 -0.275225 -2.44501 -0.417727 >>>>>>>>>> 4.66207e-17 -2.39586 -0.148706 0.275225 0.283843 0.0476669 0.0137613 >>>>>>>>>> 0.00972927 0.06 0. 0.288268 0.0567649 -0.0137613 0.601523 0.0444318 >>>>>>>>>> -1.2387e-17 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>>> [0] 0.00525527 0.270936 -0.0658071 -0.0675282 0.149643 0.0658071 >>>>>>>>>> 0.626591 1.29259 -0.02916 -0.0867478 -0.592598 -0.0413024 -0.417727 >>>>>>>>>> -0.829208 6.46318e-18 -0.268706 -0.657189 0.0413024 0.03402 0.0715157 >>>>>>>>>> 0.0179272 0.04 0.0340524 1.77708e-18 0.0704117 0.0870061 0.0112328 >>>>>>>>>> 0.0644318 0.17325 -1.41666e-19 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>>>>>>> [0] -0.11009 -0.0652389 0.322754 0.11009 0.0652389 0.288099 >>>>>>>>>> 3.12405e-14 -0.02916 1.21876 -0.275225 -0.0284819 -0.660001 9.50032e-17 >>>>>>>>>> 9.55728e-18 -0.727604 0.275225 0.0284819 -0.678455 0.055045 0.0279687 >>>>>>>>>> 0.0250605 0. 1.71863e-18 0.00972927 -0.055045 0.00119132 0.0294863 >>>>>>>>>> -1.47451e-17 -1.90582e-19 0.172172 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. >>>>>>>>>> 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 >>>>>>>>>> [0] -0.588378 0.32647 0.0275225 -0.637271 0.141058 -0.137613 >>>>>>>>>> -2.37741 -0.0867478 -0.275225 3.68138 0.0265907 3.13395e-14 1.1468 >>>>>>>>>> -0.107528 0.11009 1.18171 0.00886356 -3.13222e-14 -1.06248 -0.175069 >>>>>>>>>> 0.158254 0.00657075 -0.03 0. 0.152747 -0.00526446 0.0344031 -1.50368 >>>>>>>>>> -0.0983732 0.0825675 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> [0] 0.264666 -0.171351 -0.0207062 0.141058 -0.254263 0.0452109 >>>>>>>>>> 0.0332522 -0.592598 -0.0284819 0.0265907 1.20415 -0.0932626 -0.165724 >>>>>>>>>> 0.149643 0.0524184 0.00886356 0.360043 0.0419805 -0.131422 -0.326529 >>>>>>>>>> 0.0132913 -0.02 0.0229976 -0.00641026 -0.0152645 0.0405681 -0.00489246 >>>>>>>>>> -0.14202 -0.432665 0.000852265 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. >>>>>>>>>> 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 
>>>>>>>>>> [0] -0.0275225 -0.00845384 -0.114921 -0.137613 0.0452109 >>>>>>>>>> -0.252027 -0.275225 -0.0413024 -0.660001 3.13785e-14 -0.0932626 1.19349 >>>>>>>>>> 0.165135 0.0786276 0.288099 -3.12866e-14 0.0163395 0.234796 0.116971 >>>>>>>>>> 0.0128652 -0.322147 0. -0.00961538 0.00657075 0.0344031 -0.00168733 >>>>>>>>>> 0.0564359 0.123851 0.0012784 -0.430298 0. 0. 4.17288e-11 0. 0. 2.08644e-11 >>>>>>>>>> 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. >>>>>>>>>> 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>>>>> [0] -2.39586 -0.206902 0.33027 -2.37741 0.011448 -0.33027 >>>>>>>>>> -2.44501 -0.417727 6.38053e-17 1.1468 -0.165724 0.165135 4.88671 0.435454 >>>>>>>>>> 0. 1.18146 -0.0565489 -0.165135 0.307839 0.0658628 -0.0412838 -0.00807774 >>>>>>>>>> 0.18 0. 0.303413 0.038569 0.0412838 -0.599871 0.115568 0. 4.17288e-11 0. 0. >>>>>>>>>> 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>>>>> 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>>> -1.04322e-11 0. 0. >>>>>>>>>> [0] -0.210511 -0.657189 0.0418706 -0.0649437 -0.592598 -0.0418706 >>>>>>>>>> -0.417727 -0.829208 6.30468e-18 -0.107528 0.149643 0.0786276 0.435454 >>>>>>>>>> 1.64686 0. -0.0347447 0.270936 -0.0786276 0.0613138 0.111396 -0.0100415 >>>>>>>>>> 0.12 -0.0282721 0. 0.043118 0.0959058 0.0100415 0.175568 -0.167469 0. 0. >>>>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>>> 8.34577e-11 0. 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. >>>>>>>>>> [0] 0.22018 0.0279137 -0.678455 -0.22018 -0.0279137 -0.660001 >>>>>>>>>> 4.70408e-17 7.53383e-18 -0.727604 0.11009 0.0524184 0.288099 0. 0. 1.4519 >>>>>>>>>> -0.11009 -0.0524184 0.322754 -0.0275225 -0.00669434 0.0931634 0. 0. >>>>>>>>>> -0.00807774 0.0275225 0.00669434 0.0887375 0. 0. -0.17052 0. 0. 4.17288e-11 >>>>>>>>>> 0. 0. 4.17288e-11 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. >>>>>>>>>> 0. 4.17288e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. >>>>>>>>>> 0. -1.04322e-11 >>>>>>>>>> [0] -0.621071 0.0500786 0.137613 -0.588378 0.344666 -0.0275225 >>>>>>>>>> -2.39586 -0.268706 0.275225 1.18171 0.00886356 -3.12954e-14 1.18146 >>>>>>>>>> -0.0347447 -0.11009 3.64748 0.0265907 3.12693e-14 0.152935 0.0174804 >>>>>>>>>> -0.0344031 0.00233276 -0.03 0. -1.0575 -0.0704425 -0.158254 -1.50311 >>>>>>>>>> -0.0437857 -0.0825675 2.08644e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> [0] 0.0500786 -0.197561 -0.0160508 0.24647 -0.171351 -0.00845384 >>>>>>>>>> -0.148706 -0.657189 0.0284819 0.00886356 0.360043 0.0163395 -0.0565489 >>>>>>>>>> 0.270936 -0.0524184 0.0265907 1.08549 0.0349425 0.00748035 0.0412255 >>>>>>>>>> -0.00239755 -0.02 0.00816465 0.00641026 -0.0540894 -0.309066 -0.00600133 >>>>>>>>>> -0.0601388 -0.430693 -0.000852265 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. >>>>>>>>>> [0] 0.137613 -0.0160508 -0.235826 0.0275225 -0.0207062 -0.114921 >>>>>>>>>> 0.275225 0.0413024 -0.678455 -3.13299e-14 0.0419805 0.234796 -0.165135 >>>>>>>>>> -0.0786276 0.322754 3.12753e-14 0.0349425 1.15959 -0.0344031 -0.00560268 >>>>>>>>>> 0.0566238 0. 
0.00961538 0.00233276 -0.116971 -0.00557519 -0.317157 >>>>>>>>>> -0.123851 -0.0012784 -0.429734 0. 0. 2.08644e-11 0. 0. 4.17288e-11 0. 0. >>>>>>>>>> 4.17288e-11 0. 0. 4.17288e-11 0. 0. 4.17288e-11 0. 0. 8.34577e-11 0. 0. >>>>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 >>>>>>>>>> [0] -0.180869 -0.12748 0.0344031 -0.140937 0.00616654 -0.00688063 >>>>>>>>>> 0.283843 0.03402 0.055045 -1.06248 -0.131422 0.116971 0.307839 0.0613138 >>>>>>>>>> -0.0275225 0.152935 0.00748035 -0.0344031 0.479756 0.112441 -0.103209 >>>>>>>>>> 0.00698363 0.03 0. -0.14792 -0.0238335 -0.00688063 0.300855 0.0313138 >>>>>>>>>> -0.0275225 -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. 1.56483e-11 0. 0. >>>>>>>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>>>>>>> [0] -0.0974804 -0.138996 0.0312437 -0.00838243 -0.0212853 >>>>>>>>>> -0.0117288 0.0476669 0.0715157 0.0279687 -0.175069 -0.326529 0.0128652 >>>>>>>>>> 0.0658628 0.111396 -0.00669434 0.0174804 0.0412255 -0.00560268 0.112441 >>>>>>>>>> 0.197005 -0.0360388 0.02 0.0244427 -0.00641026 -0.0283824 -0.045728 >>>>>>>>>> -0.00531859 0.0458628 0.0869535 -0.000284088 0. -1.04322e-11 0. 0. >>>>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>>> -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>>>>>>> 2.60805e-12 0. >>>>>>>>>> [0] 0.0344031 0.0408591 -0.0845583 0.00688063 -0.0115868 >>>>>>>>>> -0.0225723 0.0137613 0.0179272 0.0250605 0.158254 0.0132913 -0.322147 >>>>>>>>>> -0.0412838 -0.0100415 0.0931634 -0.0344031 -0.00239755 0.0566238 -0.103209 >>>>>>>>>> -0.0360388 0.190822 0. -0.00961538 0.00698363 0.00688063 -0.00197142 >>>>>>>>>> -0.029556 -0.0412838 -0.000426133 0.0861797 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>>> -1.56483e-11 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>>>>>>> 2.60805e-12 >>>>>>>>>> [0] -0.0302673 -0.06 0. -0.0175533 -0.06 0. 0.00972927 0.04 0. >>>>>>>>>> 0.00657075 -0.02 0. -0.00807774 0.12 0. 0.00233276 -0.02 0. 0.00698363 0.02 >>>>>>>>>> 0. 0.0279492 0. 0. 0.00274564 0.02 0. -0.000412882 -0.04 0. -1.04322e-11 0. >>>>>>>>>> 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. >>>>>>>>>> 0. -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>>>>>>> 2.60805e-12 0. 0. >>>>>>>>>> [0] -0.09 -0.105935 0.0288462 -0.09 -0.0614365 -0.0288462 0.06 >>>>>>>>>> 0.0340524 3.0201e-18 -0.03 0.0229976 -0.00961538 0.18 -0.0282721 0. -0.03 >>>>>>>>>> 0.00816465 0.00961538 0.03 0.0244427 -0.00961538 0. 0.097822 0. 0.03 >>>>>>>>>> 0.00960973 0.00961538 -0.06 -0.00144509 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>>>>>>> 2.60805e-12 0. >>>>>>>>>> [0] 0. 0.0192308 -0.0302673 0. -0.0192308 -0.0175533 0. >>>>>>>>>> 1.8315e-18 0.00972927 0. -0.00641026 0.00657075 0. 0. -0.00807774 0. >>>>>>>>>> 0.00641026 0.00233276 0. -0.00641026 0.00698363 0. 0. 0.0279492 0. >>>>>>>>>> 0.00641026 0.00274564 0. 0. -0.000412882 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> -1.56483e-11 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. 
>>>>>>>>>> 2.60805e-12 >>>>>>>>>> [0] -0.145175 0.00161757 0.00688063 -0.16373 -0.104736 -0.0344031 >>>>>>>>>> 0.288268 0.0704117 -0.055045 0.152747 -0.0152645 0.0344031 0.303413 >>>>>>>>>> 0.043118 0.0275225 -1.0575 -0.0540894 -0.116971 -0.14792 -0.0283824 >>>>>>>>>> 0.00688063 0.00274564 0.03 0. 0.466478 0.0442066 0.103209 0.300667 0.013118 >>>>>>>>>> 0.0275225 -1.56483e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. >>>>>>>>>> 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. 0. >>>>>>>>>> [0] -0.00383346 -0.0361182 0.00443884 -0.0747355 -0.0790071 >>>>>>>>>> -0.0239537 0.0567649 0.0870061 0.00119132 -0.00526446 0.0405681 -0.00168733 >>>>>>>>>> 0.038569 0.0959058 0.00669434 -0.0704425 -0.309066 -0.00557519 -0.0238335 >>>>>>>>>> -0.045728 -0.00197142 0.02 0.00960973 0.00641026 0.0442066 0.150534 >>>>>>>>>> 0.0141688 0.018569 0.0862961 0.000284088 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>>>>> 0. 0. -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>>>>> 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 0. >>>>>>>>>> [0] -0.00688063 0.0042968 -0.0268103 -0.0344031 -0.0335691 >>>>>>>>>> -0.0674185 -0.0137613 0.0112328 0.0294863 0.0344031 -0.00489246 0.0564359 >>>>>>>>>> 0.0412838 0.0100415 0.0887375 -0.158254 -0.00600133 -0.317157 -0.00688063 >>>>>>>>>> -0.00531859 -0.029556 0. 0.00961538 0.00274564 0.103209 0.0141688 0.177545 >>>>>>>>>> 0.0412838 0.000426133 0.0859919 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> -1.04322e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. 0. 2.60805e-12 >>>>>>>>>> [0] 0.300442 -0.0141372 -0.0412838 0.300255 -0.041431 0.0412838 >>>>>>>>>> 0.601523 0.0644318 -1.72388e-17 -1.50368 -0.14202 0.123851 -0.599871 >>>>>>>>>> 0.175568 0. -1.50311 -0.0601388 -0.123851 0.300855 0.0458628 -0.0412838 >>>>>>>>>> -0.000412882 -0.06 0. 0.300667 0.018569 0.0412838 1.80333 0.0132953 0. >>>>>>>>>> -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 0. 0. >>>>>>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 0. 0. >>>>>>>>>> 2.60805e-12 0. 0. 1.56483e-11 0. 0. >>>>>>>>>> [0] -0.00868618 0.0855084 -0.000426133 -0.026882 0.084851 >>>>>>>>>> 0.000426133 0.0444318 0.17325 -1.17738e-19 -0.0983732 -0.432665 0.0012784 >>>>>>>>>> 0.115568 -0.167469 0. -0.0437857 -0.430693 -0.0012784 0.0313138 0.0869535 >>>>>>>>>> -0.000426133 -0.04 -0.00144509 0. 0.013118 0.0862961 0.000426133 0.0132953 >>>>>>>>>> 0.515413 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. >>>>>>>>>> -1.04322e-11 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. >>>>>>>>>> 2.60805e-12 0. 0. 2.60805e-12 0. 0. 1.56483e-11 0. >>>>>>>>>> [0] -0.0275225 -0.000284088 0.0857668 0.0275225 0.000284088 >>>>>>>>>> 0.085579 -1.41488e-17 -8.91502e-20 0.172172 0.0825675 0.000852265 -0.430298 >>>>>>>>>> 0. 0. -0.17052 -0.0825675 -0.000852265 -0.429734 -0.0275225 -0.000284088 >>>>>>>>>> 0.0861797 0. 0. -0.000412882 0.0275225 0.000284088 0.0859919 0. 0. 0.515276 >>>>>>>>>> 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.56483e-11 0. 0. -1.04322e-11 >>>>>>>>>> 0. 0. -1.04322e-11 0. 0. -1.04322e-11 0. 0. 2.60805e-12 0. 0. 2.60805e-12 >>>>>>>>>> 0. 0. 2.60805e-12 0. 0. 1.56483e-11 >>>>>>>>>> [0] -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. 
>>>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. 0. >>>>>>>>>> [0] 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. >>>>>>>>>> [0] 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. >>>>>>>>>> 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. 1.32894e-06 0. >>>>>>>>>> 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 5.31578e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 1.32894e-09 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -9.96708e-10 >>>>>>>>>> [0] -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. >>>>>>>>>> -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. 0. >>>>>>>>>> [0] 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. >>>>>>>>>> -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. >>>>>>>>>> [0] 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. >>>>>>>>>> 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. >>>>>>>>>> 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -9.96708e-10 >>>>>>>>>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. >>>>>>>>>> -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. 0. >>>>>>>>>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. >>>>>>>>>> -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. >>>>>>>>>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. >>>>>>>>>> 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. >>>>>>>>>> 0. 1.99342e-06 0. 0. 
1.32894e-06 0. 0. 1.99342e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. 1.32894e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -9.96708e-10 >>>>>>>>>> [0] -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>>>> -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. 0. >>>>>>>>>> [0] 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>>>> -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. >>>>>>>>>> [0] 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. >>>>>>>>>> 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. 1.32894e-06 0. >>>>>>>>>> 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -6.64472e-10 >>>>>>>>>> [0] -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. >>>>>>>>>> -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. 0. >>>>>>>>>> [0] 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. 0. >>>>>>>>>> -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. >>>>>>>>>> [0] 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -2.65789e-06 0. >>>>>>>>>> 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. -5.31578e-06 0. 0. 1.99342e-06 0. >>>>>>>>>> 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. 1.32894e-09 0. 0. 2.65789e-09 0. 0. 5.31578e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -6.64472e-10 >>>>>>>>>> [0] -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. >>>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. 0. >>>>>>>>>> [0] 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. >>>>>>>>>> -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. 0. >>>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 
1.32894e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. >>>>>>>>>> [0] 0. 0. -2.65789e-06 0. 0. -5.31578e-06 0. 0. -5.31578e-06 0. >>>>>>>>>> 0. -5.31578e-06 0. 0. -5.31578e-06 0. 0. -1.06316e-05 0. 0. 1.99342e-06 0. >>>>>>>>>> 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-09 0. 0. >>>>>>>>>> 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. 2.65789e-09 0. 0. >>>>>>>>>> 5.31578e-09 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -6.64472e-10 >>>>>>>>>> [0] 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. >>>>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>>> 1.66118e-10 0. 0. >>>>>>>>>> [0] 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. >>>>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>>> 1.66118e-10 0. >>>>>>>>>> [0] 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>>> 1.32894e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. -1.99342e-06 0. 0. >>>>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>>> 1.66118e-10 >>>>>>>>>> [0] 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>>> -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>>> 1.66118e-10 0. 0. >>>>>>>>>> [0] 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>>> -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>>> 1.66118e-10 0. >>>>>>>>>> [0] 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>>> 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.99342e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>>> -1.99342e-06 0. 0. -3.32236e-07 0. 0. -3.32236e-07 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>>> 1.66118e-10 >>>>>>>>>> [0] 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>>> -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. 
>>>>>>>>>> -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. >>>>>>>>>> 1.66118e-10 0. 0. >>>>>>>>>> [0] 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>>> -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. >>>>>>>>>> 1.66118e-10 0. >>>>>>>>>> [0] 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. >>>>>>>>>> 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>>> -3.32236e-07 0. 0. -1.99342e-06 0. 0. -3.32236e-07 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. 0. -6.64472e-10 0. 0. -9.96708e-10 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 9.96708e-10 0. 0. >>>>>>>>>> 1.66118e-10 >>>>>>>>>> [0] 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>>> 9.96708e-10 0. 0. >>>>>>>>>> [0] 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. >>>>>>>>>> 9.96708e-10 0. >>>>>>>>>> [0] 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. 1.99342e-06 0. 0. >>>>>>>>>> 1.32894e-06 0. 0. 1.32894e-06 0. 0. 1.32894e-06 0. 0. -3.32236e-07 0. 0. >>>>>>>>>> -3.32236e-07 0. 0. -3.32236e-07 0. 0. -1.99342e-06 0. 0. -9.96708e-10 0. 0. >>>>>>>>>> -9.96708e-10 0. 0. -9.96708e-10 0. 0. -6.64472e-10 0. 0. -6.64472e-10 0. 0. >>>>>>>>>> -6.64472e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 1.66118e-10 0. 0. 
>>>>>>>>>> 9.96708e-10 >>>>>>>>>> [0]PETSC ERROR: #3 DMPlexMatSetClosure() line 5480 in >>>>>>>>>> /home/hartig/petsc/src/dm/impls/plex/plex.c >>>>>>>>>> [0]PETSC ERROR: #4 DMPlexComputeJacobian_Internal() line 2301 in >>>>>>>>>> /home/hartig/petsc/src/snes/utils/dmplexsnes.c >>>>>>>>>> [0]PETSC ERROR: #5 DMPlexTSComputeIJacobianFEM() line 233 in >>>>>>>>>> /home/hartig/petsc/src/ts/utils/dmplexts.c >>>>>>>>>> [0]PETSC ERROR: #6 TSComputeIJacobian_DMLocal() line 131 in >>>>>>>>>> /home/hartig/petsc/src/ts/utils/dmlocalts.c >>>>>>>>>> [0]PETSC ERROR: #7 TSComputeIJacobian() line 882 in >>>>>>>>>> /home/hartig/petsc/src/ts/interface/ts.c >>>>>>>>>> [0]PETSC ERROR: #8 SNESTSFormJacobian_Theta() line 515 in >>>>>>>>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>>>>>>> [0]PETSC ERROR: #9 SNESTSFormJacobian() line 5044 in >>>>>>>>>> /home/hartig/petsc/src/ts/interface/ts.c >>>>>>>>>> [0]PETSC ERROR: #10 SNESComputeJacobian() line 2276 in >>>>>>>>>> /home/hartig/petsc/src/snes/interface/snes.c >>>>>>>>>> [0]PETSC ERROR: #11 SNESSolve_NEWTONLS() line 222 in >>>>>>>>>> /home/hartig/petsc/src/snes/impls/ls/ls.c >>>>>>>>>> [0]PETSC ERROR: #12 SNESSolve() line 3967 in >>>>>>>>>> /home/hartig/petsc/src/snes/interface/snes.c >>>>>>>>>> [0]PETSC ERROR: #13 TS_SNESSolve() line 171 in >>>>>>>>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>>>>>>> [0]PETSC ERROR: #14 TSStep_Theta() line 211 in >>>>>>>>>> /home/hartig/petsc/src/ts/impls/implicit/theta/theta.c >>>>>>>>>> [0]PETSC ERROR: #15 TSStep() line 3809 in >>>>>>>>>> /home/hartig/petsc/src/ts/interface/ts.c >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> What most experimenters take for granted before they begin their >>>>>>> experiments is infinitely more interesting than any results to which their >>>>>>> experiments lead. >>>>>>> -- Norbert Wiener >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rajeev.das at cnl.ca Fri Mar 17 16:16:12 2017 From: rajeev.das at cnl.ca (Das, Rajeev) Date: Fri, 17 Mar 2017 21:16:12 +0000 Subject: [petsc-users] Doubts - Petsc involving mumps in a GPU environment. 
Message-ID: <68ba70d169ca43d38939b8ac06e46e1c@nls404.corp.cnl.ca> UNRESTRICTED / ILLIMITÉE Hi, I configured PETSc using the following flags --download-mpich=yes --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --with-shared-libraries=0 --with-debugging=0 --download-mumps=yes --download-scalapack=yes --download-blacs --download-parmetis=yes --download-metis=yes --download-fblaslapack=yes --with-c2html=0 --with-cuda=1 --with-cusp=1 --with-thrust=1 COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 -march=native -mtune=native The matrix that I am using to run a program is of type MATAIJ. The command-line options provided are -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps. When I run nvidia-smi, it indicates that the program is using a GPU. My question is: with the above configuration and options, is PETSc really running the direct solver on the GPU, or is it just making copies, and that is why I am seeing the GPU usage? In addition, I would also like to know, if the direct solver is not using a GPU in the above case, how do I use a direct solver on a GPU (via command-line options, if possible)? Regards, Rajeev Das. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Mar 17 16:39:13 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 17 Mar 2017 16:39:13 -0500 Subject: [petsc-users] Doubts - Petsc involving mumps in a GPU environment. In-Reply-To: <68ba70d169ca43d38939b8ac06e46e1c@nls404.corp.cnl.ca> References: <68ba70d169ca43d38939b8ac06e46e1c@nls404.corp.cnl.ca> Message-ID: <7D320452-BB4E-4783-AC6B-CA390BBA4364@mcs.anl.gov> I did a quick google search and could find no indication that MUMPS can run on GPUs. However, SuperLU_Dist does run on GPUs; you can install it with the additional options --download-superlu_dist --download-superlu_dist-gpu --with-cuda --with-openmp --download-parmetis=yes --download-metis=yes Note that we have no experience with using SuperLU_Dist with GPUs and OpenMP, so you'll need to direct questions about performance .... directly to Sherry Li. Barry BTW: most systems have BLAS and LAPACK installed, so you should not use --download-fblaslapack=yes > On Mar 17, 2017, at 4:16 PM, Das, Rajeev wrote: > > UNRESTRICTED / ILLIMITÉE > > Hi, > > I configured PETSc using the following flags > > --download-mpich=yes --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --with-shared-libraries=0 --with-debugging=0 --download-mumps=yes --download-scalapack=yes --download-blacs --download-parmetis=yes --download-metis=yes --download-fblaslapack=yes --with-c2html=0 --with-cuda=1 --with-cusp=1 --with-thrust=1 COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 -march=native -mtune=native > > The matrix that I am using to run a program is of type MATAIJ. The command-line options provided are -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps. When I run nvidia-smi, it indicates that the program is using a GPU. > > My question is: with the above configuration and options, is PETSc really running the direct solver on the GPU, or is it just making copies, and that is why I am seeing the GPU usage? > > In addition, I would also like to know, if the direct solver is not using a GPU in the above case, how do I use a direct solver on a GPU (via command-line options, if possible)? > > Regards, > > Rajeev Das.
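(For reference: assuming the configure above with --download-superlu_dist and --download-superlu_dist-gpu succeeds, a minimal sketch of the runtime options to switch the factorization from MUMPS to SuperLU_Dist would be

    -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist

that is, the same options already in use, with only the solver package name changed. Running with -ksp_view in addition will report which factorization package was actually used for the solve; whether the factorization itself runs on the GPU then depends on the SuperLU_Dist build, as noted above.)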
From andreas at ices.utexas.edu Sat Mar 18 00:05:08 2017 From: andreas at ices.utexas.edu (Andreas Mang) Date: Sat, 18 Mar 2017 00:05:08 -0500 Subject: [petsc-users] single precision vs double: strange behavior Message-ID: <27D2E1D7-9404-47D0-8381-FFAA09D3BB8E@ices.utexas.edu> Hey guys: I was trying to run my code with single precision which resulted in strange behavior. The code essentially already breaks within the test case creation (computing some sinusoidal functions). I discovered that. e.g., norms and max and min values do not make sense. I created a simple test example that demonstrates strange behavior on my machine. I am trying to find the min and max values of a PETSc vector with numbers ranging from 0 to 15. I removed all non-essential compiler flags and dependencies. If I compile PETSc with double precision I get 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 min 0 at 0 max 15 at 15 If I compile with single precision I get 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 min 15 at 8 max 14 at 7 The difference is that I add "--with-precision=single? to the configure command of PETSc. I use intel/17.0 with mpich2/3.1.4. The code can be found below. I also provide the configure options and the command line for compiling the code. Thanks /Andreas mpicxx -O3 -I./include -isystem/h1/andreas/code/develop/cold/external/libs/petsc_single/build/include -isystem/h1/andreas/code/develop/cold/external/libs/petsc_single/build/cxx_opt/include -c apps/checkcoldreg.cpp -o obj/checkcoldreg.o Configure Options: --with-cc=mpicc --CFLAGS= --with-cxx=mpicxx --CXXFLAGS= --download-f2cblaslapack --with-debugging=1 --with-64-bit-indices --with-shared-libraries=0 --with-x=0 --with-fc=0 --with-precision=single #include #include "petsc.h" int main(int argc, char **argv) { PetscErrorCode ierr; PetscScalar maxval, minval; PetscInt posmin, posmax; Vec x; PetscScalar* p_x = NULL; ierr = PetscInitialize(0, reinterpret_cast(NULL), reinterpret_cast(NULL), reinterpret_cast(NULL)); CHKERRQ(ierr); PetscInt n = 16; ierr = VecCreate(PETSC_COMM_WORLD, &x); CHKERRQ(ierr); ierr = VecSetSizes(x, n, n); CHKERRQ(ierr); ierr = VecSetFromOptions(x); CHKERRQ(ierr); ierr = VecGetArray(x, &p_x); CHKERRQ(ierr); for (PetscInt i = 0; i < n; ++i) { p_x[i] = static_cast(i); std::cout << p_x[i] << " "; } ierr = VecRestoreArray(x, &p_x); CHKERRQ(ierr); std::cout << std::endl; ierr = VecMin(x, &posmin, &minval); CHKERRQ(ierr); ierr = VecMax(x, &posmax, &maxval); CHKERRQ(ierr); std::cout << "min " << minval << " at " << posmin << std::endl; std::cout << "max " << maxval << " at " << posmax << std::endl; ierr = VecDestroy(&x); CHKERRQ(ierr); ierr = PetscFinalize(); CHKERRQ(ierr); return 0; } From cpraveen at gmail.com Sat Mar 18 06:43:09 2017 From: cpraveen at gmail.com (Praveen C) Date: Sat, 18 Mar 2017 17:13:09 +0530 Subject: [petsc-users] Fortran application context, passing a module In-Reply-To: References: Message-ID: Thanks Barry. I have to pass three derived types. I now put my 3 types into another type and pass that as the application context. Best praveen On Fri, Mar 17, 2017 at 10:45 AM, Barry Smith wrote: > > It looks to me that you are trying to pass a Fortran 90 derived type as > one entry in an array of addresses as an application context (perhaps from > some PETSc example). This will not work as written because Fortran doesn't > know that PetscFortranAddr is an array of addresses. > > The conventional way to do this is to pass a derived type directly as > the context. 
So you would just have > > type(mgrid) mygrid > > then when you call SNESSetComputeFunction() or whatever routine you > are calling that requires a context > > SNESSetComputeFunction(snes,v,computefunction,mygrid,ierr) > > and inside your computefunction the ctx argument is declared as > type(grid) mygrid > > See, for example, src/snes/examples/tutorials/ex5f90.F90 > > > Barry > > > type(mgrid) > > > > On Mar 17, 2017, at 12:02 AM, Praveen C wrote: > > > > Dear all > > > > I want to pass my own module as an application context in fortran. So I > am trying something like this > > > > type(mgrid),target :: g > > PetscFortranAddr :: ctx(6) > > ctx(1) => g > > > > But this gives an error > > Error: Non-POINTER in pointer association context (pointer assignment) > at (1) > > > > Could you tell me how I can make this work ? > > > > Thanks > > praveen > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cpraveen at gmail.com Sat Mar 18 08:21:20 2017 From: cpraveen at gmail.com (Praveen C) Date: Sat, 18 Mar 2017 18:51:20 +0530 Subject: [petsc-users] Segmentation fault due to TSDestroy Message-ID: Dear all I get a segmentation fault when I call TSDestroy. Without TSDestroy the code runs fine. I have included portion of my code below. subroutine runts(ctx) use userdata use comdata use mtsdata implicit none #include type(tsdata) :: ctx ! Local variables integer,parameter :: h = 100 ! File id for history file TS :: ts Vec :: u PetscErrorCode :: ierr external :: RHSFunction, Monitor call VecDuplicate(ctx%p%v_res, u, ierr); CHKERRQ(ierr) ! Copy initial condition into u call VecCopy(ctx%p%v_u, u, ierr); CHKERRQ(ierr) call TSCreate(PETSC_COMM_WORLD, ts, ierr); CHKERRQ(ierr) call TSSetProblemType(ts, TS_NONLINEAR, ierr); CHKERRQ(ierr) call TSSetRHSFunction(ts, PETSC_NULL_OBJECT, RHSFunction, ctx, ierr); CHKERRQ(ierr) call TSSetInitialTimeStep(ts, 0.0, dtg, ierr); CHKERRQ(ierr) call TSSetType(ts, TSRK, ierr); CHKERRQ(ierr); call TSSetDuration(ts, itmax, tfinal, ierr); CHKERRQ(ierr); call TSSetExactFinalTime(ts, TS_EXACTFINALTIME_MATCHSTEP, ierr); CHKERRQ(ierr); call TSMonitorSet(ts, Monitor, ctx, PETSC_NULL_OBJECT, ierr); CHKERRQ(ierr) call TSSetSolution(ts, u, ierr); CHKERRQ(ierr) call TSSetFromOptions(ts, ierr); CHKERRQ(ierr) call TSSetUp(ts, ierr); CHKERRQ(ierr) call TSSolve(ts, u, ierr); CHKERRQ(ierr) call VecCopy(u, ctx%p%v_u, ierr); CHKERRQ(ierr) call VecDestroy(u, ierr); CHKERRQ(ierr) call TSDestroy(ts, ierr); CHKERRQ(ierr) end subroutine runts Thanks praveen -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sat Mar 18 09:45:43 2017 From: jed at jedbrown.org (Jed Brown) Date: Sat, 18 Mar 2017 08:45:43 -0600 Subject: [petsc-users] single precision vs double: strange behavior In-Reply-To: <27D2E1D7-9404-47D0-8381-FFAA09D3BB8E@ices.utexas.edu> References: <27D2E1D7-9404-47D0-8381-FFAA09D3BB8E@ices.utexas.edu> Message-ID: <87bmsyiqy0.fsf@jedbrown.org> I can't reproduce this result. It looks like something that could happen by mixing up the headers used to compile with the library used to link/execute. Andreas Mang writes: > Hey guys: > > I was trying to run my code with single precision which resulted in strange behavior. The code essentially already breaks within the test case creation (computing some sinusoidal functions). I discovered that. e.g., norms and max and min values do not make sense. I created a simple test example that demonstrates strange behavior on my machine. 
I am trying to find the min and max values of a PETSc vector with numbers ranging from 0 to 15. I removed all non-essential compiler flags and dependencies. > > If I compile PETSc with double precision I get > > 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 > min 0 at 0 > max 15 at 15 > > If I compile with single precision I get > > 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 > min 15 at 8 > max 14 at 7 > > The difference is that I add "--with-precision=single? to the configure command of PETSc. I use intel/17.0 with mpich2/3.1.4. The code can be found below. I also provide the configure options and the command line for compiling the code. > > Thanks /Andreas > > > mpicxx -O3 -I./include -isystem/h1/andreas/code/develop/cold/external/libs/petsc_single/build/include -isystem/h1/andreas/code/develop/cold/external/libs/petsc_single/build/cxx_opt/include -c apps/checkcoldreg.cpp -o obj/checkcoldreg.o > > Configure Options: --with-cc=mpicc --CFLAGS= --with-cxx=mpicxx --CXXFLAGS= --download-f2cblaslapack --with-debugging=1 --with-64-bit-indices --with-shared-libraries=0 --with-x=0 --with-fc=0 --with-precision=single > > #include > #include "petsc.h" > > int main(int argc, char **argv) { > PetscErrorCode ierr; > PetscScalar maxval, minval; > PetscInt posmin, posmax; > Vec x; PetscScalar* p_x = NULL; > > ierr = PetscInitialize(0, reinterpret_cast(NULL), > reinterpret_cast(NULL), > reinterpret_cast(NULL)); CHKERRQ(ierr); > PetscInt n = 16; > > ierr = VecCreate(PETSC_COMM_WORLD, &x); CHKERRQ(ierr); > ierr = VecSetSizes(x, n, n); CHKERRQ(ierr); > ierr = VecSetFromOptions(x); CHKERRQ(ierr); > > ierr = VecGetArray(x, &p_x); CHKERRQ(ierr); > for (PetscInt i = 0; i < n; ++i) { > p_x[i] = static_cast(i); > std::cout << p_x[i] << " "; > } > ierr = VecRestoreArray(x, &p_x); CHKERRQ(ierr); > > std::cout << std::endl; > > ierr = VecMin(x, &posmin, &minval); CHKERRQ(ierr); > ierr = VecMax(x, &posmax, &maxval); CHKERRQ(ierr); > > std::cout << "min " << minval << " at " << posmin << std::endl; > std::cout << "max " << maxval << " at " << posmax << std::endl; > > ierr = VecDestroy(&x); CHKERRQ(ierr); > ierr = PetscFinalize(); CHKERRQ(ierr); > return 0; > } -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From andreas at ices.utexas.edu Sat Mar 18 10:23:30 2017 From: andreas at ices.utexas.edu (Andreas Mang) Date: Sat, 18 Mar 2017 10:23:30 -0500 Subject: [petsc-users] single precision vs double: strange behavior In-Reply-To: <87bmsyiqy0.fsf@jedbrown.org> References: <27D2E1D7-9404-47D0-8381-FFAA09D3BB8E@ices.utexas.edu> <87bmsyiqy0.fsf@jedbrown.org> Message-ID: <1B470217-CED9-4E4D-BF77-0A6560AA2AFA@ices.utexas.edu> Jed, you are correct. I?m an idiot. I linked against the wrong library. Next time I?ll sleep before I send you an email. Sorry for wasting your time. Have a great weekend. /Andreas > On Mar 18, 2017, at 9:45 AM, Jed Brown wrote: > > I can't reproduce this result. It looks like something that could > happen by mixing up the headers used to compile with the library used to > link/execute. > > Andreas Mang writes: > >> Hey guys: >> >> I was trying to run my code with single precision which resulted in strange behavior. The code essentially already breaks within the test case creation (computing some sinusoidal functions). I discovered that. e.g., norms and max and min values do not make sense. I created a simple test example that demonstrates strange behavior on my machine. 
I am trying to find the min and max values of a PETSc vector with numbers ranging from 0 to 15. I removed all non-essential compiler flags and dependencies. >> >> If I compile PETSc with double precision I get >> >> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 >> min 0 at 0 >> max 15 at 15 >> >> If I compile with single precision I get >> >> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 >> min 15 at 8 >> max 14 at 7 >> >> The difference is that I add "--with-precision=single? to the configure command of PETSc. I use intel/17.0 with mpich2/3.1.4. The code can be found below. I also provide the configure options and the command line for compiling the code. >> >> Thanks /Andreas >> >> >> mpicxx -O3 -I./include -isystem/h1/andreas/code/develop/cold/external/libs/petsc_single/build/include -isystem/h1/andreas/code/develop/cold/external/libs/petsc_single/build/cxx_opt/include -c apps/checkcoldreg.cpp -o obj/checkcoldreg.o >> >> Configure Options: --with-cc=mpicc --CFLAGS= --with-cxx=mpicxx --CXXFLAGS= --download-f2cblaslapack --with-debugging=1 --with-64-bit-indices --with-shared-libraries=0 --with-x=0 --with-fc=0 --with-precision=single >> >> #include >> #include "petsc.h" >> >> int main(int argc, char **argv) { >> PetscErrorCode ierr; >> PetscScalar maxval, minval; >> PetscInt posmin, posmax; >> Vec x; PetscScalar* p_x = NULL; >> >> ierr = PetscInitialize(0, reinterpret_cast(NULL), >> reinterpret_cast(NULL), >> reinterpret_cast(NULL)); CHKERRQ(ierr); >> PetscInt n = 16; >> >> ierr = VecCreate(PETSC_COMM_WORLD, &x); CHKERRQ(ierr); >> ierr = VecSetSizes(x, n, n); CHKERRQ(ierr); >> ierr = VecSetFromOptions(x); CHKERRQ(ierr); >> >> ierr = VecGetArray(x, &p_x); CHKERRQ(ierr); >> for (PetscInt i = 0; i < n; ++i) { >> p_x[i] = static_cast(i); >> std::cout << p_x[i] << " "; >> } >> ierr = VecRestoreArray(x, &p_x); CHKERRQ(ierr); >> >> std::cout << std::endl; >> >> ierr = VecMin(x, &posmin, &minval); CHKERRQ(ierr); >> ierr = VecMax(x, &posmax, &maxval); CHKERRQ(ierr); >> >> std::cout << "min " << minval << " at " << posmin << std::endl; >> std::cout << "max " << maxval << " at " << posmax << std::endl; >> >> ierr = VecDestroy(&x); CHKERRQ(ierr); >> ierr = PetscFinalize(); CHKERRQ(ierr); >> return 0; >> } From balay at mcs.anl.gov Sat Mar 18 11:00:19 2017 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 18 Mar 2017 11:00:19 -0500 Subject: [petsc-users] Segmentation fault due to TSDestroy In-Reply-To: References: Message-ID: Perhaps there is some memory corruption - you can try runnng the code with valgrind. Satish On Sat, 18 Mar 2017, Praveen C wrote: > Dear all > > I get a segmentation fault when I call TSDestroy. Without TSDestroy the > code runs fine. I have included portion of my code below. > > subroutine runts(ctx) > use userdata > use comdata > use mtsdata > implicit none > #include > type(tsdata) :: ctx > ! Local variables > integer,parameter :: h = 100 ! File id for history file > TS :: ts > Vec :: u > PetscErrorCode :: ierr > external :: RHSFunction, Monitor > > call VecDuplicate(ctx%p%v_res, u, ierr); CHKERRQ(ierr) > > ! 
Copy initial condition into u > call VecCopy(ctx%p%v_u, u, ierr); CHKERRQ(ierr) > > call TSCreate(PETSC_COMM_WORLD, ts, ierr); CHKERRQ(ierr) > call TSSetProblemType(ts, TS_NONLINEAR, ierr); CHKERRQ(ierr) > call TSSetRHSFunction(ts, PETSC_NULL_OBJECT, RHSFunction, ctx, ierr); > CHKERRQ(ierr) > call TSSetInitialTimeStep(ts, 0.0, dtg, ierr); CHKERRQ(ierr) > call TSSetType(ts, TSRK, ierr); CHKERRQ(ierr); > call TSSetDuration(ts, itmax, tfinal, ierr); CHKERRQ(ierr); > call TSSetExactFinalTime(ts, TS_EXACTFINALTIME_MATCHSTEP, ierr); > CHKERRQ(ierr); > call TSMonitorSet(ts, Monitor, ctx, PETSC_NULL_OBJECT, ierr); > CHKERRQ(ierr) > call TSSetSolution(ts, u, ierr); CHKERRQ(ierr) > call TSSetFromOptions(ts, ierr); CHKERRQ(ierr) > call TSSetUp(ts, ierr); CHKERRQ(ierr) > > call TSSolve(ts, u, ierr); CHKERRQ(ierr) > > call VecCopy(u, ctx%p%v_u, ierr); CHKERRQ(ierr) > call VecDestroy(u, ierr); CHKERRQ(ierr) > call TSDestroy(ts, ierr); CHKERRQ(ierr) > > end subroutine runts > > Thanks > praveen > From bsmith at mcs.anl.gov Sat Mar 18 12:26:50 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 18 Mar 2017 12:26:50 -0500 Subject: [petsc-users] single precision vs double: strange behavior In-Reply-To: <1B470217-CED9-4E4D-BF77-0A6560AA2AFA@ices.utexas.edu> References: <27D2E1D7-9404-47D0-8381-FFAA09D3BB8E@ices.utexas.edu> <87bmsyiqy0.fsf@jedbrown.org> <1B470217-CED9-4E4D-BF77-0A6560AA2AFA@ices.utexas.edu> Message-ID: Note that PetscScalar maxval, minval; should really be PetscReal maxval, minval; for it to work if compiled for complex. > On Mar 18, 2017, at 10:23 AM, Andreas Mang wrote: > > Jed, you are correct. I?m an idiot. I linked against the wrong library. Next time I?ll sleep before I send you an email. Sorry for wasting your time. Have a great weekend. /Andreas > >> On Mar 18, 2017, at 9:45 AM, Jed Brown wrote: >> >> I can't reproduce this result. It looks like something that could >> happen by mixing up the headers used to compile with the library used to >> link/execute. >> >> Andreas Mang writes: >> >>> Hey guys: >>> >>> I was trying to run my code with single precision which resulted in strange behavior. The code essentially already breaks within the test case creation (computing some sinusoidal functions). I discovered that. e.g., norms and max and min values do not make sense. I created a simple test example that demonstrates strange behavior on my machine. I am trying to find the min and max values of a PETSc vector with numbers ranging from 0 to 15. I removed all non-essential compiler flags and dependencies. >>> >>> If I compile PETSc with double precision I get >>> >>> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 >>> min 0 at 0 >>> max 15 at 15 >>> >>> If I compile with single precision I get >>> >>> 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 >>> min 15 at 8 >>> max 14 at 7 >>> >>> The difference is that I add "--with-precision=single? to the configure command of PETSc. I use intel/17.0 with mpich2/3.1.4. The code can be found below. I also provide the configure options and the command line for compiling the code. 
>>> >>> Thanks /Andreas >>> >>> >>> mpicxx -O3 -I./include -isystem/h1/andreas/code/develop/cold/external/libs/petsc_single/build/include -isystem/h1/andreas/code/develop/cold/external/libs/petsc_single/build/cxx_opt/include -c apps/checkcoldreg.cpp -o obj/checkcoldreg.o >>> >>> Configure Options: --with-cc=mpicc --CFLAGS= --with-cxx=mpicxx --CXXFLAGS= --download-f2cblaslapack --with-debugging=1 --with-64-bit-indices --with-shared-libraries=0 --with-x=0 --with-fc=0 --with-precision=single >>> >>> #include >>> #include "petsc.h" >>> >>> int main(int argc, char **argv) { >>> PetscErrorCode ierr; >>> PetscScalar maxval, minval; >>> PetscInt posmin, posmax; >>> Vec x; PetscScalar* p_x = NULL; >>> >>> ierr = PetscInitialize(0, reinterpret_cast(NULL), >>> reinterpret_cast(NULL), >>> reinterpret_cast(NULL)); CHKERRQ(ierr); >>> PetscInt n = 16; >>> >>> ierr = VecCreate(PETSC_COMM_WORLD, &x); CHKERRQ(ierr); >>> ierr = VecSetSizes(x, n, n); CHKERRQ(ierr); >>> ierr = VecSetFromOptions(x); CHKERRQ(ierr); >>> >>> ierr = VecGetArray(x, &p_x); CHKERRQ(ierr); >>> for (PetscInt i = 0; i < n; ++i) { >>> p_x[i] = static_cast(i); >>> std::cout << p_x[i] << " "; >>> } >>> ierr = VecRestoreArray(x, &p_x); CHKERRQ(ierr); >>> >>> std::cout << std::endl; >>> >>> ierr = VecMin(x, &posmin, &minval); CHKERRQ(ierr); >>> ierr = VecMax(x, &posmax, &maxval); CHKERRQ(ierr); >>> >>> std::cout << "min " << minval << " at " << posmin << std::endl; >>> std::cout << "max " << maxval << " at " << posmax << std::endl; >>> >>> ierr = VecDestroy(&x); CHKERRQ(ierr); >>> ierr = PetscFinalize(); CHKERRQ(ierr); >>> return 0; >>> } > From patrick.sanan at gmail.com Sat Mar 18 16:09:29 2017 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Sat, 18 Mar 2017 22:09:29 +0100 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only In-Reply-To: <6475B307-B5BC-493D-BF9F-FDA19C2BB932@mcs.anl.gov> References: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> <6475B307-B5BC-493D-BF9F-FDA19C2BB932@mcs.anl.gov> Message-ID: There is indeed a way to interpret preconditioned CG as left preconditioning: Consider Ax = b where A is spd. You can write down the equivalent left-preconditioned system M^{-1}Ax = M^{-1}b but of course this doesn't work directly with CG because M^{-1}A is no longer spd in general. However, if M^{-1} is spd, so is M and you can use it to define an inner product. In this inner product, M^{-1}A *is* spd. Here assume real scalars for simplicity: <M^{-1}Ax, y>_M = <M M^{-1}Ax, y> = <Ax, y> = <x, Ay> = <x, M M^{-1}Ay> = <x, M^{-1}Ay>_M So, you can run CG as usual, using this inner product everywhere. You don't in fact ever need to apply M, just M^{-1}, and you arrive at standard preconditioned CG (See these notes [1]). This is what's in PETSc as left-preconditioned CG, the only option. This is a bit different from the way left preconditioning works for (say) GMRES. With GMRES you could assemble A' = M^{-1}A and b' = M^{-1}b and pass A' and b' to black-box (unpreconditioned) GMRES. With CG, you fundamentally need to provide both M^{-1} and A (and fortunately PETSc lets you do exactly this). This then invites the natural question about applying the same logic to the right-preconditioned system. I saw this as an exercise in the same course notes as above, from J. M. Melenk at TU Wien [1]. Consider the right-preconditioned system AM^{-1}y = b, x = M^{-1}y. You can note that AM^{-1} is spd in the M^{-1} inner product, and again write down CG for the preconditioned system, now using M^{-1} inner products everywhere.
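To make the substitution described in the next paragraph concrete, here is a brief sketch of the algebra (a sketch only, assuming real scalars and exact arithmetic, with the inner product <u, v>_{M^{-1}} = u^T M^{-1} v). CG applied to AM^{-1}y = b in this inner product reads

\[
\alpha_k = \frac{\langle r_k, r_k \rangle_{M^{-1}}}{\langle A M^{-1} p_k, p_k \rangle_{M^{-1}}}, \qquad
y_{k+1} = y_k + \alpha_k p_k, \qquad
r_{k+1} = r_k - \alpha_k A M^{-1} p_k,
\]
\[
\beta_k = \frac{\langle r_{k+1}, r_{k+1} \rangle_{M^{-1}}}{\langle r_k, r_k \rangle_{M^{-1}}}, \qquad
p_{k+1} = r_{k+1} + \beta_k p_k .
\]

Substituting x_k = M^{-1} y_k, q_k = M^{-1} p_k, z_k = M^{-1} r_k, and noting that r_k = b - A M^{-1} y_k = b - A x_k is the unpreconditioned residual, this becomes

\[
\alpha_k = \frac{r_k^T z_k}{q_k^T A q_k}, \qquad
x_{k+1} = x_k + \alpha_k q_k, \qquad
r_{k+1} = r_k - \alpha_k A q_k, \qquad
z_{k+1} = M^{-1} r_{k+1},
\]
\[
\beta_k = \frac{r_{k+1}^T z_{k+1}}{r_k^T z_k}, \qquad
q_{k+1} = z_{k+1} + \beta_k q_k,
\]

which is exactly standard preconditioned CG in the variables (x_k, r_k, z_k, q_k).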
You can then (as in this photo which I'm too lazy to transcribe [2]), introduce transformed variables z = M^{-1}r, q = M^{-1}p, x = M^{-1}y and work things out and arrive at ... standard preconditioned CG! Thus the conclusion is that there really is only one way to precondition CG. You can derive it as 1. left-preconditioned CG in the M norm, or 2. right-preconditioned CG in the M^{-1} norm, or 3. symmetrically-preconditioned CG (with M^{-1/2} which you don't need to explicitly form), or 4. standard CG with a new inner product. The last option is, I think, the most satisfying and fundamental. The recent M?lek and Strako? book [3] makes a beautiful case as to why, and indeed Hestenes himself first wrote about this in 1956, four years after he and Stiefel popularized CG and before the term "preconditioning" even existed [4]. So, in terms of concrete things to do with PETSc, I stand by my feeling that calling CG preconditioning "left preconditioning" is somehow misleading. Rather, I'd suggest calling the relevant value of PCSide something like PC_SPD . This i) reminds the user of the fact that the preconditioner has to be spd, ii) removes the asymmetry of calling it a left preconditioner when you could equally-well call it a right preconditioner, and iii) highlights the fact that preconditioning is different with CG than with methods like GMRES. PC_LEFT could be deprecated for CG before removing support. If this seems reasonable, I can make a PR with these changes and updates to the man page (and should also think about what other KSPs this applies to - probably KSPMINRES and others). [1] http://www.asc.tuwien.ac.at/~melenk/teach/iterative/ [2] http://patricksanan.com/rpcg.jpg [3] http://bookstore.siam.org/sl01/ [4] https://scholar.google.ch/scholar?q=M.+Hestenes+1956.+The+Conjugate+Gradient+Method+for+Solving+Linear+Systems On Wed, Mar 8, 2017 at 6:57 PM, Barry Smith wrote: > > Patrick, > > Thanks, this is interesting, we should try to add material to the KSPCG page to capture some of the subtleties that I admit after 28 years I still don't really understand. The paper A TAXONOMY FOR CONJUGATE GRADIENT METHODS (attached) has some interesting discussion (particularly page 1548 "Therefore, only left preconditioning need be considered: Right preconditioning may be effected by incorporating it into the left preconditioner and inner product." I don't know exactly what this means in practical terms in respect to code. (In PETSc KSP we don't explicitly talk about, or use a separate "inner product" in the Krylov methods, we only have the concept of operator and preconditioner operator.) > > Remembering vaguely, perhaps incorrectly, from a real long time ago "left preconditioning with a spd M is just the unpreconditioned cg in the M inner product" while "right preconditioning with M is unpreconditioned cg in a M^-1 inner product". If this is correct then it implies (I think) that right preconditioned CG would produce a different set of iterates than left preconditioning and hence is "missing" in terms of completeness from PETSc KSP. > > Barry > > > > >> On Mar 8, 2017, at 7:37 PM, Patrick Sanan wrote: >> >> On Wed, Mar 8, 2017 at 11:12 AM, Barry Smith wrote: >>> >>>> On Mar 8, 2017, at 10:47 AM, Kong, Fande wrote: >>>> >>>> Thanks Barry, >>>> >>>> We are using "KSPSetPCSide(ksp, pcside)" in the code. I just tried "-ksp_pc_side right", and petsc did not error out. >>>> >>>> I like to understand why CG does not work with right preconditioning? 
Mathematically, the right preconditioning does not make sense? >>> >>> No, mathematically it makes sense to do it on the right. It is just that the PETSc code was never written to support it on the right. One reason is that CG is interesting that you can run with the true residual or the preconditioned residual with left preconditioning, hence less incentive to ever bother writing it to support right preconditioning. For completeness we should support right as well as symmetric. >> >> For standard CG preconditioning, which PETSc calls left >> preconditioning, you use a s.p.d. preconditioner M to define an inner >> product in the algorithm, and end up finding iterates x_k in K_k(MA; >> Mb). That isn't quite the same as left-preconditioned GMRES, where you >> apply standard GMRES to the equivalent system MAx=Mb, and also end up >> finding iterates in K_k(MA,Mb). This wouldn't work for CG because MA >> isn't s.p.d. in general, even if M and A are. >> >> Standard CG preconditioning is often motivated as a clever way to do >> symmetric preconditioning, E^TAEy = E^Tb, x=Ey, without ever needing E >> explicitly, using only M=EE^T . y_k lives in K_k(E^TAE,E^Tb) and thus >> x_k again lives in K_k(MA;Mb). >> >> Thus, it's not clear that there is an candidate for a >> right-preconditioned CG variant, as what PETSc calls "left" >> preconditioning doesn't arise in the same way that it does for other >> Krylov methods, namely using the standard algorithm on MAx=Mb. For >> GMRES you would get a right-preconditioned variant by looking at the >> transformed system AMy=b, x = My. This means that y_k lives in >> K_k(AM,b), so x lives in K_k(MA,Mb), as before. For CG, AM wouldn't be >> spd in general so this approach wouldn't make sense. >> >> Another way to look at the difference in "left" preconditioning >> between GMRES and CG is that >> >> - introducing left preconditioning for GMRES alters both the Krylov >> subspaces *and* the optimality condition: you go from minimizing || b >> - Ax_k ||_2 over K_k(A;b) to minimizing || M (b-Ax_k) ||_2 over >> K_k(MA;Mb). >> >> - introducing "left" preconditioning for CG alters *only* the Krylov >> subspaces: you always minimize || x - x_k ||_A , but change the space >> from K_k(A;b) to K_k(MA;Mb). >> >> Thus, my impression is that it's misleading to call standard CG >> preconditioning "left" preconditioning in PETSc - someone might think >> of GMRES and naturally ask why there is no right preconditioning. >> >> One might define a new entry in PCSide to be used with CG and friends. >> I can't think of any slam dunk suggestions yet, but something in the >> genre of PC_INNERPRODUCT, PC_METRIC, PC_CG, or PC_IMPLICITSYMMETRIC, >> perhaps. >> >> >>> >>> Barry >>> >>>> >>>> Fande, >>>> >>>> On Wed, Mar 8, 2017 at 9:33 AM, Barry Smith wrote: >>>> >>>> Please tell us how you got this output. >>>> >>>> PETSc CG doesn't even implement right preconditioning. If you ask for it it should error out. CG supports no norm computation with left preconditioning. >>>> >>>> Barry >>>> >>>>> On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: >>>>> >>>>> Hi All, >>>>> >>>>> The NONE norm type is supported only when CG is used with a right preconditioner. Any reason for this? 
>>>>> >>>>> >>>>> >>>>> 0 Nonlinear |R| = 1.732051e+00 >>>>> 0 Linear |R| = 0.000000e+00 >>>>> 1 Linear |R| = 0.000000e+00 >>>>> 2 Linear |R| = 0.000000e+00 >>>>> 3 Linear |R| = 0.000000e+00 >>>>> 4 Linear |R| = 0.000000e+00 >>>>> 5 Linear |R| = 0.000000e+00 >>>>> 6 Linear |R| = 0.000000e+00 >>>>> 1 Nonlinear |R| = 1.769225e-08 >>>>> 0 Linear |R| = 0.000000e+00 >>>>> 1 Linear |R| = 0.000000e+00 >>>>> 2 Linear |R| = 0.000000e+00 >>>>> 3 Linear |R| = 0.000000e+00 >>>>> 4 Linear |R| = 0.000000e+00 >>>>> 5 Linear |R| = 0.000000e+00 >>>>> 6 Linear |R| = 0.000000e+00 >>>>> 7 Linear |R| = 0.000000e+00 >>>>> 8 Linear |R| = 0.000000e+00 >>>>> 9 Linear |R| = 0.000000e+00 >>>>> 10 Linear |R| = 0.000000e+00 >>>>> 2 Nonlinear |R| = 0.000000e+00 >>>>> SNES Object: 1 MPI processes >>>>> type: newtonls >>>>> maximum iterations=50, maximum function evaluations=10000 >>>>> tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 >>>>> total number of linear solver iterations=18 >>>>> total number of function evaluations=23 >>>>> norm schedule ALWAYS >>>>> SNESLineSearch Object: 1 MPI processes >>>>> type: bt >>>>> interpolation: cubic >>>>> alpha=1.000000e-04 >>>>> maxstep=1.000000e+08, minlambda=1.000000e-12 >>>>> tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 >>>>> maximum iterations=40 >>>>> KSP Object: 1 MPI processes >>>>> type: cg >>>>> maximum iterations=10000, initial guess is zero >>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >>>>> right preconditioning >>>>> using NONE norm type for convergence test >>>>> PC Object: 1 MPI processes >>>>> type: hypre >>>>> HYPRE BoomerAMG preconditioning >>>>> HYPRE BoomerAMG: Cycle type V >>>>> HYPRE BoomerAMG: Maximum number of levels 25 >>>>> HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 >>>>> HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. >>>>> HYPRE BoomerAMG: Threshold for strong coupling 0.25 >>>>> HYPRE BoomerAMG: Interpolation truncation factor 0. >>>>> HYPRE BoomerAMG: Interpolation: max elements per row 0 >>>>> HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 >>>>> HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 >>>>> HYPRE BoomerAMG: Maximum row sums 0.9 >>>>> HYPRE BoomerAMG: Sweeps down 1 >>>>> HYPRE BoomerAMG: Sweeps up 1 >>>>> HYPRE BoomerAMG: Sweeps on coarse 1 >>>>> HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi >>>>> HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi >>>>> HYPRE BoomerAMG: Relax on coarse Gaussian-elimination >>>>> HYPRE BoomerAMG: Relax weight (all) 1. >>>>> HYPRE BoomerAMG: Outer relax weight (all) 1. >>>>> HYPRE BoomerAMG: Using CF-relaxation >>>>> HYPRE BoomerAMG: Not using more complex smoothers. 
>>>>> HYPRE BoomerAMG: Measure type local >>>>> HYPRE BoomerAMG: Coarsen type Falgout >>>>> HYPRE BoomerAMG: Interpolation type classical >>>>> linear system matrix followed by preconditioner matrix: >>>>> Mat Object: 1 MPI processes >>>>> type: mffd >>>>> rows=9, cols=9 >>>>> Matrix-free approximation: >>>>> err=1.49012e-08 (relative error in function evaluation) >>>>> Using wp compute h routine >>>>> Does not compute normU >>>>> Mat Object: () 1 MPI processes >>>>> type: seqaij >>>>> rows=9, cols=9 >>>>> total: nonzeros=49, allocated nonzeros=49 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> not using I-node routines >>>>> >>>>> Fande, > > On Thu, Mar 9, 2017 at 3:57 AM, Barry Smith wrote: > > Patrick, > > Thanks, this is interesting, we should try to add material to the KSPCG page to capture some of the subtleties that I admit after 28 years I still don't really understand. The paper A TAXONOMY FOR CONJUGATE GRADIENT METHODS (attached) has some interesting discussion (particularly page 1548 "Therefore, only left preconditioning need be considered: Right preconditioning may be effected by incorporating it into the left preconditioner and inner product." I don't know exactly what this means in practical terms in respect to code. (In PETSc KSP we don't explicitly talk about, or use a separate "inner product" in the Krylov methods, we only have the concept of operator and preconditioner operator.) > > Remembering vaguely, perhaps incorrectly, from a real long time ago "left preconditioning with a spd M is just the unpreconditioned cg in the M inner product" while "right preconditioning with M is unpreconditioned cg in a M^-1 inner product". If this is correct then it implies (I think) that right preconditioned CG would produce a different set of iterates than left preconditioning and hence is "missing" in terms of completeness from PETSc KSP. > > Barry > > > > >> On Mar 8, 2017, at 7:37 PM, Patrick Sanan wrote: >> >> On Wed, Mar 8, 2017 at 11:12 AM, Barry Smith wrote: >>> >>>> On Mar 8, 2017, at 10:47 AM, Kong, Fande wrote: >>>> >>>> Thanks Barry, >>>> >>>> We are using "KSPSetPCSide(ksp, pcside)" in the code. I just tried "-ksp_pc_side right", and petsc did not error out. >>>> >>>> I like to understand why CG does not work with right preconditioning? Mathematically, the right preconditioning does not make sense? >>> >>> No, mathematically it makes sense to do it on the right. It is just that the PETSc code was never written to support it on the right. One reason is that CG is interesting that you can run with the true residual or the preconditioned residual with left preconditioning, hence less incentive to ever bother writing it to support right preconditioning. For completeness we should support right as well as symmetric. >> >> For standard CG preconditioning, which PETSc calls left >> preconditioning, you use a s.p.d. preconditioner M to define an inner >> product in the algorithm, and end up finding iterates x_k in K_k(MA; >> Mb). That isn't quite the same as left-preconditioned GMRES, where you >> apply standard GMRES to the equivalent system MAx=Mb, and also end up >> finding iterates in K_k(MA,Mb). This wouldn't work for CG because MA >> isn't s.p.d. in general, even if M and A are. >> >> Standard CG preconditioning is often motivated as a clever way to do >> symmetric preconditioning, E^TAEy = E^Tb, x=Ey, without ever needing E >> explicitly, using only M=EE^T . y_k lives in K_k(E^TAE,E^Tb) and thus >> x_k again lives in K_k(MA;Mb). 
>> >> Thus, it's not clear that there is an candidate for a >> right-preconditioned CG variant, as what PETSc calls "left" >> preconditioning doesn't arise in the same way that it does for other >> Krylov methods, namely using the standard algorithm on MAx=Mb. For >> GMRES you would get a right-preconditioned variant by looking at the >> transformed system AMy=b, x = My. This means that y_k lives in >> K_k(AM,b), so x lives in K_k(MA,Mb), as before. For CG, AM wouldn't be >> spd in general so this approach wouldn't make sense. >> >> Another way to look at the difference in "left" preconditioning >> between GMRES and CG is that >> >> - introducing left preconditioning for GMRES alters both the Krylov >> subspaces *and* the optimality condition: you go from minimizing || b >> - Ax_k ||_2 over K_k(A;b) to minimizing || M (b-Ax_k) ||_2 over >> K_k(MA;Mb). >> >> - introducing "left" preconditioning for CG alters *only* the Krylov >> subspaces: you always minimize || x - x_k ||_A , but change the space >> from K_k(A;b) to K_k(MA;Mb). >> >> Thus, my impression is that it's misleading to call standard CG >> preconditioning "left" preconditioning in PETSc - someone might think >> of GMRES and naturally ask why there is no right preconditioning. >> >> One might define a new entry in PCSide to be used with CG and friends. >> I can't think of any slam dunk suggestions yet, but something in the >> genre of PC_INNERPRODUCT, PC_METRIC, PC_CG, or PC_IMPLICITSYMMETRIC, >> perhaps. >> >> >>> >>> Barry >>> >>>> >>>> Fande, >>>> >>>> On Wed, Mar 8, 2017 at 9:33 AM, Barry Smith wrote: >>>> >>>> Please tell us how you got this output. >>>> >>>> PETSc CG doesn't even implement right preconditioning. If you ask for it it should error out. CG supports no norm computation with left preconditioning. >>>> >>>> Barry >>>> >>>>> On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: >>>>> >>>>> Hi All, >>>>> >>>>> The NONE norm type is supported only when CG is used with a right preconditioner. Any reason for this? >>>>> >>>>> >>>>> >>>>> 0 Nonlinear |R| = 1.732051e+00 >>>>> 0 Linear |R| = 0.000000e+00 >>>>> 1 Linear |R| = 0.000000e+00 >>>>> 2 Linear |R| = 0.000000e+00 >>>>> 3 Linear |R| = 0.000000e+00 >>>>> 4 Linear |R| = 0.000000e+00 >>>>> 5 Linear |R| = 0.000000e+00 >>>>> 6 Linear |R| = 0.000000e+00 >>>>> 1 Nonlinear |R| = 1.769225e-08 >>>>> 0 Linear |R| = 0.000000e+00 >>>>> 1 Linear |R| = 0.000000e+00 >>>>> 2 Linear |R| = 0.000000e+00 >>>>> 3 Linear |R| = 0.000000e+00 >>>>> 4 Linear |R| = 0.000000e+00 >>>>> 5 Linear |R| = 0.000000e+00 >>>>> 6 Linear |R| = 0.000000e+00 >>>>> 7 Linear |R| = 0.000000e+00 >>>>> 8 Linear |R| = 0.000000e+00 >>>>> 9 Linear |R| = 0.000000e+00 >>>>> 10 Linear |R| = 0.000000e+00 >>>>> 2 Nonlinear |R| = 0.000000e+00 >>>>> SNES Object: 1 MPI processes >>>>> type: newtonls >>>>> maximum iterations=50, maximum function evaluations=10000 >>>>> tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 >>>>> total number of linear solver iterations=18 >>>>> total number of function evaluations=23 >>>>> norm schedule ALWAYS >>>>> SNESLineSearch Object: 1 MPI processes >>>>> type: bt >>>>> interpolation: cubic >>>>> alpha=1.000000e-04 >>>>> maxstep=1.000000e+08, minlambda=1.000000e-12 >>>>> tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 >>>>> maximum iterations=40 >>>>> KSP Object: 1 MPI processes >>>>> type: cg >>>>> maximum iterations=10000, initial guess is zero >>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
>>>>> right preconditioning >>>>> using NONE norm type for convergence test >>>>> PC Object: 1 MPI processes >>>>> type: hypre >>>>> HYPRE BoomerAMG preconditioning >>>>> HYPRE BoomerAMG: Cycle type V >>>>> HYPRE BoomerAMG: Maximum number of levels 25 >>>>> HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 >>>>> HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. >>>>> HYPRE BoomerAMG: Threshold for strong coupling 0.25 >>>>> HYPRE BoomerAMG: Interpolation truncation factor 0. >>>>> HYPRE BoomerAMG: Interpolation: max elements per row 0 >>>>> HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 >>>>> HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 >>>>> HYPRE BoomerAMG: Maximum row sums 0.9 >>>>> HYPRE BoomerAMG: Sweeps down 1 >>>>> HYPRE BoomerAMG: Sweeps up 1 >>>>> HYPRE BoomerAMG: Sweeps on coarse 1 >>>>> HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi >>>>> HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi >>>>> HYPRE BoomerAMG: Relax on coarse Gaussian-elimination >>>>> HYPRE BoomerAMG: Relax weight (all) 1. >>>>> HYPRE BoomerAMG: Outer relax weight (all) 1. >>>>> HYPRE BoomerAMG: Using CF-relaxation >>>>> HYPRE BoomerAMG: Not using more complex smoothers. >>>>> HYPRE BoomerAMG: Measure type local >>>>> HYPRE BoomerAMG: Coarsen type Falgout >>>>> HYPRE BoomerAMG: Interpolation type classical >>>>> linear system matrix followed by preconditioner matrix: >>>>> Mat Object: 1 MPI processes >>>>> type: mffd >>>>> rows=9, cols=9 >>>>> Matrix-free approximation: >>>>> err=1.49012e-08 (relative error in function evaluation) >>>>> Using wp compute h routine >>>>> Does not compute normU >>>>> Mat Object: () 1 MPI processes >>>>> type: seqaij >>>>> rows=9, cols=9 >>>>> total: nonzeros=49, allocated nonzeros=49 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> not using I-node routines >>>>> >>>>> Fande, > > From bsmith at mcs.anl.gov Sat Mar 18 16:32:50 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 18 Mar 2017 16:32:50 -0500 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only In-Reply-To: References: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> <6475B307-B5BC-493D-BF9F-FDA19C2BB932@mcs.anl.gov> Message-ID: Thanks for working out the details. It would be worth putting a subset of this material with references in the PCCG manual page: including > precondition CG. You can derive it as > 1. left-preconditioned CG in the M norm, or > 2. right-preconditioned CG in the M^{-1} norm, or > 3. symmetrically-preconditioned CG (with M^{-1/2} which you don't need > to explicitly form), or > 4. standard CG with a new inner product. I don't want to introduce PC_SPD and remove PC_LEFT for CG. Here is my reasoning In the actual implementation (not the mathematical presentation) with all of the methods, the preconditioner B is applied before (right) or after (left) the operator is applied (and in the actual implementations the regular VecDot() is used for inner products). This defines clearly and simply the meaning of left and right preconditioning* (in PETSc language) and you can conclude (based on your analysis) that CG cannot support right preconditioning (by this definition). Barry * You can argue that the PETSc language definition is Mickey Mouse(t) but I think it serves the purpose of PETSc users better than the more abstract discussion where CG has both a inner product and a preconditioner as "free" parameters while other methods such as GMRES only have the preconditioner. 
I don't think anyone using KSP needs to even know about the deeper relationships so introducing them in the API is not productive. > On Mar 18, 2017, at 4:09 PM, Patrick Sanan wrote: > > There is indeed a way to interpret preconditioned CG as left preconditioning: > > Consider > > Ax = b > > where A is spd. > > You can write down the equivalent left-preconditioned system > > M^{-1}Ax = M^{-1}b > > but of course this doesn't work directly with CG because M^{-1}A is no > longer spd in general. > > However, if M^{-1} is spd, so is M and you can use it to define an > inner product. In this inner product, M{-1}A *is* spd. Here assume > real scalars for simplicity: > > _M = _M = = = = _M > > So, you can write use CG, using this inner product everywhere. You > don't in fact ever need to apply M, just M^{-1}, and you arrive at > standard preconditioned CG (See these notes [1]). > > This is what's in PETSc as left-preconditioned CG, the only option. > > This is a bit different from the way left preconditioning works for > (say) GMRES. With GMRES you could assemble A' = M^{-1}A and b' = > M^{-1}b and pass A' and b' to black-box (unpreconditioned) GMRES. With > CG, you fundamentally need to provide both M^{-1} and A (and > fortunately PETSc lets you do exactly this). > > This then invites the natural question about applying the same logic > to the right preconditioned system. I saw this as an exercise in the > same course notes as above, from J. M. Melenk at TU Wien [1] . > > Consider the right-preconditioned system > > AM^{-1}y = b, x = M^{-1}y . > > You can note that AM^-1 is spd in the M^{-1} inner product, and again > write down CG for the preconditioned system, now using M^{-1} inner > products everywhere. > > You can then (as in this photo which I'm too lazy to transcribe [2]), > introduce transformed variables z = M^{-1}r, q = M^{-1}p, x = M^{-1}y > and work things out and arrive at ... standard preconditioned CG! > > Thus the conclusion is that there really is only one way to > precondition CG. You can derive it as > 1. left-preconditioned CG in the M norm, or > 2. right-preconditioned CG in the M^{-1} norm, or > 3. symmetrically-preconditioned CG (with M^{-1/2} which you don't need > to explicitly form), or > 4. standard CG with a new inner product. > > The last option is, I think, the most satisfying and fundamental. The > recent M?lek and Strako? book [3] makes a beautiful case as to why, > and indeed Hestenes himself first wrote about this in 1956, four years > after he and Stiefel popularized CG and before the term > "preconditioning" even existed [4]. > > So, in terms of concrete things to do with PETSc, I stand by my > feeling that calling CG preconditioning "left preconditioning" is > somehow misleading. > > Rather, I'd suggest calling the relevant value of PCSide something > like PC_SPD . This > i) reminds the user of the fact that the preconditioner has to be spd, > ii) removes the asymmetry of calling it a left preconditioner when you > could equally-well call it a right preconditioner, and > iii) highlights the fact that preconditioning is different with CG > than with methods like GMRES. > > PC_LEFT could be deprecated for CG before removing support. > > If this seems reasonable, I can make a PR with these changes and > updates to the man page (and should also think about what other KSPs > this applies to - probably KSPMINRES and others). 
> > [1] http://www.asc.tuwien.ac.at/~melenk/teach/iterative/ > [2] http://patricksanan.com/rpcg.jpg > [3] http://bookstore.siam.org/sl01/ > [4] https://scholar.google.ch/scholar?q=M.+Hestenes+1956.+The+Conjugate+Gradient+Method+for+Solving+Linear+Systems > > > > On Wed, Mar 8, 2017 at 6:57 PM, Barry Smith wrote: >> >> Patrick, >> >> Thanks, this is interesting, we should try to add material to the KSPCG page to capture some of the subtleties that I admit after 28 years I still don't really understand. The paper A TAXONOMY FOR CONJUGATE GRADIENT METHODS (attached) has some interesting discussion (particularly page 1548 "Therefore, only left preconditioning need be considered: Right preconditioning may be effected by incorporating it into the left preconditioner and inner product." I don't know exactly what this means in practical terms in respect to code. (In PETSc KSP we don't explicitly talk about, or use a separate "inner product" in the Krylov methods, we only have the concept of operator and preconditioner operator.) >> >> Remembering vaguely, perhaps incorrectly, from a real long time ago "left preconditioning with a spd M is just the unpreconditioned cg in the M inner product" while "right preconditioning with M is unpreconditioned cg in a M^-1 inner product". If this is correct then it implies (I think) that right preconditioned CG would produce a different set of iterates than left preconditioning and hence is "missing" in terms of completeness from PETSc KSP. >> >> Barry >> >> >> >> >>> On Mar 8, 2017, at 7:37 PM, Patrick Sanan wrote: >>> >>> On Wed, Mar 8, 2017 at 11:12 AM, Barry Smith wrote: >>>> >>>>> On Mar 8, 2017, at 10:47 AM, Kong, Fande wrote: >>>>> >>>>> Thanks Barry, >>>>> >>>>> We are using "KSPSetPCSide(ksp, pcside)" in the code. I just tried "-ksp_pc_side right", and petsc did not error out. >>>>> >>>>> I like to understand why CG does not work with right preconditioning? Mathematically, the right preconditioning does not make sense? >>>> >>>> No, mathematically it makes sense to do it on the right. It is just that the PETSc code was never written to support it on the right. One reason is that CG is interesting that you can run with the true residual or the preconditioned residual with left preconditioning, hence less incentive to ever bother writing it to support right preconditioning. For completeness we should support right as well as symmetric. >>> >>> For standard CG preconditioning, which PETSc calls left >>> preconditioning, you use a s.p.d. preconditioner M to define an inner >>> product in the algorithm, and end up finding iterates x_k in K_k(MA; >>> Mb). That isn't quite the same as left-preconditioned GMRES, where you >>> apply standard GMRES to the equivalent system MAx=Mb, and also end up >>> finding iterates in K_k(MA,Mb). This wouldn't work for CG because MA >>> isn't s.p.d. in general, even if M and A are. >>> >>> Standard CG preconditioning is often motivated as a clever way to do >>> symmetric preconditioning, E^TAEy = E^Tb, x=Ey, without ever needing E >>> explicitly, using only M=EE^T . y_k lives in K_k(E^TAE,E^Tb) and thus >>> x_k again lives in K_k(MA;Mb). >>> >>> Thus, it's not clear that there is an candidate for a >>> right-preconditioned CG variant, as what PETSc calls "left" >>> preconditioning doesn't arise in the same way that it does for other >>> Krylov methods, namely using the standard algorithm on MAx=Mb. For >>> GMRES you would get a right-preconditioned variant by looking at the >>> transformed system AMy=b, x = My. 
This means that y_k lives in >>> K_k(AM,b), so x lives in K_k(MA,Mb), as before. For CG, AM wouldn't be >>> spd in general so this approach wouldn't make sense. >>> >>> Another way to look at the difference in "left" preconditioning >>> between GMRES and CG is that >>> >>> - introducing left preconditioning for GMRES alters both the Krylov >>> subspaces *and* the optimality condition: you go from minimizing || b >>> - Ax_k ||_2 over K_k(A;b) to minimizing || M (b-Ax_k) ||_2 over >>> K_k(MA;Mb). >>> >>> - introducing "left" preconditioning for CG alters *only* the Krylov >>> subspaces: you always minimize || x - x_k ||_A , but change the space >>> from K_k(A;b) to K_k(MA;Mb). >>> >>> Thus, my impression is that it's misleading to call standard CG >>> preconditioning "left" preconditioning in PETSc - someone might think >>> of GMRES and naturally ask why there is no right preconditioning. >>> >>> One might define a new entry in PCSide to be used with CG and friends. >>> I can't think of any slam dunk suggestions yet, but something in the >>> genre of PC_INNERPRODUCT, PC_METRIC, PC_CG, or PC_IMPLICITSYMMETRIC, >>> perhaps. >>> >>> >>>> >>>> Barry >>>> >>>>> >>>>> Fande, >>>>> >>>>> On Wed, Mar 8, 2017 at 9:33 AM, Barry Smith wrote: >>>>> >>>>> Please tell us how you got this output. >>>>> >>>>> PETSc CG doesn't even implement right preconditioning. If you ask for it it should error out. CG supports no norm computation with left preconditioning. >>>>> >>>>> Barry >>>>> >>>>>> On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: >>>>>> >>>>>> Hi All, >>>>>> >>>>>> The NONE norm type is supported only when CG is used with a right preconditioner. Any reason for this? >>>>>> >>>>>> >>>>>> >>>>>> 0 Nonlinear |R| = 1.732051e+00 >>>>>> 0 Linear |R| = 0.000000e+00 >>>>>> 1 Linear |R| = 0.000000e+00 >>>>>> 2 Linear |R| = 0.000000e+00 >>>>>> 3 Linear |R| = 0.000000e+00 >>>>>> 4 Linear |R| = 0.000000e+00 >>>>>> 5 Linear |R| = 0.000000e+00 >>>>>> 6 Linear |R| = 0.000000e+00 >>>>>> 1 Nonlinear |R| = 1.769225e-08 >>>>>> 0 Linear |R| = 0.000000e+00 >>>>>> 1 Linear |R| = 0.000000e+00 >>>>>> 2 Linear |R| = 0.000000e+00 >>>>>> 3 Linear |R| = 0.000000e+00 >>>>>> 4 Linear |R| = 0.000000e+00 >>>>>> 5 Linear |R| = 0.000000e+00 >>>>>> 6 Linear |R| = 0.000000e+00 >>>>>> 7 Linear |R| = 0.000000e+00 >>>>>> 8 Linear |R| = 0.000000e+00 >>>>>> 9 Linear |R| = 0.000000e+00 >>>>>> 10 Linear |R| = 0.000000e+00 >>>>>> 2 Nonlinear |R| = 0.000000e+00 >>>>>> SNES Object: 1 MPI processes >>>>>> type: newtonls >>>>>> maximum iterations=50, maximum function evaluations=10000 >>>>>> tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 >>>>>> total number of linear solver iterations=18 >>>>>> total number of function evaluations=23 >>>>>> norm schedule ALWAYS >>>>>> SNESLineSearch Object: 1 MPI processes >>>>>> type: bt >>>>>> interpolation: cubic >>>>>> alpha=1.000000e-04 >>>>>> maxstep=1.000000e+08, minlambda=1.000000e-12 >>>>>> tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 >>>>>> maximum iterations=40 >>>>>> KSP Object: 1 MPI processes >>>>>> type: cg >>>>>> maximum iterations=10000, initial guess is zero >>>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
>>>>>> right preconditioning >>>>>> using NONE norm type for convergence test >>>>>> PC Object: 1 MPI processes >>>>>> type: hypre >>>>>> HYPRE BoomerAMG preconditioning >>>>>> HYPRE BoomerAMG: Cycle type V >>>>>> HYPRE BoomerAMG: Maximum number of levels 25 >>>>>> HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 >>>>>> HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. >>>>>> HYPRE BoomerAMG: Threshold for strong coupling 0.25 >>>>>> HYPRE BoomerAMG: Interpolation truncation factor 0. >>>>>> HYPRE BoomerAMG: Interpolation: max elements per row 0 >>>>>> HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 >>>>>> HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 >>>>>> HYPRE BoomerAMG: Maximum row sums 0.9 >>>>>> HYPRE BoomerAMG: Sweeps down 1 >>>>>> HYPRE BoomerAMG: Sweeps up 1 >>>>>> HYPRE BoomerAMG: Sweeps on coarse 1 >>>>>> HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi >>>>>> HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi >>>>>> HYPRE BoomerAMG: Relax on coarse Gaussian-elimination >>>>>> HYPRE BoomerAMG: Relax weight (all) 1. >>>>>> HYPRE BoomerAMG: Outer relax weight (all) 1. >>>>>> HYPRE BoomerAMG: Using CF-relaxation >>>>>> HYPRE BoomerAMG: Not using more complex smoothers. >>>>>> HYPRE BoomerAMG: Measure type local >>>>>> HYPRE BoomerAMG: Coarsen type Falgout >>>>>> HYPRE BoomerAMG: Interpolation type classical >>>>>> linear system matrix followed by preconditioner matrix: >>>>>> Mat Object: 1 MPI processes >>>>>> type: mffd >>>>>> rows=9, cols=9 >>>>>> Matrix-free approximation: >>>>>> err=1.49012e-08 (relative error in function evaluation) >>>>>> Using wp compute h routine >>>>>> Does not compute normU >>>>>> Mat Object: () 1 MPI processes >>>>>> type: seqaij >>>>>> rows=9, cols=9 >>>>>> total: nonzeros=49, allocated nonzeros=49 >>>>>> total number of mallocs used during MatSetValues calls =0 >>>>>> not using I-node routines >>>>>> >>>>>> Fande, >> >> > > On Thu, Mar 9, 2017 at 3:57 AM, Barry Smith wrote: >> >> Patrick, >> >> Thanks, this is interesting, we should try to add material to the KSPCG page to capture some of the subtleties that I admit after 28 years I still don't really understand. The paper A TAXONOMY FOR CONJUGATE GRADIENT METHODS (attached) has some interesting discussion (particularly page 1548 "Therefore, only left preconditioning need be considered: Right preconditioning may be effected by incorporating it into the left preconditioner and inner product." I don't know exactly what this means in practical terms in respect to code. (In PETSc KSP we don't explicitly talk about, or use a separate "inner product" in the Krylov methods, we only have the concept of operator and preconditioner operator.) >> >> Remembering vaguely, perhaps incorrectly, from a real long time ago "left preconditioning with a spd M is just the unpreconditioned cg in the M inner product" while "right preconditioning with M is unpreconditioned cg in a M^-1 inner product". If this is correct then it implies (I think) that right preconditioned CG would produce a different set of iterates than left preconditioning and hence is "missing" in terms of completeness from PETSc KSP. >> >> Barry >> >> >> >> >>> On Mar 8, 2017, at 7:37 PM, Patrick Sanan wrote: >>> >>> On Wed, Mar 8, 2017 at 11:12 AM, Barry Smith wrote: >>>> >>>>> On Mar 8, 2017, at 10:47 AM, Kong, Fande wrote: >>>>> >>>>> Thanks Barry, >>>>> >>>>> We are using "KSPSetPCSide(ksp, pcside)" in the code. I just tried "-ksp_pc_side right", and petsc did not error out. 
>>>>> >>>>> I like to understand why CG does not work with right preconditioning? Mathematically, the right preconditioning does not make sense? >>>> >>>> No, mathematically it makes sense to do it on the right. It is just that the PETSc code was never written to support it on the right. One reason is that CG is interesting that you can run with the true residual or the preconditioned residual with left preconditioning, hence less incentive to ever bother writing it to support right preconditioning. For completeness we should support right as well as symmetric. >>> >>> For standard CG preconditioning, which PETSc calls left >>> preconditioning, you use a s.p.d. preconditioner M to define an inner >>> product in the algorithm, and end up finding iterates x_k in K_k(MA; >>> Mb). That isn't quite the same as left-preconditioned GMRES, where you >>> apply standard GMRES to the equivalent system MAx=Mb, and also end up >>> finding iterates in K_k(MA,Mb). This wouldn't work for CG because MA >>> isn't s.p.d. in general, even if M and A are. >>> >>> Standard CG preconditioning is often motivated as a clever way to do >>> symmetric preconditioning, E^TAEy = E^Tb, x=Ey, without ever needing E >>> explicitly, using only M=EE^T . y_k lives in K_k(E^TAE,E^Tb) and thus >>> x_k again lives in K_k(MA;Mb). >>> >>> Thus, it's not clear that there is an candidate for a >>> right-preconditioned CG variant, as what PETSc calls "left" >>> preconditioning doesn't arise in the same way that it does for other >>> Krylov methods, namely using the standard algorithm on MAx=Mb. For >>> GMRES you would get a right-preconditioned variant by looking at the >>> transformed system AMy=b, x = My. This means that y_k lives in >>> K_k(AM,b), so x lives in K_k(MA,Mb), as before. For CG, AM wouldn't be >>> spd in general so this approach wouldn't make sense. >>> >>> Another way to look at the difference in "left" preconditioning >>> between GMRES and CG is that >>> >>> - introducing left preconditioning for GMRES alters both the Krylov >>> subspaces *and* the optimality condition: you go from minimizing || b >>> - Ax_k ||_2 over K_k(A;b) to minimizing || M (b-Ax_k) ||_2 over >>> K_k(MA;Mb). >>> >>> - introducing "left" preconditioning for CG alters *only* the Krylov >>> subspaces: you always minimize || x - x_k ||_A , but change the space >>> from K_k(A;b) to K_k(MA;Mb). >>> >>> Thus, my impression is that it's misleading to call standard CG >>> preconditioning "left" preconditioning in PETSc - someone might think >>> of GMRES and naturally ask why there is no right preconditioning. >>> >>> One might define a new entry in PCSide to be used with CG and friends. >>> I can't think of any slam dunk suggestions yet, but something in the >>> genre of PC_INNERPRODUCT, PC_METRIC, PC_CG, or PC_IMPLICITSYMMETRIC, >>> perhaps. >>> >>> >>>> >>>> Barry >>>> >>>>> >>>>> Fande, >>>>> >>>>> On Wed, Mar 8, 2017 at 9:33 AM, Barry Smith wrote: >>>>> >>>>> Please tell us how you got this output. >>>>> >>>>> PETSc CG doesn't even implement right preconditioning. If you ask for it it should error out. CG supports no norm computation with left preconditioning. >>>>> >>>>> Barry >>>>> >>>>>> On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: >>>>>> >>>>>> Hi All, >>>>>> >>>>>> The NONE norm type is supported only when CG is used with a right preconditioner. Any reason for this? 
>>>>>> >>>>>> >>>>>> >>>>>> 0 Nonlinear |R| = 1.732051e+00 >>>>>> 0 Linear |R| = 0.000000e+00 >>>>>> 1 Linear |R| = 0.000000e+00 >>>>>> 2 Linear |R| = 0.000000e+00 >>>>>> 3 Linear |R| = 0.000000e+00 >>>>>> 4 Linear |R| = 0.000000e+00 >>>>>> 5 Linear |R| = 0.000000e+00 >>>>>> 6 Linear |R| = 0.000000e+00 >>>>>> 1 Nonlinear |R| = 1.769225e-08 >>>>>> 0 Linear |R| = 0.000000e+00 >>>>>> 1 Linear |R| = 0.000000e+00 >>>>>> 2 Linear |R| = 0.000000e+00 >>>>>> 3 Linear |R| = 0.000000e+00 >>>>>> 4 Linear |R| = 0.000000e+00 >>>>>> 5 Linear |R| = 0.000000e+00 >>>>>> 6 Linear |R| = 0.000000e+00 >>>>>> 7 Linear |R| = 0.000000e+00 >>>>>> 8 Linear |R| = 0.000000e+00 >>>>>> 9 Linear |R| = 0.000000e+00 >>>>>> 10 Linear |R| = 0.000000e+00 >>>>>> 2 Nonlinear |R| = 0.000000e+00 >>>>>> SNES Object: 1 MPI processes >>>>>> type: newtonls >>>>>> maximum iterations=50, maximum function evaluations=10000 >>>>>> tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 >>>>>> total number of linear solver iterations=18 >>>>>> total number of function evaluations=23 >>>>>> norm schedule ALWAYS >>>>>> SNESLineSearch Object: 1 MPI processes >>>>>> type: bt >>>>>> interpolation: cubic >>>>>> alpha=1.000000e-04 >>>>>> maxstep=1.000000e+08, minlambda=1.000000e-12 >>>>>> tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 >>>>>> maximum iterations=40 >>>>>> KSP Object: 1 MPI processes >>>>>> type: cg >>>>>> maximum iterations=10000, initial guess is zero >>>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >>>>>> right preconditioning >>>>>> using NONE norm type for convergence test >>>>>> PC Object: 1 MPI processes >>>>>> type: hypre >>>>>> HYPRE BoomerAMG preconditioning >>>>>> HYPRE BoomerAMG: Cycle type V >>>>>> HYPRE BoomerAMG: Maximum number of levels 25 >>>>>> HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 >>>>>> HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. >>>>>> HYPRE BoomerAMG: Threshold for strong coupling 0.25 >>>>>> HYPRE BoomerAMG: Interpolation truncation factor 0. >>>>>> HYPRE BoomerAMG: Interpolation: max elements per row 0 >>>>>> HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 >>>>>> HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 >>>>>> HYPRE BoomerAMG: Maximum row sums 0.9 >>>>>> HYPRE BoomerAMG: Sweeps down 1 >>>>>> HYPRE BoomerAMG: Sweeps up 1 >>>>>> HYPRE BoomerAMG: Sweeps on coarse 1 >>>>>> HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi >>>>>> HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi >>>>>> HYPRE BoomerAMG: Relax on coarse Gaussian-elimination >>>>>> HYPRE BoomerAMG: Relax weight (all) 1. >>>>>> HYPRE BoomerAMG: Outer relax weight (all) 1. >>>>>> HYPRE BoomerAMG: Using CF-relaxation >>>>>> HYPRE BoomerAMG: Not using more complex smoothers. 
>>>>>> HYPRE BoomerAMG: Measure type local >>>>>> HYPRE BoomerAMG: Coarsen type Falgout >>>>>> HYPRE BoomerAMG: Interpolation type classical >>>>>> linear system matrix followed by preconditioner matrix: >>>>>> Mat Object: 1 MPI processes >>>>>> type: mffd >>>>>> rows=9, cols=9 >>>>>> Matrix-free approximation: >>>>>> err=1.49012e-08 (relative error in function evaluation) >>>>>> Using wp compute h routine >>>>>> Does not compute normU >>>>>> Mat Object: () 1 MPI processes >>>>>> type: seqaij >>>>>> rows=9, cols=9 >>>>>> total: nonzeros=49, allocated nonzeros=49 >>>>>> total number of mallocs used during MatSetValues calls =0 >>>>>> not using I-node routines >>>>>> >>>>>> Fande, >> >> From patrick.sanan at gmail.com Sun Mar 19 05:08:00 2017 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Sun, 19 Mar 2017 11:08:00 +0100 Subject: [petsc-users] CG with right preconditioning supports NONE norm type only In-Reply-To: References: <2BD8F058-B5FA-4112-9447-9616966F3F8A@mcs.anl.gov> <6475B307-B5BC-493D-BF9F-FDA19C2BB932@mcs.anl.gov> Message-ID: On Sat, Mar 18, 2017 at 10:32 PM, Barry Smith wrote: > > Thanks for working out the details. It would be worth putting a subset of this material with references in the PCCG manual page: including > >> precondition CG. You can derive it as >> 1. left-preconditioned CG in the M norm, or >> 2. right-preconditioned CG in the M^{-1} norm, or >> 3. symmetrically-preconditioned CG (with M^{-1/2} which you don't need >> to explicitly form), or >> 4. standard CG with a new inner product. > > > I don't want to introduce PC_SPD and remove PC_LEFT for CG. Here is my reasoning > > In the actual implementation (not the mathematical presentation) with all of the methods, the preconditioner B is applied before (right) or after (left) the operator is applied (and in the actual implementations the regular VecDot() is used for inner products). This defines clearly and simply the meaning of left and right preconditioning* (in PETSc language) and you can conclude (based on your analysis) that CG cannot support right preconditioning (by this definition). > This makes sense, and the stakes aren't actually very high here, as there is ultimately only one way to do this! I'll make a PR which adds a brief note to the man page. > > Barry > > * You can argue that the PETSc language definition is Mickey Mouse(t) but I think it serves the purpose of PETSc users better than the more abstract discussion where CG has both a inner product and a preconditioner as "free" parameters while other methods such as GMRES only have the preconditioner. I don't think anyone using KSP needs to even know about the deeper relationships so introducing them in the API is not productive. > > >> On Mar 18, 2017, at 4:09 PM, Patrick Sanan wrote: >> >> There is indeed a way to interpret preconditioned CG as left preconditioning: >> >> Consider >> >> Ax = b >> >> where A is spd. >> >> You can write down the equivalent left-preconditioned system >> >> M^{-1}Ax = M^{-1}b >> >> but of course this doesn't work directly with CG because M^{-1}A is no >> longer spd in general. >> >> However, if M^{-1} is spd, so is M and you can use it to define an >> inner product. In this inner product, M{-1}A *is* spd. Here assume >> real scalars for simplicity: >> >> _M = _M = = = = _M >> >> So, you can write use CG, using this inner product everywhere. You >> don't in fact ever need to apply M, just M^{-1}, and you arrive at >> standard preconditioned CG (See these notes [1]). 
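For reference, the key identity can be written out as follows (a sketch, assuming real scalars and an s.p.d. M, with the M inner product (u,v)_M := u^T M v):

\[
(M^{-1}Ax,\,y)_M = (Ax,\,y) = (x,\,Ay) = (x,\,M^{-1}Ay)_M,
\qquad
(M^{-1}Ax,\,x)_M = (Ax,\,x) > 0 \ \ \text{for } x \neq 0,
\]

so M^{-1}A is self-adjoint and positive definite with respect to the M inner product, and running CG entirely in that inner product only ever requires applications of A and M^{-1}.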
>> >> This is what's in PETSc as left-preconditioned CG, the only option. >> >> This is a bit different from the way left preconditioning works for >> (say) GMRES. With GMRES you could assemble A' = M^{-1}A and b' = >> M^{-1}b and pass A' and b' to black-box (unpreconditioned) GMRES. With >> CG, you fundamentally need to provide both M^{-1} and A (and >> fortunately PETSc lets you do exactly this). >> >> This then invites the natural question about applying the same logic >> to the right preconditioned system. I saw this as an exercise in the >> same course notes as above, from J. M. Melenk at TU Wien [1] . >> >> Consider the right-preconditioned system >> >> AM^{-1}y = b, x = M^{-1}y . >> >> You can note that AM^-1 is spd in the M^{-1} inner product, and again >> write down CG for the preconditioned system, now using M^{-1} inner >> products everywhere. >> >> You can then (as in this photo which I'm too lazy to transcribe [2]), >> introduce transformed variables z = M^{-1}r, q = M^{-1}p, x = M^{-1}y >> and work things out and arrive at ... standard preconditioned CG! >> >> Thus the conclusion is that there really is only one way to >> precondition CG. You can derive it as >> 1. left-preconditioned CG in the M norm, or >> 2. right-preconditioned CG in the M^{-1} norm, or >> 3. symmetrically-preconditioned CG (with M^{-1/2} which you don't need >> to explicitly form), or >> 4. standard CG with a new inner product. >> >> The last option is, I think, the most satisfying and fundamental. The >> recent M?lek and Strako? book [3] makes a beautiful case as to why, >> and indeed Hestenes himself first wrote about this in 1956, four years >> after he and Stiefel popularized CG and before the term >> "preconditioning" even existed [4]. >> >> So, in terms of concrete things to do with PETSc, I stand by my >> feeling that calling CG preconditioning "left preconditioning" is >> somehow misleading. >> >> Rather, I'd suggest calling the relevant value of PCSide something >> like PC_SPD . This >> i) reminds the user of the fact that the preconditioner has to be spd, >> ii) removes the asymmetry of calling it a left preconditioner when you >> could equally-well call it a right preconditioner, and >> iii) highlights the fact that preconditioning is different with CG >> than with methods like GMRES. >> >> PC_LEFT could be deprecated for CG before removing support. >> >> If this seems reasonable, I can make a PR with these changes and >> updates to the man page (and should also think about what other KSPs >> this applies to - probably KSPMINRES and others). >> >> [1] http://www.asc.tuwien.ac.at/~melenk/teach/iterative/ >> [2] http://patricksanan.com/rpcg.jpg >> [3] http://bookstore.siam.org/sl01/ >> [4] https://scholar.google.ch/scholar?q=M.+Hestenes+1956.+The+Conjugate+Gradient+Method+for+Solving+Linear+Systems >> >> >> >> On Wed, Mar 8, 2017 at 6:57 PM, Barry Smith wrote: >>> >>> Patrick, >>> >>> Thanks, this is interesting, we should try to add material to the KSPCG page to capture some of the subtleties that I admit after 28 years I still don't really understand. The paper A TAXONOMY FOR CONJUGATE GRADIENT METHODS (attached) has some interesting discussion (particularly page 1548 "Therefore, only left preconditioning need be considered: Right preconditioning may be effected by incorporating it into the left preconditioner and inner product." I don't know exactly what this means in practical terms in respect to code. 
(In PETSc KSP we don't explicitly talk about, or use a separate "inner product" in the Krylov methods, we only have the concept of operator and preconditioner operator.) >>> >>> Remembering vaguely, perhaps incorrectly, from a real long time ago "left preconditioning with a spd M is just the unpreconditioned cg in the M inner product" while "right preconditioning with M is unpreconditioned cg in a M^-1 inner product". If this is correct then it implies (I think) that right preconditioned CG would produce a different set of iterates than left preconditioning and hence is "missing" in terms of completeness from PETSc KSP. >>> >>> Barry >>> >>> >>> >>> >>>> On Mar 8, 2017, at 7:37 PM, Patrick Sanan wrote: >>>> >>>> On Wed, Mar 8, 2017 at 11:12 AM, Barry Smith wrote: >>>>> >>>>>> On Mar 8, 2017, at 10:47 AM, Kong, Fande wrote: >>>>>> >>>>>> Thanks Barry, >>>>>> >>>>>> We are using "KSPSetPCSide(ksp, pcside)" in the code. I just tried "-ksp_pc_side right", and petsc did not error out. >>>>>> >>>>>> I like to understand why CG does not work with right preconditioning? Mathematically, the right preconditioning does not make sense? >>>>> >>>>> No, mathematically it makes sense to do it on the right. It is just that the PETSc code was never written to support it on the right. One reason is that CG is interesting that you can run with the true residual or the preconditioned residual with left preconditioning, hence less incentive to ever bother writing it to support right preconditioning. For completeness we should support right as well as symmetric. >>>> >>>> For standard CG preconditioning, which PETSc calls left >>>> preconditioning, you use a s.p.d. preconditioner M to define an inner >>>> product in the algorithm, and end up finding iterates x_k in K_k(MA; >>>> Mb). That isn't quite the same as left-preconditioned GMRES, where you >>>> apply standard GMRES to the equivalent system MAx=Mb, and also end up >>>> finding iterates in K_k(MA,Mb). This wouldn't work for CG because MA >>>> isn't s.p.d. in general, even if M and A are. >>>> >>>> Standard CG preconditioning is often motivated as a clever way to do >>>> symmetric preconditioning, E^TAEy = E^Tb, x=Ey, without ever needing E >>>> explicitly, using only M=EE^T . y_k lives in K_k(E^TAE,E^Tb) and thus >>>> x_k again lives in K_k(MA;Mb). >>>> >>>> Thus, it's not clear that there is an candidate for a >>>> right-preconditioned CG variant, as what PETSc calls "left" >>>> preconditioning doesn't arise in the same way that it does for other >>>> Krylov methods, namely using the standard algorithm on MAx=Mb. For >>>> GMRES you would get a right-preconditioned variant by looking at the >>>> transformed system AMy=b, x = My. This means that y_k lives in >>>> K_k(AM,b), so x lives in K_k(MA,Mb), as before. For CG, AM wouldn't be >>>> spd in general so this approach wouldn't make sense. >>>> >>>> Another way to look at the difference in "left" preconditioning >>>> between GMRES and CG is that >>>> >>>> - introducing left preconditioning for GMRES alters both the Krylov >>>> subspaces *and* the optimality condition: you go from minimizing || b >>>> - Ax_k ||_2 over K_k(A;b) to minimizing || M (b-Ax_k) ||_2 over >>>> K_k(MA;Mb). >>>> >>>> - introducing "left" preconditioning for CG alters *only* the Krylov >>>> subspaces: you always minimize || x - x_k ||_A , but change the space >>>> from K_k(A;b) to K_k(MA;Mb). 
>>>> >>>> Thus, my impression is that it's misleading to call standard CG >>>> preconditioning "left" preconditioning in PETSc - someone might think >>>> of GMRES and naturally ask why there is no right preconditioning. >>>> >>>> One might define a new entry in PCSide to be used with CG and friends. >>>> I can't think of any slam dunk suggestions yet, but something in the >>>> genre of PC_INNERPRODUCT, PC_METRIC, PC_CG, or PC_IMPLICITSYMMETRIC, >>>> perhaps. >>>> >>>> >>>>> >>>>> Barry >>>>> >>>>>> >>>>>> Fande, >>>>>> >>>>>> On Wed, Mar 8, 2017 at 9:33 AM, Barry Smith wrote: >>>>>> >>>>>> Please tell us how you got this output. >>>>>> >>>>>> PETSc CG doesn't even implement right preconditioning. If you ask for it it should error out. CG supports no norm computation with left preconditioning. >>>>>> >>>>>> Barry >>>>>> >>>>>>> On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: >>>>>>> >>>>>>> Hi All, >>>>>>> >>>>>>> The NONE norm type is supported only when CG is used with a right preconditioner. Any reason for this? >>>>>>> >>>>>>> >>>>>>> >>>>>>> 0 Nonlinear |R| = 1.732051e+00 >>>>>>> 0 Linear |R| = 0.000000e+00 >>>>>>> 1 Linear |R| = 0.000000e+00 >>>>>>> 2 Linear |R| = 0.000000e+00 >>>>>>> 3 Linear |R| = 0.000000e+00 >>>>>>> 4 Linear |R| = 0.000000e+00 >>>>>>> 5 Linear |R| = 0.000000e+00 >>>>>>> 6 Linear |R| = 0.000000e+00 >>>>>>> 1 Nonlinear |R| = 1.769225e-08 >>>>>>> 0 Linear |R| = 0.000000e+00 >>>>>>> 1 Linear |R| = 0.000000e+00 >>>>>>> 2 Linear |R| = 0.000000e+00 >>>>>>> 3 Linear |R| = 0.000000e+00 >>>>>>> 4 Linear |R| = 0.000000e+00 >>>>>>> 5 Linear |R| = 0.000000e+00 >>>>>>> 6 Linear |R| = 0.000000e+00 >>>>>>> 7 Linear |R| = 0.000000e+00 >>>>>>> 8 Linear |R| = 0.000000e+00 >>>>>>> 9 Linear |R| = 0.000000e+00 >>>>>>> 10 Linear |R| = 0.000000e+00 >>>>>>> 2 Nonlinear |R| = 0.000000e+00 >>>>>>> SNES Object: 1 MPI processes >>>>>>> type: newtonls >>>>>>> maximum iterations=50, maximum function evaluations=10000 >>>>>>> tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 >>>>>>> total number of linear solver iterations=18 >>>>>>> total number of function evaluations=23 >>>>>>> norm schedule ALWAYS >>>>>>> SNESLineSearch Object: 1 MPI processes >>>>>>> type: bt >>>>>>> interpolation: cubic >>>>>>> alpha=1.000000e-04 >>>>>>> maxstep=1.000000e+08, minlambda=1.000000e-12 >>>>>>> tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 >>>>>>> maximum iterations=40 >>>>>>> KSP Object: 1 MPI processes >>>>>>> type: cg >>>>>>> maximum iterations=10000, initial guess is zero >>>>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >>>>>>> right preconditioning >>>>>>> using NONE norm type for convergence test >>>>>>> PC Object: 1 MPI processes >>>>>>> type: hypre >>>>>>> HYPRE BoomerAMG preconditioning >>>>>>> HYPRE BoomerAMG: Cycle type V >>>>>>> HYPRE BoomerAMG: Maximum number of levels 25 >>>>>>> HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 >>>>>>> HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. >>>>>>> HYPRE BoomerAMG: Threshold for strong coupling 0.25 >>>>>>> HYPRE BoomerAMG: Interpolation truncation factor 0. 
>>>>>>> HYPRE BoomerAMG: Interpolation: max elements per row 0 >>>>>>> HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 >>>>>>> HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 >>>>>>> HYPRE BoomerAMG: Maximum row sums 0.9 >>>>>>> HYPRE BoomerAMG: Sweeps down 1 >>>>>>> HYPRE BoomerAMG: Sweeps up 1 >>>>>>> HYPRE BoomerAMG: Sweeps on coarse 1 >>>>>>> HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi >>>>>>> HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi >>>>>>> HYPRE BoomerAMG: Relax on coarse Gaussian-elimination >>>>>>> HYPRE BoomerAMG: Relax weight (all) 1. >>>>>>> HYPRE BoomerAMG: Outer relax weight (all) 1. >>>>>>> HYPRE BoomerAMG: Using CF-relaxation >>>>>>> HYPRE BoomerAMG: Not using more complex smoothers. >>>>>>> HYPRE BoomerAMG: Measure type local >>>>>>> HYPRE BoomerAMG: Coarsen type Falgout >>>>>>> HYPRE BoomerAMG: Interpolation type classical >>>>>>> linear system matrix followed by preconditioner matrix: >>>>>>> Mat Object: 1 MPI processes >>>>>>> type: mffd >>>>>>> rows=9, cols=9 >>>>>>> Matrix-free approximation: >>>>>>> err=1.49012e-08 (relative error in function evaluation) >>>>>>> Using wp compute h routine >>>>>>> Does not compute normU >>>>>>> Mat Object: () 1 MPI processes >>>>>>> type: seqaij >>>>>>> rows=9, cols=9 >>>>>>> total: nonzeros=49, allocated nonzeros=49 >>>>>>> total number of mallocs used during MatSetValues calls =0 >>>>>>> not using I-node routines >>>>>>> >>>>>>> Fande, >>> >>> >> >> On Thu, Mar 9, 2017 at 3:57 AM, Barry Smith wrote: >>> >>> Patrick, >>> >>> Thanks, this is interesting, we should try to add material to the KSPCG page to capture some of the subtleties that I admit after 28 years I still don't really understand. The paper A TAXONOMY FOR CONJUGATE GRADIENT METHODS (attached) has some interesting discussion (particularly page 1548 "Therefore, only left preconditioning need be considered: Right preconditioning may be effected by incorporating it into the left preconditioner and inner product." I don't know exactly what this means in practical terms in respect to code. (In PETSc KSP we don't explicitly talk about, or use a separate "inner product" in the Krylov methods, we only have the concept of operator and preconditioner operator.) >>> >>> Remembering vaguely, perhaps incorrectly, from a real long time ago "left preconditioning with a spd M is just the unpreconditioned cg in the M inner product" while "right preconditioning with M is unpreconditioned cg in a M^-1 inner product". If this is correct then it implies (I think) that right preconditioned CG would produce a different set of iterates than left preconditioning and hence is "missing" in terms of completeness from PETSc KSP. >>> >>> Barry >>> >>> >>> >>> >>>> On Mar 8, 2017, at 7:37 PM, Patrick Sanan wrote: >>>> >>>> On Wed, Mar 8, 2017 at 11:12 AM, Barry Smith wrote: >>>>> >>>>>> On Mar 8, 2017, at 10:47 AM, Kong, Fande wrote: >>>>>> >>>>>> Thanks Barry, >>>>>> >>>>>> We are using "KSPSetPCSide(ksp, pcside)" in the code. I just tried "-ksp_pc_side right", and petsc did not error out. >>>>>> >>>>>> I like to understand why CG does not work with right preconditioning? Mathematically, the right preconditioning does not make sense? >>>>> >>>>> No, mathematically it makes sense to do it on the right. It is just that the PETSc code was never written to support it on the right. 
One reason is that CG is interesting that you can run with the true residual or the preconditioned residual with left preconditioning, hence less incentive to ever bother writing it to support right preconditioning. For completeness we should support right as well as symmetric. >>>> >>>> For standard CG preconditioning, which PETSc calls left >>>> preconditioning, you use a s.p.d. preconditioner M to define an inner >>>> product in the algorithm, and end up finding iterates x_k in K_k(MA; >>>> Mb). That isn't quite the same as left-preconditioned GMRES, where you >>>> apply standard GMRES to the equivalent system MAx=Mb, and also end up >>>> finding iterates in K_k(MA,Mb). This wouldn't work for CG because MA >>>> isn't s.p.d. in general, even if M and A are. >>>> >>>> Standard CG preconditioning is often motivated as a clever way to do >>>> symmetric preconditioning, E^TAEy = E^Tb, x=Ey, without ever needing E >>>> explicitly, using only M=EE^T . y_k lives in K_k(E^TAE,E^Tb) and thus >>>> x_k again lives in K_k(MA;Mb). >>>> >>>> Thus, it's not clear that there is an candidate for a >>>> right-preconditioned CG variant, as what PETSc calls "left" >>>> preconditioning doesn't arise in the same way that it does for other >>>> Krylov methods, namely using the standard algorithm on MAx=Mb. For >>>> GMRES you would get a right-preconditioned variant by looking at the >>>> transformed system AMy=b, x = My. This means that y_k lives in >>>> K_k(AM,b), so x lives in K_k(MA,Mb), as before. For CG, AM wouldn't be >>>> spd in general so this approach wouldn't make sense. >>>> >>>> Another way to look at the difference in "left" preconditioning >>>> between GMRES and CG is that >>>> >>>> - introducing left preconditioning for GMRES alters both the Krylov >>>> subspaces *and* the optimality condition: you go from minimizing || b >>>> - Ax_k ||_2 over K_k(A;b) to minimizing || M (b-Ax_k) ||_2 over >>>> K_k(MA;Mb). >>>> >>>> - introducing "left" preconditioning for CG alters *only* the Krylov >>>> subspaces: you always minimize || x - x_k ||_A , but change the space >>>> from K_k(A;b) to K_k(MA;Mb). >>>> >>>> Thus, my impression is that it's misleading to call standard CG >>>> preconditioning "left" preconditioning in PETSc - someone might think >>>> of GMRES and naturally ask why there is no right preconditioning. >>>> >>>> One might define a new entry in PCSide to be used with CG and friends. >>>> I can't think of any slam dunk suggestions yet, but something in the >>>> genre of PC_INNERPRODUCT, PC_METRIC, PC_CG, or PC_IMPLICITSYMMETRIC, >>>> perhaps. >>>> >>>> >>>>> >>>>> Barry >>>>> >>>>>> >>>>>> Fande, >>>>>> >>>>>> On Wed, Mar 8, 2017 at 9:33 AM, Barry Smith wrote: >>>>>> >>>>>> Please tell us how you got this output. >>>>>> >>>>>> PETSc CG doesn't even implement right preconditioning. If you ask for it it should error out. CG supports no norm computation with left preconditioning. >>>>>> >>>>>> Barry >>>>>> >>>>>>> On Mar 8, 2017, at 10:26 AM, Kong, Fande wrote: >>>>>>> >>>>>>> Hi All, >>>>>>> >>>>>>> The NONE norm type is supported only when CG is used with a right preconditioner. Any reason for this? 
>>>>>>> >>>>>>> >>>>>>> >>>>>>> 0 Nonlinear |R| = 1.732051e+00 >>>>>>> 0 Linear |R| = 0.000000e+00 >>>>>>> 1 Linear |R| = 0.000000e+00 >>>>>>> 2 Linear |R| = 0.000000e+00 >>>>>>> 3 Linear |R| = 0.000000e+00 >>>>>>> 4 Linear |R| = 0.000000e+00 >>>>>>> 5 Linear |R| = 0.000000e+00 >>>>>>> 6 Linear |R| = 0.000000e+00 >>>>>>> 1 Nonlinear |R| = 1.769225e-08 >>>>>>> 0 Linear |R| = 0.000000e+00 >>>>>>> 1 Linear |R| = 0.000000e+00 >>>>>>> 2 Linear |R| = 0.000000e+00 >>>>>>> 3 Linear |R| = 0.000000e+00 >>>>>>> 4 Linear |R| = 0.000000e+00 >>>>>>> 5 Linear |R| = 0.000000e+00 >>>>>>> 6 Linear |R| = 0.000000e+00 >>>>>>> 7 Linear |R| = 0.000000e+00 >>>>>>> 8 Linear |R| = 0.000000e+00 >>>>>>> 9 Linear |R| = 0.000000e+00 >>>>>>> 10 Linear |R| = 0.000000e+00 >>>>>>> 2 Nonlinear |R| = 0.000000e+00 >>>>>>> SNES Object: 1 MPI processes >>>>>>> type: newtonls >>>>>>> maximum iterations=50, maximum function evaluations=10000 >>>>>>> tolerances: relative=1e-08, absolute=1e-50, solution=1e-50 >>>>>>> total number of linear solver iterations=18 >>>>>>> total number of function evaluations=23 >>>>>>> norm schedule ALWAYS >>>>>>> SNESLineSearch Object: 1 MPI processes >>>>>>> type: bt >>>>>>> interpolation: cubic >>>>>>> alpha=1.000000e-04 >>>>>>> maxstep=1.000000e+08, minlambda=1.000000e-12 >>>>>>> tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 >>>>>>> maximum iterations=40 >>>>>>> KSP Object: 1 MPI processes >>>>>>> type: cg >>>>>>> maximum iterations=10000, initial guess is zero >>>>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >>>>>>> right preconditioning >>>>>>> using NONE norm type for convergence test >>>>>>> PC Object: 1 MPI processes >>>>>>> type: hypre >>>>>>> HYPRE BoomerAMG preconditioning >>>>>>> HYPRE BoomerAMG: Cycle type V >>>>>>> HYPRE BoomerAMG: Maximum number of levels 25 >>>>>>> HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1 >>>>>>> HYPRE BoomerAMG: Convergence tolerance PER hypre call 0. >>>>>>> HYPRE BoomerAMG: Threshold for strong coupling 0.25 >>>>>>> HYPRE BoomerAMG: Interpolation truncation factor 0. >>>>>>> HYPRE BoomerAMG: Interpolation: max elements per row 0 >>>>>>> HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 >>>>>>> HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 >>>>>>> HYPRE BoomerAMG: Maximum row sums 0.9 >>>>>>> HYPRE BoomerAMG: Sweeps down 1 >>>>>>> HYPRE BoomerAMG: Sweeps up 1 >>>>>>> HYPRE BoomerAMG: Sweeps on coarse 1 >>>>>>> HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi >>>>>>> HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi >>>>>>> HYPRE BoomerAMG: Relax on coarse Gaussian-elimination >>>>>>> HYPRE BoomerAMG: Relax weight (all) 1. >>>>>>> HYPRE BoomerAMG: Outer relax weight (all) 1. >>>>>>> HYPRE BoomerAMG: Using CF-relaxation >>>>>>> HYPRE BoomerAMG: Not using more complex smoothers. 
>>>>>>> HYPRE BoomerAMG: Measure type local >>>>>>> HYPRE BoomerAMG: Coarsen type Falgout >>>>>>> HYPRE BoomerAMG: Interpolation type classical >>>>>>> linear system matrix followed by preconditioner matrix: >>>>>>> Mat Object: 1 MPI processes >>>>>>> type: mffd >>>>>>> rows=9, cols=9 >>>>>>> Matrix-free approximation: >>>>>>> err=1.49012e-08 (relative error in function evaluation) >>>>>>> Using wp compute h routine >>>>>>> Does not compute normU >>>>>>> Mat Object: () 1 MPI processes >>>>>>> type: seqaij >>>>>>> rows=9, cols=9 >>>>>>> total: nonzeros=49, allocated nonzeros=49 >>>>>>> total number of mallocs used during MatSetValues calls =0 >>>>>>> not using I-node routines >>>>>>> >>>>>>> Fande, >>> >>> > From cpraveen at gmail.com Mon Mar 20 07:46:18 2017 From: cpraveen at gmail.com (Praveen C) Date: Mon, 20 Mar 2017 18:16:18 +0530 Subject: [petsc-users] Segmentation fault due to TSDestroy In-Reply-To: References: Message-ID: It turns out the problem was with this call TSMonitorSet(ts, Monitor, ctx, PETSC_NULL_OBJECT, ierr); CHKERRQ(ierr) The correct way is call TSMonitorSet(ts, Monitor, ctx, PETSC_NULL_FUNCTION, ierr); CHKERRQ(ierr) Thanks praveen On Sat, Mar 18, 2017 at 9:30 PM, Satish Balay wrote: > Perhaps there is some memory corruption - you can try runnng the code with > valgrind. > > Satish > > On Sat, 18 Mar 2017, Praveen C wrote: > > > Dear all > > > > I get a segmentation fault when I call TSDestroy. Without TSDestroy the > > code runs fine. I have included portion of my code below. > > > > subroutine runts(ctx) > > use userdata > > use comdata > > use mtsdata > > implicit none > > #include > > type(tsdata) :: ctx > > ! Local variables > > integer,parameter :: h = 100 ! File id for history file > > TS :: ts > > Vec :: u > > PetscErrorCode :: ierr > > external :: RHSFunction, Monitor > > > > call VecDuplicate(ctx%p%v_res, u, ierr); CHKERRQ(ierr) > > > > ! Copy initial condition into u > > call VecCopy(ctx%p%v_u, u, ierr); CHKERRQ(ierr) > > > > call TSCreate(PETSC_COMM_WORLD, ts, ierr); CHKERRQ(ierr) > > call TSSetProblemType(ts, TS_NONLINEAR, ierr); CHKERRQ(ierr) > > call TSSetRHSFunction(ts, PETSC_NULL_OBJECT, RHSFunction, ctx, ierr); > > CHKERRQ(ierr) > > call TSSetInitialTimeStep(ts, 0.0, dtg, ierr); CHKERRQ(ierr) > > call TSSetType(ts, TSRK, ierr); CHKERRQ(ierr); > > call TSSetDuration(ts, itmax, tfinal, ierr); CHKERRQ(ierr); > > call TSSetExactFinalTime(ts, TS_EXACTFINALTIME_MATCHSTEP, ierr); > > CHKERRQ(ierr); > > call TSMonitorSet(ts, Monitor, ctx, PETSC_NULL_OBJECT, ierr); > > CHKERRQ(ierr) > > call TSSetSolution(ts, u, ierr); CHKERRQ(ierr) > > call TSSetFromOptions(ts, ierr); CHKERRQ(ierr) > > call TSSetUp(ts, ierr); CHKERRQ(ierr) > > > > call TSSolve(ts, u, ierr); CHKERRQ(ierr) > > > > call VecCopy(u, ctx%p%v_u, ierr); CHKERRQ(ierr) > > call VecDestroy(u, ierr); CHKERRQ(ierr) > > call TSDestroy(ts, ierr); CHKERRQ(ierr) > > > > end subroutine runts > > > > Thanks > > praveen > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhaowenbo.npic at gmail.com Mon Mar 20 09:39:08 2017 From: zhaowenbo.npic at gmail.com (Wenbo Zhao) Date: Mon, 20 Mar 2017 22:39:08 +0800 Subject: [petsc-users] Question about DMDA BOUNDARY_CONDITION set Message-ID: Hi all. I have a mesh is like below 1 2 3 4 5 6 7 8 9 I use DACreate2d to create mesh partition. In my case, I have an rotation boundary condition. 
The whole mesh is like below 9 8 7 3 6 9 6 5 4 2 5 8 3 2 1 1 4 7 7 4 1 1 2 3 8 5 2 4 5 6 9 6 3 7 8 9 It means that cell 2 near the top boundary are connected with cell 4 near the left boundary, cell 3 with cell 7. How can I set boundary condition or set my matrix? I am looking forward for your help! BEST, Wenbo -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Mar 20 10:14:38 2017 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 20 Mar 2017 15:14:38 +0000 Subject: [petsc-users] Question about DMDA BOUNDARY_CONDITION set In-Reply-To: References: Message-ID: On Mon, Mar 20, 2017 at 2:39 PM, Wenbo Zhao wrote: > Hi all. > > I have a mesh is like below > > 1 2 3 > 4 5 6 > 7 8 9 > > I use DACreate2d to create mesh partition. > > In my case, I have an rotation boundary condition. The whole mesh is like > below > > 9 8 7 3 6 9 > 6 5 4 2 5 8 > 3 2 1 1 4 7 > > 7 4 1 1 2 3 > 8 5 2 4 5 6 > 9 6 3 7 8 9 > > It means that cell 2 near the top boundary are connected with cell 4 near > the left boundary, cell 3 with cell 7. > It looks like the above is the best way to do this right now, namely replicate your grid four times, and use periodic boundary conditions. Otherwise, you would be stuck writing the routine for local-to-global mappings enforcing these conditions by hand. Thanks, Matt > How can I set boundary condition or set my matrix? > > I am looking forward for your help! > > BEST, > > Wenbo > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From friedenhe at gmail.com Mon Mar 20 14:01:30 2017 From: friedenhe at gmail.com (Ping He) Date: Mon, 20 Mar 2017 15:01:30 -0400 Subject: [petsc-users] How to use class function in SNESSetFunction In-Reply-To: References: Message-ID: <8efff42c-6f61-f7f5-f7d9-b93cd3e40169@gmail.com> Dear Matt, I know it is an old thread and your solution works well, but I still want to ask if there are any alternative instead of setting the member function "static". The issue is that I need to use multiple member functions (20+) in SNESSetFunction and numerous class variables. So changing all of them to static will need quite a lot of code/structure change. Thanks very much in advance. Regards, Ping From knepley at gmail.com Mon Mar 20 14:29:04 2017 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 20 Mar 2017 19:29:04 +0000 Subject: [petsc-users] How to use class function in SNESSetFunction In-Reply-To: <8efff42c-6f61-f7f5-f7d9-b93cd3e40169@gmail.com> References: <8efff42c-6f61-f7f5-f7d9-b93cd3e40169@gmail.com> Message-ID: On Mon, Mar 20, 2017 at 7:01 PM, Ping He wrote: > Dear Matt, > > I know it is an old thread and your solution works well, but I still want > to ask if there are any alternative instead of setting the member function > "static". The issue is that I need to use multiple member functions (20+) > in SNESSetFunction and numerous class variables. So changing all of them to > static will need quite a lot of code/structure change. Thanks very much in > advance. > 1) Make a static member function that you pass in. That is the only way to match the signature. 2) Pass your object as the user context 3) Inside your static function, use your object from the context argument to call your member function with the other arguments. 
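A minimal sketch of that pattern (the class and member names here are made up; only the one static wrapper is needed, and the 20+ member functions it calls can stay non-static):

#include <petscsnes.h>

class MySolver {
public:
  // Ordinary member function: free to use any class variables and to
  // call the other (non-static) members.
  PetscErrorCode residualMember(Vec x, Vec f)
  {
    // ... evaluate the residual using class data ...
    return 0;
  }

  // Step 1: a static wrapper whose signature matches what
  // SNESSetFunction() expects.
  static PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
  {
    // Step 3: recover the object from the user context and forward the
    // call to the non-static member.
    MySolver *self = static_cast<MySolver *>(ctx);
    return self->residualMember(x, f);
  }

  PetscErrorCode attach(SNES snes, Vec r)
  {
    // Step 2: pass "this" as the user context.
    return SNESSetFunction(snes, r, FormFunction, this);
  }
};

The same one-small-wrapper-per-callback idea works for the other callbacks (SNESSetJacobian(), TSSetRHSFunction(), ...), so the static code stays limited to a few forwarding functions rather than touching every member.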
Thanks, Matt > Regards, > Ping > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From sdaralagodudatta at wpi.edu Mon Mar 20 18:07:44 2017 From: sdaralagodudatta at wpi.edu (Daralagodu Dattatreya Jois, Sathwik Bharadw) Date: Mon, 20 Mar 2017 23:07:44 +0000 Subject: [petsc-users] Values of a column in a parallel matrix Message-ID: Hey all, I am using AIJ matrix to solve Laplace problem in finite element framework. To apply Neumann boundary conditions I need to obtain values of first and last few columns and subtract it with the corresponding right hand side vector. I understand that MatGetColumnVector and MatGetValues are not collective. Is there any other alternative petsc calls to achieve this in parallel? Thanks, Sathwik -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Mon Mar 20 21:40:57 2017 From: jed at jedbrown.org (Jed Brown) Date: Mon, 20 Mar 2017 20:40:57 -0600 Subject: [petsc-users] Values of a column in a parallel matrix In-Reply-To: References: Message-ID: <878tnz73nq.fsf@jedbrown.org> "Daralagodu Dattatreya Jois, Sathwik Bharadw" writes: > I am using AIJ matrix to solve Laplace problem in finite element > framework. To apply Neumann boundary conditions I need to obtain > values of first and last few columns and subtract it with the > corresponding right hand side vector. I understand that > MatGetColumnVector and MatGetValues are not collective. Is there any > other alternative petsc calls to achieve this in parallel? Inhomogeneous Neumann conditions are normally applied by integration over the boundary. But if you need a linear combination of some columns of the matrix, that's a MatMult. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From bsmith at mcs.anl.gov Mon Mar 20 22:33:20 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 20 Mar 2017 22:33:20 -0500 Subject: [petsc-users] Values of a column in a parallel matrix In-Reply-To: References: Message-ID: > On Mar 20, 2017, at 6:07 PM, Daralagodu Dattatreya Jois, Sathwik Bharadw wrote: > > Hey all, > > I am using AIJ matrix to solve Laplace problem in finite element framework. To apply Neumann boundary conditions I need to obtain values of first and last few columns and subtract it with the corresponding right hand side vector. I understand that MatGetColumnVector andMatGetValues are not collective. Is there any other alternative petsc calls to achieve this in parallel? Hmm, I think you mean non-zero Dirichlet boundary conditions. In that case the recommended approach is calling MatZeroRows() or if you want to preserve symmetry MatZeroRowsColumns(). There are also MatZeroRowsLocal() and MatZeroRowsColumnsLocal() and a few other variants. Barry > > Thanks, > Sathwik From fdkong.jd at gmail.com Mon Mar 20 22:40:54 2017 From: fdkong.jd at gmail.com (Fande Kong) Date: Mon, 20 Mar 2017 21:40:54 -0600 Subject: [petsc-users] Values of a column in a parallel matrix In-Reply-To: References: Message-ID: On Mon, Mar 20, 2017 at 9:33 PM, Barry Smith wrote: > > > On Mar 20, 2017, at 6:07 PM, Daralagodu Dattatreya Jois, Sathwik Bharadw > wrote: > > > > Hey all, > > > > I am using AIJ matrix to solve Laplace problem in finite element > framework. 
To apply Neumann boundary conditions I need to obtain values of > first and last few columns and subtract it with the corresponding right > hand side vector. I understand that MatGetColumnVector andMatGetValues are > not collective. Is there any other alternative petsc calls to achieve this > in parallel? > > Hmm, I think you mean non-zero Dirichlet boundary conditions. In that > case the recommended approach is calling MatZeroRows() or if you want to > preserve symmetry MatZeroRowsColumns(). There are also MatZeroRowsLocal() > and MatZeroRowsColumnsLocal() and a few other variants. > > Barry > If we zero out rows AND columns using MatZeroRowsColumns to preserve the symmetry, how we connect the non-zero boundary values with the interior solution? Fande, > > > > > > Thanks, > > Sathwik > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sdaralagodudatta at wpi.edu Mon Mar 20 22:48:14 2017 From: sdaralagodudatta at wpi.edu (Daralagodu Dattatreya Jois, Sathwik Bharadw) Date: Tue, 21 Mar 2017 03:48:14 +0000 Subject: [petsc-users] Values of a column in a parallel matrix In-Reply-To: References: , Message-ID: Hey Barry, I am already using MatZeroRows. I was actually stuck precisely while applying non zero Dirchlet boundary conditions. But to account for non zero value traditionally we subtract the corresponding column with the right hand side vector. If we zero the rows and columns we can only have value zero for solutions at the boundary. Get Outlook for Android ________________________________ From: Barry Smith Sent: Monday, March 20, 2017 11:33:20 PM To: Daralagodu Dattatreya Jois, Sathwik Bharadw Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] Values of a column in a parallel matrix > On Mar 20, 2017, at 6:07 PM, Daralagodu Dattatreya Jois, Sathwik Bharadw wrote: > > Hey all, > > I am using AIJ matrix to solve Laplace problem in finite element framework. To apply Neumann boundary conditions I need to obtain values of first and last few columns and subtract it with the corresponding right hand side vector. I understand that MatGetColumnVector andMatGetValues are not collective. Is there any other alternative petsc calls to achieve this in parallel? Hmm, I think you mean non-zero Dirichlet boundary conditions. In that case the recommended approach is calling MatZeroRows() or if you want to preserve symmetry MatZeroRowsColumns(). There are also MatZeroRowsLocal() and MatZeroRowsColumnsLocal() and a few other variants. Barry > > Thanks, > Sathwik -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Mon Mar 20 23:04:54 2017 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Mon, 20 Mar 2017 21:04:54 -0700 Subject: [petsc-users] Values of a column in a parallel matrix In-Reply-To: References: Message-ID: <35edc043-5904-b524-3e90-b2ce08a760df@berkeley.edu> If you zero the row and column as suggested, you can get what you want by building the RHS as you construct your matrix (i.e. do in while assembling your matrix and RHS). On 3/20/17 8:48 PM, Daralagodu Dattatreya Jois, Sathwik Bharadw wrote: > > Hey Barry, > I am already using MatZeroRows. I was actually stuck precisely while > applying non zero Dirchlet boundary conditions. But to account for non > zero value traditionally we subtract the corresponding column with the > right hand side vector. If we zero the rows and columns we can only > have value zero for solutions at the boundary. 
> > Get Outlook for Android > > ------------------------------------------------------------------------ > *From:* Barry Smith > *Sent:* Monday, March 20, 2017 11:33:20 PM > *To:* Daralagodu Dattatreya Jois, Sathwik Bharadw > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] Values of a column in a parallel matrix > > > On Mar 20, 2017, at 6:07 PM, Daralagodu Dattatreya Jois, Sathwik > Bharadw wrote: > > > > Hey all, > > > > I am using AIJ matrix to solve Laplace problem in finite element > framework. To apply Neumann boundary conditions I need to obtain > values of first and last few columns and subtract it with the > corresponding right hand side vector. I understand that > MatGetColumnVector andMatGetValues are not collective. Is there any > other alternative petsc calls to achieve this in parallel? > > Hmm, I think you mean non-zero Dirichlet boundary conditions. In > that case the recommended approach is calling MatZeroRows() or if you > want to preserve symmetry MatZeroRowsColumns(). There are also > MatZeroRowsLocal() and MatZeroRowsColumnsLocal() and a few other variants. > > Barry > > > > > > Thanks, > > Sathwik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Mar 20 23:05:20 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 20 Mar 2017 23:05:20 -0500 Subject: [petsc-users] Values of a column in a parallel matrix In-Reply-To: References: Message-ID: > On Mar 20, 2017, at 10:48 PM, Daralagodu Dattatreya Jois, Sathwik Bharadw wrote: > > Hey Barry, > I am already using MatZeroRows. I was actually stuck precisely while applying non zero Dirchlet boundary conditions. But to account for non zero value traditionally we subtract the corresponding column with the right hand side vector. If we zero the rows and columns we can only have value zero for solutions at the boundary. Not sure what you mean. Take a look at, for example MatZeroRowsColumns_SeqAIJ() src/mat/impls/aij/seq/aij.c It first updates the right hand side (by subtracting the corresponding column times the correct value in the solution (i.e. the Dirichlet value provided in the x input vector )) and then it zeros the entry in the matrix. Barry Perhaps you are concerned about multiple sequential linear solves with the same matrix and different right hand sides? In that case you are correct, since you have removed those columns entries you cannot call it again for the next right hand side. In that case you need to use MatZeroRows(). > > Get Outlook for Android > > From: Barry Smith > Sent: Monday, March 20, 2017 11:33:20 PM > To: Daralagodu Dattatreya Jois, Sathwik Bharadw > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] Values of a column in a parallel matrix > > > > On Mar 20, 2017, at 6:07 PM, Daralagodu Dattatreya Jois, Sathwik Bharadw wrote: > > > > Hey all, > > > > I am using AIJ matrix to solve Laplace problem in finite element framework. To apply Neumann boundary conditions I need to obtain values of first and last few columns and subtract it with the corresponding right hand side vector. I understand that MatGetColumnVector andMatGetValues are not collective. Is there any other alternative petsc calls to achieve this in parallel? > > Hmm, I think you mean non-zero Dirichlet boundary conditions. In that case the recommended approach is calling MatZeroRows() or if you want to preserve symmetry MatZeroRowsColumns(). There are also MatZeroRowsLocal() and MatZeroRowsColumnsLocal() and a few other variants. 
> > Barry > > > > > > Thanks, > > Sathwik From bsmith at mcs.anl.gov Mon Mar 20 23:13:09 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 20 Mar 2017 23:13:09 -0500 Subject: [petsc-users] Values of a column in a parallel matrix In-Reply-To: <35edc043-5904-b524-3e90-b2ce08a760df@berkeley.edu> References: <35edc043-5904-b524-3e90-b2ce08a760df@berkeley.edu> Message-ID: <1366540D-FA88-4CAD-AFDF-22984D59CF02@mcs.anl.gov> > On Mar 20, 2017, at 11:04 PM, Sanjay Govindjee wrote: > > If you zero the row and column as suggested, you can get what you want by building the > RHS as you construct your matrix (i.e. do in while assembling your matrix and RHS). Yes, and often this is good way to proceed. But I don't think you HAVE to do it this way, I think MatZeroRowsColums() is an alternative. Barry > > On 3/20/17 8:48 PM, Daralagodu Dattatreya Jois, Sathwik Bharadw wrote: >> Hey Barry, >> I am already using MatZeroRows. I was actually stuck precisely while applying non zero Dirchlet boundary conditions. But to account for non zero value traditionally we subtract the corresponding column with the right hand side vector. If we zero the rows and columns we can only have value zero for solutions at the boundary. >> >> Get Outlook for Android >> >> From: Barry Smith >> Sent: Monday, March 20, 2017 11:33:20 PM >> To: Daralagodu Dattatreya Jois, Sathwik Bharadw >> Cc: petsc-users at mcs.anl.gov >> Subject: Re: [petsc-users] Values of a column in a parallel matrix >> >> >> > On Mar 20, 2017, at 6:07 PM, Daralagodu Dattatreya Jois, Sathwik Bharadw wrote: >> > >> > Hey all, >> > >> > I am using AIJ matrix to solve Laplace problem in finite element framework. To apply Neumann boundary conditions I need to obtain values of first and last few columns and subtract it with the corresponding right hand side vector. I understand that MatGetColumnVector andMatGetValues are not collective. Is there any other alternative petsc calls to achieve this in parallel? >> >> Hmm, I think you mean non-zero Dirichlet boundary conditions. In that case the recommended approach is calling MatZeroRows() or if you want to preserve symmetry MatZeroRowsColumns(). There are also MatZeroRowsLocal() and MatZeroRowsColumnsLocal() and a few other variants. >> >> Barry >> >> >> > >> > Thanks, >> > Sathwik >> > > From cpraveen at gmail.com Tue Mar 21 01:22:53 2017 From: cpraveen at gmail.com (Praveen C) Date: Tue, 21 Mar 2017 11:52:53 +0530 Subject: [petsc-users] TSAdaptSetType in fortran Message-ID: Dear all In my fortran code, I want to set to TSADAPTNONE by default. But it seems that TSAdaptSetType does not exist in fortran. Is there a solution for this ? Thanks praveen -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 21 03:11:34 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 21 Mar 2017 03:11:34 -0500 Subject: [petsc-users] TSAdaptSetType in fortran In-Reply-To: References: Message-ID: Sorry about that; since it involves character strings so it is something we currently need to do manually. If you are 1) using petsc 3.7 from the tar ball I've attached a patch file, apply it in the petsc directory with patch -p1 < add-fortran-tsadaptsettype 2) if you are using petsc 3.7 from the git repository (i.e. 
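To make this concrete, a small sketch of the approach being described (MatZeroRowsColumns() with the x vector carrying the prescribed values; the function name, row indices and values below are placeholders):

#include <petscmat.h>

/* Sketch: impose prescribed values vals[] at global rows rows[] of A u = b,
   zeroing both the rows and the columns so A stays symmetric. */
static PetscErrorCode ApplyDirichlet(Mat A, Vec b, PetscInt n,
                                     const PetscInt rows[], const PetscScalar vals[])
{
  Vec            x;
  PetscErrorCode ierr;

  /* x carries the prescribed boundary values at the listed rows; its other
     entries are not used by MatZeroRowsColumns(). */
  ierr = VecDuplicate(b, &x);CHKERRQ(ierr);
  ierr = VecZeroEntries(x);CHKERRQ(ierr);
  ierr = VecSetValues(x, n, rows, vals, INSERT_VALUES);CHKERRQ(ierr);
  ierr = VecAssemblyBegin(x);CHKERRQ(ierr);
  ierr = VecAssemblyEnd(x);CHKERRQ(ierr);
  /* Zero the rows and columns, put 1.0 on the diagonal, and let PETSc adjust
     b: the eliminated columns times the boundary values are subtracted from
     the interior entries of b, and the zeroed rows of b receive the
     prescribed values, so the solve returns them at the boundary. */
  ierr = MatZeroRowsColumns(A, n, rows, 1.0, x, b);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  return 0;
}

As noted above, after this call the column entries are gone, so for further solves with the same A but a different right-hand side one would use MatZeroRows() (or rebuild the matrix) instead.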
the maint branch) do "git fetch; git checkout barry/add-fortran-tsadaptsettype" 3) if you are using petsc master from git use "git fetch; git checkout barry/add-fortran-tsadaptsettype; git checkout master; git merge barry/add-fortran-tsadaptsettype" In all cases after this you need to rerun ./configure and make Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: add-fortran-tsadaptsettype.patch Type: application/octet-stream Size: 1293 bytes Desc: not available URL: -------------- next part -------------- > On Mar 21, 2017, at 1:22 AM, Praveen C wrote: > > Dear all > > In my fortran code, I want to set to TSADAPTNONE by default. But it seems that TSAdaptSetType does not exist in fortran. Is there a solution for this ? > > Thanks > praveen From cpraveen at gmail.com Tue Mar 21 03:39:45 2017 From: cpraveen at gmail.com (Praveen C) Date: Tue, 21 Mar 2017 14:09:45 +0530 Subject: [petsc-users] TSAdaptSetType in fortran In-Reply-To: References: Message-ID: Thank you. Will this fix be added to future versions or I have to apply this manually ? Best praveen On Tue, Mar 21, 2017 at 1:41 PM, Barry Smith wrote: > > Sorry about that; since it involves character strings so it is something > we currently need to do manually. > > If you are > > 1) using petsc 3.7 from the tar ball I've attached a patch file, apply it > in the petsc directory with patch -p1 < add-fortran-tsadaptsettype > > 2) if you are using petsc 3.7 from the git repository (i.e. the maint > branch) do "git fetch; git checkout barry/add-fortran-tsadaptsettype" > > 3) if you are using petsc master from git use "git fetch; git checkout > barry/add-fortran-tsadaptsettype; git checkout master; git merge > barry/add-fortran-tsadaptsettype" > > In all cases after this you need to rerun ./configure and make > > Barry > > > > > On Mar 21, 2017, at 1:22 AM, Praveen C wrote: > > > > Dear all > > > > In my fortran code, I want to set to TSADAPTNONE by default. But it > seems that TSAdaptSetType does not exist in fortran. Is there a solution > for this ? > > > > Thanks > > praveen > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 21 03:45:54 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 21 Mar 2017 03:45:54 -0500 Subject: [petsc-users] TSAdaptSetType in fortran In-Reply-To: References: Message-ID: > On Mar 21, 2017, at 3:39 AM, Praveen C wrote: > > Thank you. > > Will this fix be added to future versions or I have to apply this manually ? I will put it in maint and master as soon as I know it isn't buggy. So within a day or two. Barry > > Best > praveen > > On Tue, Mar 21, 2017 at 1:41 PM, Barry Smith wrote: > > Sorry about that; since it involves character strings so it is something we currently need to do manually. > > If you are > > 1) using petsc 3.7 from the tar ball I've attached a patch file, apply it in the petsc directory with patch -p1 < add-fortran-tsadaptsettype > > 2) if you are using petsc 3.7 from the git repository (i.e. the maint branch) do "git fetch; git checkout barry/add-fortran-tsadaptsettype" > > 3) if you are using petsc master from git use "git fetch; git checkout barry/add-fortran-tsadaptsettype; git checkout master; git merge barry/add-fortran-tsadaptsettype" > > In all cases after this you need to rerun ./configure and make > > Barry > > > > > On Mar 21, 2017, at 1:22 AM, Praveen C wrote: > > > > Dear all > > > > In my fortran code, I want to set to TSADAPTNONE by default. 
But it seems that TSAdaptSetType does not exist in fortran. Is there a solution for this ? > > > > Thanks > > praveen > > > From natacha.bereux at gmail.com Tue Mar 21 07:58:24 2017 From: natacha.bereux at gmail.com (Natacha BEREUX) Date: Tue, 21 Mar 2017 13:58:24 +0100 Subject: [petsc-users] Configure nested PCFIELDSPLIT with general index sets Message-ID: Dear PETSc user's, I am trying to solve a poroelasticity problem with an additional temperature equation. The problem is a 3 fields problem involving a displacement field (u), a pressure field (p) and a temperature field (t). I have seen similar examples in http://www.mcs.anl.gov/papers/P2017-0112.pdf or in Matt's talk http://www.caam.rice.edu/~mk51/presentations/SIAMCSE13.pdf I would like to reproduce them, but I am encountering troubles whem trying to do so. Here is how I proceed: I have a monolithic matrix A stemming . I build 3 index sets for u,p, and t in A. Then I set up the KSP context : call KSPCreate(PETSC_COMM_WORLD,ksp,ierr) call KSPSetOperators(ksp,A,A,ierr) call KSPGetPC(ksp, pc, ierr) call PCSetType(pc, PCFIELDSPLIT, ierr) call PCFieldSplitSetIS(pc,'u',is_u, ierr) call PCFieldSplitSetIS(pc,'p',is_p, ierr) call PCFieldSplitSetIS(pc,'t',is_t, ierr) call PCSetFromOptions(pc,ierr) call KSPSetFromOptions(ksp,ierr) call KSPSolve(ksp,b,x,ierr) I run the code with the following options -ksp_view -log_view -ksp_rtol 1.0e-5 -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type additive -pc_fieldsplit_0_fields 0,1 -pc_fieldsplit_1_fields 2 -pc_fieldsplit_0_pc_type lu -pc_fieldsplit_0_ksp_type preonly -pc_fieldsplit_1_pc_type lu -pc_fieldsplit_1_ksp_type preonly I would like to group u and p fields in a "0" block and then temperature remains in "1" block. (I start with direct solves in each blocks to check the block decomposition but I intend to do use iterative methods later, and more precisely to use a Schur fieldsplit preconditionner for the "0" block) The output follows : KSP Object: 1 MPI processes type: fgmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. right preconditioning using UNPRECONDITIONED norm type for convergence test PC Object: 1 MPI processes type: fieldsplit FieldSplit with ADDITIVE composition: total splits = 3 Solver info for each split is in the following KSP objects: Split number 0 Defined by IS KSP Object: (fieldsplit_u_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using DEFAULT norm type for convergence test PC Object: (fieldsplit_u_) 1 MPI processes type: ilu PC has not been set up so information may be incomplete ILU: out-of-place factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 matrix ordering: natural linear system matrix = precond matrix: Mat Object: (fieldsplit_u_) 1 MPI processes type: seqaij rows=60, cols=60 total: nonzeros=3600, allocated nonzeros=3600 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 12 nodes, limit used is 5 Split number 1 Defined by IS KSP Object: (fieldsplit_p_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
left preconditioning using DEFAULT norm type for convergence test PC Object: (fieldsplit_p_) 1 MPI processes type: ilu PC has not been set up so information may be incomplete ILU: out-of-place factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 matrix ordering: natural linear system matrix = precond matrix: Mat Object: (fieldsplit_p_) 1 MPI processes type: seqaij rows=8, cols=8 total: nonzeros=64, allocated nonzeros=64 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 2 nodes, limit used is 5 Split number 2 Defined by IS KSP Object: (fieldsplit_t_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using DEFAULT norm type for convergence test PC Object: (fieldsplit_t_) 1 MPI processes type: ilu PC has not been set up so information may be incomplete ILU: out-of-place factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 matrix ordering: natural linear system matrix = precond matrix: Mat Object: (fieldsplit_t_) 1 MPI processes type: seqaij rows=8, cols=8 total: nonzeros=64, allocated nonzeros=64 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 2 nodes, limit used is 5 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=76, cols=76 total: nonzeros=5776, allocated nonzeros=5776 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 16 nodes, limit used is 5 The preconditionner has 3 splits, whereas I would like to group (u,p) together and see 2 splits I suspect that -pc_fieldsplit_0_fields 0,1 -pc_fieldsplit_1_fields 2 are not the appropriate options. Am I correct ? What is the right way for grouping two fields defined by index sets ? Any help would be greatly appreciated ! Natacha -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Mar 21 08:24:07 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 21 Mar 2017 13:24:07 +0000 Subject: [petsc-users] Configure nested PCFIELDSPLIT with general index sets In-Reply-To: References: Message-ID: On Tue, Mar 21, 2017 at 12:58 PM, Natacha BEREUX wrote: > Dear PETSc user's, > I am trying to solve a poroelasticity problem with an additional > temperature equation. The problem is a 3 fields problem involving a > displacement field (u), a pressure field (p) and a temperature field (t). > I have seen similar examples in > http://www.mcs.anl.gov/papers/P2017-0112.pdf > or in Matt's talk http://www.caam.rice.edu/~mk51/presentations/SIAMCSE13. > pdf > I would like to reproduce them, but I am encountering troubles whem > trying to do so. > Yes, my talk is not clear on this point. The option you want, -pc_fieldsplit__fields, only works if you use a DM to describe the splits. Here is the code for DMDA, which assumes a regular division: https://bitbucket.org/petsc/petsc/src/d4e0040555dc16f2fb1f7e2e0304e363dcc11328/src/ksp/pc/impls/fieldsplit/fieldsplit.c?at=master&fileviewer=file-view-default#fieldsplit.c-287 and for DMPlex and DMShell, https://bitbucket.org/petsc/petsc/src/d4e0040555dc16f2fb1f7e2e0304e363dcc11328/src/ksp/pc/impls/fieldsplit/fieldsplit.c?at=master&fileviewer=file-view-default#fieldsplit.c-330 We could, of course, do something like this by just merging the ISes, but then we lose the ability to do it recursively since we would not know how to split them again. 
So we would have to carry along something that tells us that, and this is exactly the job that the DM is doing. I think the remedy is as easy as specifying a DMShell that has a PetscSection (DMSetDefaultSection) with your ordering, and I think this is how Firedrake (http://www.firedrakeproject.org/) does it. However, I usually use a DMPlex which knows about my mesh, so I am not sure if this strategy has any holes. The PetscSection is just a model of your ordering. You have a domain of "points", which are usually pieces of the mesh, which map to a # of dofs. For example, linear elements associate 1 dof to every vertex. The Section allows dofs for several fields to be added. This (I think) is described in the manual. Does this make sense? Thanks, Matt Here is how I proceed: > > I have a monolithic matrix A stemming . > I build 3 index sets for u,p, and t in A. > Then I set up the KSP context : > > call KSPCreate(PETSC_COMM_WORLD,ksp,ierr) > call KSPSetOperators(ksp,A,A,ierr) > call KSPGetPC(ksp, pc, ierr) > call PCSetType(pc, PCFIELDSPLIT, ierr) > call PCFieldSplitSetIS(pc,'u',is_u, ierr) > call PCFieldSplitSetIS(pc,'p',is_p, ierr) > call PCFieldSplitSetIS(pc,'t',is_t, ierr) > call PCSetFromOptions(pc,ierr) > call KSPSetFromOptions(ksp,ierr) > call KSPSolve(ksp,b,x,ierr) > > I run the code with the following options > -ksp_view > -log_view > -ksp_rtol 1.0e-5 > -ksp_type fgmres > -pc_type fieldsplit > -pc_fieldsplit_type additive > -pc_fieldsplit_0_fields 0,1 > -pc_fieldsplit_1_fields 2 > -pc_fieldsplit_0_pc_type lu > -pc_fieldsplit_0_ksp_type preonly > -pc_fieldsplit_1_pc_type lu > -pc_fieldsplit_1_ksp_type preonly > > I would like to group u and p fields in a "0" block and then temperature > remains in "1" block. > (I start with direct solves in each blocks to check the block > decomposition but I intend to do use iterative methods later, and more > precisely to use a Schur fieldsplit preconditionner for the "0" block) > The output follows : > > KSP Object: 1 MPI processes > type: fgmres > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > GMRES: happy breakdown tolerance 1e-30 > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > right preconditioning > using UNPRECONDITIONED norm type for convergence test > PC Object: 1 MPI processes > type: fieldsplit > FieldSplit with ADDITIVE composition: total splits = 3 > Solver info for each split is in the following KSP objects: > Split number 0 Defined by IS > KSP Object: (fieldsplit_u_) 1 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > left preconditioning > using DEFAULT norm type for convergence test > PC Object: (fieldsplit_u_) 1 MPI processes > type: ilu > PC has not been set up so information may be incomplete > ILU: out-of-place factorization > 0 levels of fill > tolerance for zero pivot 2.22045e-14 > matrix ordering: natural > linear system matrix = precond matrix: > Mat Object: (fieldsplit_u_) 1 MPI processes > type: seqaij > rows=60, cols=60 > total: nonzeros=3600, allocated nonzeros=3600 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 12 nodes, limit used is 5 > Split number 1 Defined by IS > KSP Object: (fieldsplit_p_) 1 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
> left preconditioning > using DEFAULT norm type for convergence test > PC Object: (fieldsplit_p_) 1 MPI processes > type: ilu > PC has not been set up so information may be incomplete > ILU: out-of-place factorization > 0 levels of fill > tolerance for zero pivot 2.22045e-14 > matrix ordering: natural > linear system matrix = precond matrix: > Mat Object: (fieldsplit_p_) 1 MPI processes > type: seqaij > rows=8, cols=8 > total: nonzeros=64, allocated nonzeros=64 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 2 nodes, limit used is 5 > Split number 2 Defined by IS > KSP Object: (fieldsplit_t_) 1 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > left preconditioning > using DEFAULT norm type for convergence test > PC Object: (fieldsplit_t_) 1 MPI processes > type: ilu > PC has not been set up so information may be incomplete > ILU: out-of-place factorization > 0 levels of fill > tolerance for zero pivot 2.22045e-14 > matrix ordering: natural > linear system matrix = precond matrix: > Mat Object: (fieldsplit_t_) 1 MPI processes > type: seqaij > rows=8, cols=8 > total: nonzeros=64, allocated nonzeros=64 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 2 nodes, limit used is 5 > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij > rows=76, cols=76 > total: nonzeros=5776, allocated nonzeros=5776 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 16 nodes, limit used is 5 > > The preconditionner has 3 splits, whereas I would like to group (u,p) > together and see 2 splits > I suspect that > -pc_fieldsplit_0_fields 0,1 > -pc_fieldsplit_1_fields 2 > are not the appropriate options. > Am I correct ? > What is the right way for grouping two fields defined by index sets ? > > Any help would be greatly appreciated ! > > Natacha > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From lawrence.mitchell at imperial.ac.uk Tue Mar 21 08:27:50 2017 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Tue, 21 Mar 2017 13:27:50 +0000 Subject: [petsc-users] Configure nested PCFIELDSPLIT with general index sets In-Reply-To: References: Message-ID: <6496846F-19F8-4494-87E1-DDC390513370@imperial.ac.uk> > On 21 Mar 2017, at 13:24, Matthew Knepley wrote: > > I think the remedy is as easy as specifying a DMShell that has a PetscSection (DMSetDefaultSection) with your ordering, and > I think this is how Firedrake (http://www.firedrakeproject.org/) does it. We actually don't use a section, but we do provide DMCreateFieldDecomposition_Shell. If you have a section that describes all the fields, then I think if the DMShell knows about it, you effectively get the same behaviour as DMPlex (which does the decomposition in the same manner?). > However, I usually use a DMPlex which knows about my > mesh, so I am not sure if this strategy has any holes. I haven't noticed anything yet. 
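For reference, the callback registered with DMShellSetCreateFieldDecomposition has roughly this shape in C (a minimal, untested sketch; the function name is a placeholder and the three ISes are assumed to be built elsewhere from the u, p and t dofs):

static IS is_u, is_p, is_t;   /* index sets for the u, p, t dofs, filled elsewhere */

static PetscErrorCode MyCreateFieldDecomposition(DM dm, PetscInt *len, char ***namelist, IS **islist, DM **dmlist)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  *len = 3;
  ierr = PetscMalloc1(3, namelist);CHKERRQ(ierr);
  ierr = PetscMalloc1(3, islist);CHKERRQ(ierr);
  ierr = PetscStrallocpy("u", &(*namelist)[0]);CHKERRQ(ierr);
  ierr = PetscStrallocpy("p", &(*namelist)[1]);CHKERRQ(ierr);
  ierr = PetscStrallocpy("t", &(*namelist)[2]);CHKERRQ(ierr);
  (*islist)[0] = is_u; (*islist)[1] = is_p; (*islist)[2] = is_t;
  /* the caller destroys the ISes it receives, so hand out extra references */
  ierr = PetscObjectReference((PetscObject)is_u);CHKERRQ(ierr);
  ierr = PetscObjectReference((PetscObject)is_p);CHKERRQ(ierr);
  ierr = PetscObjectReference((PetscObject)is_t);CHKERRQ(ierr);
  if (dmlist) *dmlist = NULL;   /* no sub-DMs to offer */
  PetscFunctionReturn(0);
}

/* registered once on the shell DM:
   ierr = DMShellSetCreateFieldDecomposition(dm, MyCreateFieldDecomposition);CHKERRQ(ierr); */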
Lawrence From natacha.bereux at gmail.com Tue Mar 21 08:44:18 2017 From: natacha.bereux at gmail.com (Natacha BEREUX) Date: Tue, 21 Mar 2017 14:44:18 +0100 Subject: [petsc-users] Configure nested PCFIELDSPLIT with general index sets In-Reply-To: <6496846F-19F8-4494-87E1-DDC390513370@imperial.ac.uk> References: <6496846F-19F8-4494-87E1-DDC390513370@imperial.ac.uk> Message-ID: Thanks for your quick answers. To be honest, I am not familiar at all with DMShells and DMPlexes. But since it is what I need, I am going to try it. Thanks again for your advices, Natacha On Tue, Mar 21, 2017 at 2:27 PM, Lawrence Mitchell < lawrence.mitchell at imperial.ac.uk> wrote: > > > On 21 Mar 2017, at 13:24, Matthew Knepley wrote: > > > > I think the remedy is as easy as specifying a DMShell that has a > PetscSection (DMSetDefaultSection) with your ordering, and > > I think this is how Firedrake (http://www.firedrakeproject.org/) does > it. > > We actually don't use a section, but we do provide > DMCreateFieldDecomposition_Shell. > > If you have a section that describes all the fields, then I think if the > DMShell knows about it, you effectively get the same behaviour as DMPlex > (which does the decomposition in the same manner?). > > > However, I usually use a DMPlex which knows about my > > mesh, so I am not sure if this strategy has any holes. > > I haven't noticed anything yet. > > Lawrence -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhaowenbo.npic at gmail.com Tue Mar 21 08:46:07 2017 From: zhaowenbo.npic at gmail.com (Wenbo Zhao) Date: Tue, 21 Mar 2017 21:46:07 +0800 Subject: [petsc-users] Question about DMDA BOUNDARY_CONDITION set Message-ID: Matt, Thanks. I want to solve neutron diffusion equations using finite difference method and PETSc. This rotation boundary condition is very common in my cases. Though the mesh consists of ~ 10 Miliion structured hexahedron cells, the mesh is simple and could be discribed by three vectors about x, y and z axis. It is appropriate for DMDA except boundary condition. I wanted to make mesh partition like DMDA by hand. Then I need to create matrix and vector and assemble matrix, and et al. I thought it was an easy work. As you say, it's not. As a newer, I can use DACreate2d to begin. It's OK. But finally, it does need this optimization. Though I read the manual about the vector and matrix, I am not clear about the basic idea behind the code. How can I create a matrix and vector as my mesh partition and create the map between the nature ordering and the PETSc ordering in global vector? How does vector communicate in the operation of matrix multi vector? Does it achieved automatically? BETS, Wenbo -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Mar 21 09:31:42 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 21 Mar 2017 14:31:42 +0000 Subject: [petsc-users] Question about DMDA BOUNDARY_CONDITION set In-Reply-To: References: Message-ID: On Tue, Mar 21, 2017 at 1:46 PM, Wenbo Zhao wrote: > Matt, > > Thanks. > > I want to solve neutron diffusion equations using finite difference method > and PETSc. > This rotation boundary condition is very common in my cases. > Though the mesh consists of ~ 10 Miliion structured hexahedron cells, the > mesh is simple and could be discribed by three vectors about x, y and z > axis. > It is appropriate for DMDA except boundary condition. > > I wanted to make mesh partition like DMDA by hand. 
Then I need to create > matrix and vector and assemble matrix, and et al. I thought it was an easy > work. > As you say, it's not. > > As a newer, I can use DACreate2d to begin. It's OK. > But finally, it does need this optimization. > > Though I read the manual about the vector and matrix, I am not clear about > the basic idea behind the code. > How can I create a matrix and vector as my mesh partition > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMCreateGlobalVector.html > and create the map between the nature ordering and the PETSc ordering in > global vector? > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMDAGlobalToNaturalBegin.html#DMDAGlobalToNaturalBegin > How does vector communicate in the operation of matrix multi vector? > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatMult.html > Does it achieved automatically? > Yes. Thanks, Matt > BETS, > > Wenbo > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 21 14:45:32 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 21 Mar 2017 14:45:32 -0500 Subject: [petsc-users] [petsc-maint] Question about DMDA BOUNDARY_CONDITION set In-Reply-To: References: Message-ID: <2FE6B746-3744-4237-A834-A05751F9CAFA@mcs.anl.gov> I can cook up a simple example that communicates the needed ghost values for your boundary conditions. It may take a day or two. Are you using a five point stencil or a nine point stencil (the nine point stencil seems "weird" for that very corner point)? Barry > On Mar 21, 2017, at 8:46 AM, Wenbo Zhao wrote: > > Matt, > > Thanks. > > I want to solve neutron diffusion equations using finite difference method and PETSc. > This rotation boundary condition is very common in my cases. > Though the mesh consists of ~ 10 Miliion structured hexahedron cells, the mesh is simple and could be discribed by three vectors about x, y and z axis. > It is appropriate for DMDA except boundary condition. > > I wanted to make mesh partition like DMDA by hand. Then I need to create matrix and vector and assemble matrix, and et al. I thought it was an easy work. > As you say, it's not. > > As a newer, I can use DACreate2d to begin. It's OK. > But finally, it does need this optimization. > > Though I read the manual about the vector and matrix, I am not clear about the basic idea behind the code. > How can I create a matrix and vector as my mesh partition and create the map between the nature ordering and the PETSc ordering in global vector? > How does vector communicate in the operation of matrix multi vector? Does it achieved automatically? > > BETS, > > Wenbo > > > > > From bikash at umich.edu Tue Mar 21 17:23:19 2017 From: bikash at umich.edu (Bikash Kanungo) Date: Tue, 21 Mar 2017 18:23:19 -0400 Subject: [petsc-users] jmp_buf error while linking petsc Message-ID: Hi, I have recently installed petsc-3.7.5. 
However, while linking petsc with my code, I get the following compilation error: /home/bikashk/softwares/femdft_softwares/petsc/opt_3.7.5_mvapich2_64bit_complex/include/petscdraw.h(335): error: identifier "jmp_buf" is undefined PETSC_EXTERN jmp_buf PetscXIOErrorHandlerJumpBuf; I used the following configure options while compiling petsc: ./configure --prefix=/home/bikashk/softwares/femdft_softwares/petsc/opt_3.7.5_mvapich2_64bit_complex --with-debugging=no --with-scalar-type=complex CFLAGS="-O3 -xcore-avx2" CXXFLAGS="-O3 -xcore-avx2" --with-mpi-dir=/opt/mvapich2/intel/ib/ -with-blas-lib="-Wl,--start-group /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_intel_lp64.a /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_sequential.a /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_core.a -Wl,--end-group -lpthread -lm" --with-lapack-lib="-Wl,--start-group /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_intel_lp64.a /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_sequential.a /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_core.a -Wl,--end-group -lpthread -lm" --with-cxx-dialect=C++11 I would really appreciate any help on how to fix it. Thanks, Bikash -- Bikash S. Kanungo PhD Student Computational Materials Physics Group Mechanical Engineering University of Michigan -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Mar 21 17:51:11 2017 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 21 Mar 2017 17:51:11 -0500 Subject: [petsc-users] jmp_buf error while linking petsc In-Reply-To: References: Message-ID: Did you run 'make test' after insatalling PETSc? Were there errors with this test? [you can send us corresponding test.log] Satish On Tue, 21 Mar 2017, Bikash Kanungo wrote: > Hi, > > I have recently installed petsc-3.7.5. However, while linking petsc with my > code, I get the following compilation error: > > /home/bikashk/softwares/femdft_softwares/petsc/opt_3.7.5_mvapich2_64bit_complex/include/petscdraw.h(335): > error: identifier "jmp_buf" is undefined > PETSC_EXTERN jmp_buf PetscXIOErrorHandlerJumpBuf; > > > I used the following configure options while compiling petsc: > ./configure > --prefix=/home/bikashk/softwares/femdft_softwares/petsc/opt_3.7.5_mvapich2_64bit_complex > --with-debugging=no --with-scalar-type=complex CFLAGS="-O3 -xcore-avx2" > CXXFLAGS="-O3 -xcore-avx2" --with-mpi-dir=/opt/mvapich2/intel/ib/ > -with-blas-lib="-Wl,--start-group > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_intel_lp64.a > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_sequential.a > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_core.a > -Wl,--end-group -lpthread -lm" --with-lapack-lib="-Wl,--start-group > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_intel_lp64.a > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_sequential.a > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_core.a > -Wl,--end-group -lpthread -lm" --with-cxx-dialect=C++11 > > I would really appreciate any help on how to fix it. > > Thanks, > Bikash > From bikash at umich.edu Tue Mar 21 18:01:31 2017 From: bikash at umich.edu (Bikash Kanungo) Date: Tue, 21 Mar 2017 19:01:31 -0400 Subject: [petsc-users] jmp_buf error while linking petsc In-Reply-To: References: Message-ID: Hi Satish, Yes, I ran 'make test' and it didn't throw any error. 
Here's my test.log: Running test examples to verify correct installation Using PETSC_DIR=/home/bikashk/softwares/femdft_softwares/petsc/opt_3.7.5_mvapich2_64bit_complex and PETSC_ARCH= C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI process C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI processes Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 MPI process Completed test examples Thanks, Bikash On Tue, Mar 21, 2017 at 6:51 PM, Satish Balay wrote: > Did you run 'make test' after insatalling PETSc? Were there errors with > this test? > > [you can send us corresponding test.log] > > Satish > > On Tue, 21 Mar 2017, Bikash Kanungo wrote: > > > Hi, > > > > I have recently installed petsc-3.7.5. However, while linking petsc with > my > > code, I get the following compilation error: > > > > /home/bikashk/softwares/femdft_softwares/petsc/opt_3. > 7.5_mvapich2_64bit_complex/include/petscdraw.h(335): > > error: identifier "jmp_buf" is undefined > > PETSC_EXTERN jmp_buf PetscXIOErrorHandlerJumpBuf; > > > > > > I used the following configure options while compiling petsc: > > ./configure > > --prefix=/home/bikashk/softwares/femdft_softwares/ > petsc/opt_3.7.5_mvapich2_64bit_complex > > --with-debugging=no --with-scalar-type=complex CFLAGS="-O3 -xcore-avx2" > > CXXFLAGS="-O3 -xcore-avx2" --with-mpi-dir=/opt/mvapich2/intel/ib/ > > -with-blas-lib="-Wl,--start-group > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_intel_lp64.a > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_sequential.a > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_core.a > > -Wl,--end-group -lpthread -lm" --with-lapack-lib="-Wl,--start-group > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_intel_lp64.a > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_sequential.a > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_core.a > > -Wl,--end-group -lpthread -lm" --with-cxx-dialect=C++11 > > > > I would really appreciate any help on how to fix it. > > > > Thanks, > > Bikash > > > > -- Bikash S. Kanungo PhD Student Computational Materials Physics Group Mechanical Engineering University of Michigan -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Mar 21 18:09:16 2017 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 21 Mar 2017 18:09:16 -0500 Subject: [petsc-users] jmp_buf error while linking petsc In-Reply-To: References: Message-ID: Ok - Then the issue might be with your makefile. Perhaps its not a petsc formatted makefile. In this case - run 'make getincludedirs getlibs' in PETSC_DIR - and use the compilers, compile/link options listed there. Satish On Tue, 21 Mar 2017, Bikash Kanungo wrote: > Hi Satish, > > Yes, I ran 'make test' and it didn't throw any error. Here's my test.log: > > Running test examples to verify correct installation > Using > PETSC_DIR=/home/bikashk/softwares/femdft_softwares/petsc/opt_3.7.5_mvapich2_64bit_complex > and PETSC_ARCH= > C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI > process > C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI > processes > Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 > MPI process > Completed test examples > > > Thanks, > Bikash > > On Tue, Mar 21, 2017 at 6:51 PM, Satish Balay wrote: > > > Did you run 'make test' after insatalling PETSc? Were there errors with > > this test? 
> > > > [you can send us corresponding test.log] > > > > Satish > > > > On Tue, 21 Mar 2017, Bikash Kanungo wrote: > > > > > Hi, > > > > > > I have recently installed petsc-3.7.5. However, while linking petsc with > > my > > > code, I get the following compilation error: > > > > > > /home/bikashk/softwares/femdft_softwares/petsc/opt_3. > > 7.5_mvapich2_64bit_complex/include/petscdraw.h(335): > > > error: identifier "jmp_buf" is undefined > > > PETSC_EXTERN jmp_buf PetscXIOErrorHandlerJumpBuf; > > > > > > > > > I used the following configure options while compiling petsc: > > > ./configure > > > --prefix=/home/bikashk/softwares/femdft_softwares/ > > petsc/opt_3.7.5_mvapich2_64bit_complex > > > --with-debugging=no --with-scalar-type=complex CFLAGS="-O3 -xcore-avx2" > > > CXXFLAGS="-O3 -xcore-avx2" --with-mpi-dir=/opt/mvapich2/intel/ib/ > > > -with-blas-lib="-Wl,--start-group > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_intel_lp64.a > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_sequential.a > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_core.a > > > -Wl,--end-group -lpthread -lm" --with-lapack-lib="-Wl,--start-group > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_intel_lp64.a > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_sequential.a > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_core.a > > > -Wl,--end-group -lpthread -lm" --with-cxx-dialect=C++11 > > > > > > I would really appreciate any help on how to fix it. > > > > > > Thanks, > > > Bikash > > > > > > > > > > From balay at mcs.anl.gov Tue Mar 21 18:11:50 2017 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 21 Mar 2017 18:11:50 -0500 Subject: [petsc-users] jmp_buf error while linking petsc In-Reply-To: References: Message-ID: Should have said: make getccompiler getincludedirs getlinklibs Satish On Tue, 21 Mar 2017, Satish Balay wrote: > Ok - Then the issue might be with your makefile. Perhaps its not a > petsc formatted makefile. > > In this case - run 'make getincludedirs getlibs' in PETSC_DIR - and > use the compilers, compile/link options listed there. > > Satish > > On Tue, 21 Mar 2017, Bikash Kanungo wrote: > > > Hi Satish, > > > > Yes, I ran 'make test' and it didn't throw any error. Here's my test.log: > > > > Running test examples to verify correct installation > > Using > > PETSC_DIR=/home/bikashk/softwares/femdft_softwares/petsc/opt_3.7.5_mvapich2_64bit_complex > > and PETSC_ARCH= > > C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI > > process > > C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI > > processes > > Fortran example src/snes/examples/tutorials/ex5f run successfully with 1 > > MPI process > > Completed test examples > > > > > > Thanks, > > Bikash > > > > On Tue, Mar 21, 2017 at 6:51 PM, Satish Balay wrote: > > > > > Did you run 'make test' after insatalling PETSc? Were there errors with > > > this test? > > > > > > [you can send us corresponding test.log] > > > > > > Satish > > > > > > On Tue, 21 Mar 2017, Bikash Kanungo wrote: > > > > > > > Hi, > > > > > > > > I have recently installed petsc-3.7.5. However, while linking petsc with > > > my > > > > code, I get the following compilation error: > > > > > > > > /home/bikashk/softwares/femdft_softwares/petsc/opt_3. 
> > > 7.5_mvapich2_64bit_complex/include/petscdraw.h(335): > > > > error: identifier "jmp_buf" is undefined > > > > PETSC_EXTERN jmp_buf PetscXIOErrorHandlerJumpBuf; > > > > > > > > > > > > I used the following configure options while compiling petsc: > > > > ./configure > > > > --prefix=/home/bikashk/softwares/femdft_softwares/ > > > petsc/opt_3.7.5_mvapich2_64bit_complex > > > > --with-debugging=no --with-scalar-type=complex CFLAGS="-O3 -xcore-avx2" > > > > CXXFLAGS="-O3 -xcore-avx2" --with-mpi-dir=/opt/mvapich2/intel/ib/ > > > > -with-blas-lib="-Wl,--start-group > > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_intel_lp64.a > > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_sequential.a > > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_core.a > > > > -Wl,--end-group -lpthread -lm" --with-lapack-lib="-Wl,--start-group > > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_intel_lp64.a > > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_sequential.a > > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_core.a > > > > -Wl,--end-group -lpthread -lm" --with-cxx-dialect=C++11 > > > > > > > > I would really appreciate any help on how to fix it. > > > > > > > > Thanks, > > > > Bikash > > > > > > > > > > > > > > > > > > From zhaowenbo.npic at gmail.com Tue Mar 21 19:15:08 2017 From: zhaowenbo.npic at gmail.com (zhaowenbo.npic at gmail.com) Date: Wed, 22 Mar 2017 08:15:08 +0800 Subject: [petsc-users] [petsc-maint] Question about DMDA BOUNDARY_CONDITION set References: <2FE6B746-3744-4237-A834-A05751F9CAFA@mcs.anl.gov> Message-ID: <58d1c216.04ca620a.31450.491c@mx.google.com> An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 21 19:25:52 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 21 Mar 2017 19:25:52 -0500 Subject: [petsc-users] Segmentation fault due to TSDestroy In-Reply-To: References: Message-ID: <3EAB1BB0-1C9F-49DD-8393-E06D01D4CAEA@mcs.anl.gov> Thanks for letting us know, I added some additional error checking (soon to be) in master that will flag the use of incorrect NULL types for function pointers and thus prevent such crashes. Barry > On Mar 20, 2017, at 7:46 AM, Praveen C wrote: > > It turns out the problem was with this > > call TSMonitorSet(ts, Monitor, ctx, PETSC_NULL_OBJECT, ierr); CHKERRQ(ierr) > > The correct way is > > call TSMonitorSet(ts, Monitor, ctx, PETSC_NULL_FUNCTION, ierr); CHKERRQ(ierr) > > Thanks > praveen > > On Sat, Mar 18, 2017 at 9:30 PM, Satish Balay wrote: > Perhaps there is some memory corruption - you can try runnng the code with valgrind. > > Satish > > On Sat, 18 Mar 2017, Praveen C wrote: > > > Dear all > > > > I get a segmentation fault when I call TSDestroy. Without TSDestroy the > > code runs fine. I have included portion of my code below. > > > > subroutine runts(ctx) > > use userdata > > use comdata > > use mtsdata > > implicit none > > #include > > type(tsdata) :: ctx > > ! Local variables > > integer,parameter :: h = 100 ! File id for history file > > TS :: ts > > Vec :: u > > PetscErrorCode :: ierr > > external :: RHSFunction, Monitor > > > > call VecDuplicate(ctx%p%v_res, u, ierr); CHKERRQ(ierr) > > > > ! 
Copy initial condition into u > > call VecCopy(ctx%p%v_u, u, ierr); CHKERRQ(ierr) > > > > call TSCreate(PETSC_COMM_WORLD, ts, ierr); CHKERRQ(ierr) > > call TSSetProblemType(ts, TS_NONLINEAR, ierr); CHKERRQ(ierr) > > call TSSetRHSFunction(ts, PETSC_NULL_OBJECT, RHSFunction, ctx, ierr); > > CHKERRQ(ierr) > > call TSSetInitialTimeStep(ts, 0.0, dtg, ierr); CHKERRQ(ierr) > > call TSSetType(ts, TSRK, ierr); CHKERRQ(ierr); > > call TSSetDuration(ts, itmax, tfinal, ierr); CHKERRQ(ierr); > > call TSSetExactFinalTime(ts, TS_EXACTFINALTIME_MATCHSTEP, ierr); > > CHKERRQ(ierr); > > call TSMonitorSet(ts, Monitor, ctx, PETSC_NULL_OBJECT, ierr); > > CHKERRQ(ierr) > > call TSSetSolution(ts, u, ierr); CHKERRQ(ierr) > > call TSSetFromOptions(ts, ierr); CHKERRQ(ierr) > > call TSSetUp(ts, ierr); CHKERRQ(ierr) > > > > call TSSolve(ts, u, ierr); CHKERRQ(ierr) > > > > call VecCopy(u, ctx%p%v_u, ierr); CHKERRQ(ierr) > > call VecDestroy(u, ierr); CHKERRQ(ierr) > > call TSDestroy(ts, ierr); CHKERRQ(ierr) > > > > end subroutine runts > > > > Thanks > > praveen > > > > From bikash at umich.edu Tue Mar 21 19:51:14 2017 From: bikash at umich.edu (Bikash Kanungo) Date: Tue, 21 Mar 2017 20:51:14 -0400 Subject: [petsc-users] jmp_buf error while linking petsc In-Reply-To: References: Message-ID: Hi Satish, Thanks for the suggestions. I will change make makefiles and try to make it work. However, I would like to state that the above problem appears only while linking to petsc-3.7.5. My code compiles well with petsc-3.6.3. The configuration options were same for both petsc-3.7.5 and petsc-3.6.3. Thanks again, Bikash On Tue, Mar 21, 2017 at 7:11 PM, Satish Balay wrote: > Should have said: > > make getccompiler getincludedirs getlinklibs > > Satish > > On Tue, 21 Mar 2017, Satish Balay wrote: > > > Ok - Then the issue might be with your makefile. Perhaps its not a > > petsc formatted makefile. > > > > In this case - run 'make getincludedirs getlibs' in PETSC_DIR - and > > use the compilers, compile/link options listed there. > > > > Satish > > > > On Tue, 21 Mar 2017, Bikash Kanungo wrote: > > > > > Hi Satish, > > > > > > Yes, I ran 'make test' and it didn't throw any error. Here's my > test.log: > > > > > > Running test examples to verify correct installation > > > Using > > > PETSC_DIR=/home/bikashk/softwares/femdft_softwares/ > petsc/opt_3.7.5_mvapich2_64bit_complex > > > and PETSC_ARCH= > > > C/C++ example src/snes/examples/tutorials/ex19 run successfully with > 1 MPI > > > process > > > C/C++ example src/snes/examples/tutorials/ex19 run successfully with > 2 MPI > > > processes > > > Fortran example src/snes/examples/tutorials/ex5f run successfully > with 1 > > > MPI process > > > Completed test examples > > > > > > > > > Thanks, > > > Bikash > > > > > > On Tue, Mar 21, 2017 at 6:51 PM, Satish Balay > wrote: > > > > > > > Did you run 'make test' after insatalling PETSc? Were there errors > with > > > > this test? > > > > > > > > [you can send us corresponding test.log] > > > > > > > > Satish > > > > > > > > On Tue, 21 Mar 2017, Bikash Kanungo wrote: > > > > > > > > > Hi, > > > > > > > > > > I have recently installed petsc-3.7.5. However, while linking > petsc with > > > > my > > > > > code, I get the following compilation error: > > > > > > > > > > /home/bikashk/softwares/femdft_softwares/petsc/opt_3. 
> > > > 7.5_mvapich2_64bit_complex/include/petscdraw.h(335): > > > > > error: identifier "jmp_buf" is undefined > > > > > PETSC_EXTERN jmp_buf PetscXIOErrorHandlerJumpBuf; > > > > > > > > > > > > > > > I used the following configure options while compiling petsc: > > > > > ./configure > > > > > --prefix=/home/bikashk/softwares/femdft_softwares/ > > > > petsc/opt_3.7.5_mvapich2_64bit_complex > > > > > --with-debugging=no --with-scalar-type=complex CFLAGS="-O3 > -xcore-avx2" > > > > > CXXFLAGS="-O3 -xcore-avx2" --with-mpi-dir=/opt/mvapich2/intel/ib/ > > > > > -with-blas-lib="-Wl,--start-group > > > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_ > intel_lp64.a > > > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_ > sequential.a > > > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_core.a > > > > > -Wl,--end-group -lpthread -lm" --with-lapack-lib="-Wl,-- > start-group > > > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_ > intel_lp64.a > > > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_ > sequential.a > > > > > /opt/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_core.a > > > > > -Wl,--end-group -lpthread -lm" --with-cxx-dialect=C++11 > > > > > > > > > > I would really appreciate any help on how to fix it. > > > > > > > > > > Thanks, > > > > > Bikash > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- Bikash S. Kanungo PhD Student Computational Materials Physics Group Mechanical Engineering University of Michigan -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Mar 21 19:58:06 2017 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 21 Mar 2017 19:58:06 -0500 Subject: [petsc-users] jmp_buf error while linking petsc In-Reply-To: References: Message-ID: On Tue, 21 Mar 2017, Bikash Kanungo wrote: > Hi Satish, > > Thanks for the suggestions. I will change make makefiles and try to make it > work. However, I would like to state that the above problem appears only > while linking to petsc-3.7.5. My code compiles well with petsc-3.6.3. The > configuration options were same for both petsc-3.7.5 and petsc-3.6.3. Since you have non-petsc makefile - you would need to modify compiler options [as listed by the command below] when switching between petsc-3.6.3 and petsc-3.7.5. Any error in this switch [i.e if you kept some petsc-3.6.3 options in the makefile] could result in mixing include files or libraries across these version - resulting in errors. You could check if PETSc makefile format would work for you - so that the makefile is portable - and doesn't get into such isssues. [you would use the same makefile - but switch only PETSC_DIR/PETSC_ARCH values] Satish > On Tue, Mar 21, 2017 at 7:11 PM, Satish Balay wrote: > > > make getccompiler getincludedirs getlinklibs From natacha.bereux at gmail.com Wed Mar 22 05:03:27 2017 From: natacha.bereux at gmail.com (Natacha BEREUX) Date: Wed, 22 Mar 2017 11:03:27 +0100 Subject: [petsc-users] Configure nested PCFIELDSPLIT with general index sets In-Reply-To: References: <6496846F-19F8-4494-87E1-DDC390513370@imperial.ac.uk> Message-ID: Hello, if my understanding is correct, the approach proposed by Matt and Lawrence is the following : - create a DMShell (DMShellCreate) - define my own CreateFieldDecomposition to return the index sets I need (for displacement, pressure and temperature degrees of freedom) : myCreateFieldDecomposition(... 
) - set it in the DMShell ( DMShellSetCreateFieldDecomposition) - then sets the DM in KSP context (KSPSetDM) I have some more questions - I did not succeed in setting my own CreateFieldDecomposition in the DMShell : link fails with " unknown reference to ? dmshellsetcreatefielddecomposition_ ?. Could it be a Fortran problem (I am using Fortran)? Is this routine available in PETSc Fortran interface ? - CreateFieldDecomposition is supposed to return an array of dms (to define the fields). I am not able to return such datas. Do I return a PETSC_NULL_OBJECT instead ? - do I have to provide something else to define the DMShell ? Thanks a lot for your help Natacha On Tue, Mar 21, 2017 at 2:44 PM, Natacha BEREUX wrote: > Thanks for your quick answers. To be honest, I am not familiar at all with > DMShells and DMPlexes. But since it is what I need, I am going to try it. > Thanks again for your advices, > Natacha > > On Tue, Mar 21, 2017 at 2:27 PM, Lawrence Mitchell < > lawrence.mitchell at imperial.ac.uk> wrote: > >> >> > On 21 Mar 2017, at 13:24, Matthew Knepley wrote: >> > >> > I think the remedy is as easy as specifying a DMShell that has a >> PetscSection (DMSetDefaultSection) with your ordering, and >> > I think this is how Firedrake (http://www.firedrakeproject.org/) does >> it. >> >> We actually don't use a section, but we do provide >> DMCreateFieldDecomposition_Shell. >> >> If you have a section that describes all the fields, then I think if the >> DMShell knows about it, you effectively get the same behaviour as DMPlex >> (which does the decomposition in the same manner?). >> >> > However, I usually use a DMPlex which knows about my >> > mesh, so I am not sure if this strategy has any holes. >> >> I haven't noticed anything yet. >> >> Lawrence > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Mar 22 06:33:23 2017 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 22 Mar 2017 11:33:23 +0000 Subject: [petsc-users] Configure nested PCFIELDSPLIT with general index sets In-Reply-To: References: <6496846F-19F8-4494-87E1-DDC390513370@imperial.ac.uk> Message-ID: On Wed, Mar 22, 2017 at 10:03 AM, Natacha BEREUX wrote: > Hello, > if my understanding is correct, the approach proposed by Matt and Lawrence > is the following : > - create a DMShell (DMShellCreate) > - define my own CreateFieldDecomposition to return the index sets I need > (for displacement, pressure and temperature degrees of freedom) : > myCreateFieldDecomposition(... ) > - set it in the DMShell ( DMShellSetCreateFieldDecomposition) > - then sets the DM in KSP context (KSPSetDM) > > I have some more questions > - I did not succeed in setting my own CreateFieldDecomposition in the > DMShell : link fails with " unknown reference to ? > dmshellsetcreatefielddecomposition_ ?. Could it be a Fortran problem (I > am using Fortran)? Is this routine available in PETSc Fortran interface ? > \ > Yes, exactly. The Fortran interface for passing function pointers is complex, and no one has added this function yet. > - CreateFieldDecomposition is supposed to return an array of dms (to > define the fields). I am not able to return such datas. Do I return a > PETSC_NULL_OBJECT instead ? > Yes. > - do I have to provide something else to define the DMShell ? > I think you will have to return local and global vectors, but this just means creating a vector of the correct size and distribution. 
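In C (where the binding does exist) the whole setup is roughly the following; this is an untested sketch, with MyCreateFieldDecomposition standing for your callback that returns the u/p/t index sets, A for the assembled matrix, and x for any global vector with the right layout:

DM shell;

ierr = DMShellCreate(PETSC_COMM_WORLD, &shell);CHKERRQ(ierr);
ierr = DMShellSetGlobalVector(shell, x);CHKERRQ(ierr);           /* template for global vectors */
ierr = DMShellSetCreateFieldDecomposition(shell, MyCreateFieldDecomposition);CHKERRQ(ierr);
ierr = KSPSetDM(ksp, shell);CHKERRQ(ierr);
ierr = KSPSetDMActive(ksp, PETSC_FALSE);CHKERRQ(ierr);           /* the DM only describes the layout, not the operator */
ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);

With the shell DM attached to the KSP, the -pc_fieldsplit_0_fields 0,1 and -pc_fieldsplit_1_fields 2 options should then have the field list they need to form the combined (u,p) split.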
Thanks, Matt > Thanks a lot for your help > Natacha > > On Tue, Mar 21, 2017 at 2:44 PM, Natacha BEREUX > wrote: > >> Thanks for your quick answers. To be honest, I am not familiar at all >> with DMShells and DMPlexes. But since it is what I need, I am going to try >> it. >> Thanks again for your advices, >> Natacha >> >> On Tue, Mar 21, 2017 at 2:27 PM, Lawrence Mitchell < >> lawrence.mitchell at imperial.ac.uk> wrote: >> >>> >>> > On 21 Mar 2017, at 13:24, Matthew Knepley wrote: >>> > >>> > I think the remedy is as easy as specifying a DMShell that has a >>> PetscSection (DMSetDefaultSection) with your ordering, and >>> > I think this is how Firedrake (http://www.firedrakeproject.org/) does >>> it. >>> >>> We actually don't use a section, but we do provide >>> DMCreateFieldDecomposition_Shell. >>> >>> If you have a section that describes all the fields, then I think if the >>> DMShell knows about it, you effectively get the same behaviour as DMPlex >>> (which does the decomposition in the same manner?). >>> >>> > However, I usually use a DMPlex which knows about my >>> > mesh, so I am not sure if this strategy has any holes. >>> >>> I haven't noticed anything yet. >>> >>> Lawrence >> >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Wed Mar 22 09:58:54 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Wed, 22 Mar 2017 14:58:54 +0000 Subject: [petsc-users] left and right preconditioning with a constant null space Message-ID: <1490194734290.31169@marin.nl> I'm solving the Navier-Stokes equations using PCFieldSplit type Schur and Selfp. This particular case has only Neumann conditions for the pressure field. With left preconditioning and no nullspace, I see that the KSP solver for S does not converge (attachment "left_nonullsp") in either norm. When I attach the constant null space to A11, it gets passed on to S and the KSP solver for S does converge in the preconditioned norm only (attachment "left"). However, right preconditioning uses the unpreconditioned norm and therefore doesn't converge (attachment "right"), regardless of whether the nullspace is attached or not. Should I conclude that right preconditioning cannot be used in combination with a null space? Chris dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl MARIN news: http://www.marin.nl/web/News/News-items/H2020-marinergi-project-launched.htm -------------- next part -------------- A non-text attachment was scrubbed... Name: left_nonullsp Type: application/octet-stream Size: 21683 bytes Desc: left_nonullsp URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: left Type: application/octet-stream Size: 11965 bytes Desc: left URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: right Type: application/octet-stream Size: 21966 bytes Desc: right URL: From hng.email at gmail.com Wed Mar 22 10:20:35 2017 From: hng.email at gmail.com (Hom Nath Gharti) Date: Wed, 22 Mar 2017 11:20:35 -0400 Subject: [petsc-users] CMake and PETSc Message-ID: Dear all, Does FindPETSc.cmake (https://github.com/jedbrown/cmake-modules) work with Fortran as well? 
Thanks, Hom From aherrema at iastate.edu Wed Mar 22 10:38:44 2017 From: aherrema at iastate.edu (Austin Herrema) Date: Wed, 22 Mar 2017 10:38:44 -0500 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code Message-ID: Hello all, I am trying to do as the subject line describes--use f2py to run a large PETSc/SLEPc fortran finite element code through python. I really only need to wrap the outermost function of the fortran code--don't need any access to subroutines. I'll describe what I'm doing, some of which I'm not 100% confident is correct (not much experience with f2py)--feel free to correct/redirect any of it. First, I'm editing the fortran code so that the top-level function is a subroutine rather than a main program (it's my understanding that this is required for f2py?). I use my regular makefile (modeled after a standard SLEPc makefile from the user guide) to compile all of the .f90/.F90 files (many of them) to .o files using SLEPc/PETSc rules. The final linking phase fails since there isn't a main program, but I'm just ignoring that for now since that's not what I ultimately need... Using a python script, I set up and run the f2py command. Right now it has the form... "f2py -c -m modname outer_driver.f90 file1.o file2.o file3.o..." etc. This appears to work, but upon attempting to import, it cannot find the SLEPc (and, I presume, PETSc) objects/functions: >>> import mod_name Traceback (most recent call last): File "", line 1, in ImportError: dlopen(./mod_name.so, 2): Symbol not found: _epscreate_ Referenced from: ./mod_name.so Expected in: flat namespace in ./mod_name.so Based on this discussion , I believe I need to somehow include PETSc/SLEPc info when linking with f2py. Is that correct? Any direction on how to do that? I don't quite understand what the OP of that question ultimately ended up doing to get it to work. I tried using the -I flag pointing to the slepc_common file (like the SLEPc makefile does). The problem is that that is a file, not a directory, which contains a number of other makefile-style variables--so it works to include it in a makefile, but doesn't work in python. Maybe there are only a few directories I really need to include? Or is it possible to somehow run f2py through a makefile? I'm a bit ignorant in that realm as well. Thank you for any help or suggestions! Austin -- *Austin Herrema* PhD Student | Graduate Research Assistant | Iowa State University Wind Energy Science, Engineering, and Policy | Mechanical Engineering -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Mar 22 10:47:16 2017 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 22 Mar 2017 15:47:16 +0000 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: <1490194734290.31169@marin.nl> References: <1490194734290.31169@marin.nl> Message-ID: On Wed, Mar 22, 2017 at 2:58 PM, Klaij, Christiaan wrote: > I'm solving the Navier-Stokes equations using PCFieldSplit type > Schur and Selfp. This particular case has only Neumann conditions > for the pressure field. > > With left preconditioning and no nullspace, I see that the KSP > solver for S does not converge (attachment "left_nonullsp") in > either norm. > > When I attach the constant null space to A11, it gets passed on > to S and the KSP solver for S does converge in the preconditioned > norm only (attachment "left"). 
> > However, right preconditioning uses the unpreconditioned norm and > therefore doesn't converge (attachment "right"), regardless of > whether the nullspace is attached or not. Should I conclude that > right preconditioning cannot be used in combination with a null > space? > No, neither of your solves is working. The left preconditioned version is just hiding that fact. You should start by checking that the exact solve works. Namely full Schur factorization with exact solves for A and S. Since you do not have a matrix for S (unless you tell it do use "full"), just use a very low (1e-10) tolerance. My guess is that something is off with your null space specification. Thanks, Matt > Chris > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl > > MARIN news: http://www.marin.nl/web/News/News-items/H2020-marinergi- > project-launched.htm > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Wed Mar 22 11:04:42 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Wed, 22 Mar 2017 16:04:42 +0000 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: References: <1490194734290.31169@marin.nl>, Message-ID: <1490198682595.54991@marin.nl> Thanks Matt, I will try your suggestion and let you know. In the meantime this is what I did to set the constant null space: call MatNullSpaceCreate(MPI_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL_OBJECT,nullsp,ierr); CHKERRQ(ierr) call MatSetNullSpace(aa_sub(4),nullsp,ierr); CHKERRQ(ierr) call MatNullSpaceDestroy(nullsp,ierr); CHKERRQ(ierr) where aa_sub(4) corresponds to A11. This is called before begin/end mat assembly. Chris dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | www.marin.nl [LinkedIn] [YouTube] [Twitter] [Facebook] MARIN news: Software seminar in Shanghai for the first time, March 28 ________________________________ From: Matthew Knepley Sent: Wednesday, March 22, 2017 4:47 PM To: Klaij, Christiaan Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] left and right preconditioning with a constant null space On Wed, Mar 22, 2017 at 2:58 PM, Klaij, Christiaan > wrote: I'm solving the Navier-Stokes equations using PCFieldSplit type Schur and Selfp. This particular case has only Neumann conditions for the pressure field. With left preconditioning and no nullspace, I see that the KSP solver for S does not converge (attachment "left_nonullsp") in either norm. When I attach the constant null space to A11, it gets passed on to S and the KSP solver for S does converge in the preconditioned norm only (attachment "left"). However, right preconditioning uses the unpreconditioned norm and therefore doesn't converge (attachment "right"), regardless of whether the nullspace is attached or not. Should I conclude that right preconditioning cannot be used in combination with a null space? No, neither of your solves is working. The left preconditioned version is just hiding that fact. You should start by checking that the exact solve works. Namely full Schur factorization with exact solves for A and S. Since you do not have a matrix for S (unless you tell it do use "full"), just use a very low (1e-10) tolerance. 
My guess is that something is off with your null space specification. Thanks, Matt Chris dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl MARIN news: http://www.marin.nl/web/News/News-items/H2020-marinergi-project-launched.htm -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image38eecd.PNG Type: image/png Size: 293 bytes Desc: image38eecd.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imagef3a480.PNG Type: image/png Size: 331 bytes Desc: imagef3a480.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image9d5515.PNG Type: image/png Size: 333 bytes Desc: image9d5515.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imagec62fb8.PNG Type: image/png Size: 253 bytes Desc: imagec62fb8.PNG URL: From jroman at dsic.upv.es Wed Mar 22 11:20:06 2017 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 22 Mar 2017 17:20:06 +0100 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: Message-ID: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> Try the following: $ cd $SLEPC_DIR $ make getlinklibs_slepc Then copy the output and paste it at the end of your f2py command. Jose > El 22 mar 2017, a las 16:38, Austin Herrema escribi?: > > Hello all, > > I am trying to do as the subject line describes--use f2py to run a large PETSc/SLEPc fortran finite element code through python. I really only need to wrap the outermost function of the fortran code--don't need any access to subroutines. I'll describe what I'm doing, some of which I'm not 100% confident is correct (not much experience with f2py)--feel free to correct/redirect any of it. > > First, I'm editing the fortran code so that the top-level function is a subroutine rather than a main program (it's my understanding that this is required for f2py?). > > I use my regular makefile (modeled after a standard SLEPc makefile from the user guide) to compile all of the .f90/.F90 files (many of them) to .o files using SLEPc/PETSc rules. The final linking phase fails since there isn't a main program, but I'm just ignoring that for now since that's not what I ultimately need... > > Using a python script, I set up and run the f2py command. Right now it has the form... > "f2py -c -m modname outer_driver.f90 file1.o file2.o file3.o..." etc. > > This appears to work, but upon attempting to import, it cannot find the SLEPc (and, I presume, PETSc) objects/functions: > > >>> import mod_name > Traceback (most recent call last): > File "", line 1, in > ImportError: dlopen(./mod_name.so, 2): Symbol not found: _epscreate_ > Referenced from: ./mod_name.so > Expected in: flat namespace > in ./mod_name.so > > Based on this discussion, I believe I need to somehow include PETSc/SLEPc info when linking with f2py. Is that correct? Any direction on how to do that? I don't quite understand what the OP of that question ultimately ended up doing to get it to work. I tried using the -I flag pointing to the slepc_common file (like the SLEPc makefile does). 
The problem is that that is a file, not a directory, which contains a number of other makefile-style variables--so it works to include it in a makefile, but doesn't work in python. Maybe there are only a few directories I really need to include? Or is it possible to somehow run f2py through a makefile? I'm a bit ignorant in that realm as well. > > Thank you for any help or suggestions! > Austin > > > -- > Austin Herrema > PhD Student | Graduate Research Assistant | Iowa State University > Wind Energy Science, Engineering, and Policy | Mechanical Engineering From knepley at gmail.com Wed Mar 22 11:39:00 2017 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 22 Mar 2017 16:39:00 +0000 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: <1490198682595.54991@marin.nl> References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> Message-ID: On Wed, Mar 22, 2017 at 4:04 PM, Klaij, Christiaan wrote: > Thanks Matt, I will try your suggestion and let you know. In the > meantime this is what I did to set the constant null space: > > call MatNullSpaceCreate(MPI_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL_OBJECT,nullsp,ierr); > CHKERRQ(ierr) > call MatSetNullSpace(aa_sub(4),nullsp,ierr); CHKERRQ(ierr) > call MatNullSpaceDestroy(nullsp,ierr); CHKERRQ(ierr) > > where aa_sub(4) corresponds to A11. This is called before > begin/end mat assembly. > Hmm, I wonder if the problem is that A11 has the nullspace, but the PC pmat is actually B^T diag(A)^{-1} B. I have to look where the solver is actually taking the nullspace from. We need to improve the -ksp_view output to make this stuff much easier to see. Matt > Chris > > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 <+31%20317%20493%20344> | C.Klaij at marin.nl | > www.marin.nl > > [image: LinkedIn] [image: > YouTube] [image: Twitter] > [image: Facebook] > > MARIN news: Software seminar in Shanghai for the first time, March 28 > > > ------------------------------ > *From:* Matthew Knepley > *Sent:* Wednesday, March 22, 2017 4:47 PM > *To:* Klaij, Christiaan > *Cc:* petsc-users at mcs.anl.gov > *Subject:* Re: [petsc-users] left and right preconditioning with a > constant null space > > On Wed, Mar 22, 2017 at 2:58 PM, Klaij, Christiaan > wrote: > >> I'm solving the Navier-Stokes equations using PCFieldSplit type >> Schur and Selfp. This particular case has only Neumann conditions >> for the pressure field. >> >> With left preconditioning and no nullspace, I see that the KSP >> solver for S does not converge (attachment "left_nonullsp") in >> either norm. >> >> When I attach the constant null space to A11, it gets passed on >> to S and the KSP solver for S does converge in the preconditioned >> norm only (attachment "left"). >> >> However, right preconditioning uses the unpreconditioned norm and >> therefore doesn't converge (attachment "right"), regardless of >> whether the nullspace is attached or not. Should I conclude that >> right preconditioning cannot be used in combination with a null >> space? >> > > No, neither of your solves is working. The left preconditioned version is > just hiding that fact. > > You should start by checking that the exact solve works. Namely full Schur > factorization > with exact solves for A and S. Since you do not have a matrix for S > (unless you tell it do > use "full"), just use a very low (1e-10) tolerance. My guess is that > something is off with > your null space specification. 
> > Thanks, > > Matt > > >> Chris >> >> >> dr. ir. Christiaan Klaij | Senior Researcher | Research & Development >> MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | >> http://www.marin.nl >> >> MARIN news: http://www.marin.nl/web/News/News-items/H2020-marinergi-proj >> ect-launched.htm >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imagef3a480.PNG Type: image/png Size: 331 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image9d5515.PNG Type: image/png Size: 333 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image38eecd.PNG Type: image/png Size: 293 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imagec62fb8.PNG Type: image/png Size: 253 bytes Desc: not available URL: From lawrence.mitchell at imperial.ac.uk Wed Mar 22 11:59:40 2017 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Wed, 22 Mar 2017 16:59:40 +0000 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> Message-ID: <09D96A50-3070-4D55-B2E7-E251F1F17C35@imperial.ac.uk> > On 22 Mar 2017, at 16:39, Matthew Knepley wrote: > > Hmm, I wonder if the problem is that A11 has the nullspace, but the PC pmat is actually B^T diag(A)^{-1} B. I have > to look where the solver is actually taking the nullspace from. We need to improve the -ksp_view output to make this > stuff much easier to see. I think the nullspace for S comes from the nullspace you have composed, using PetscObjectCompose, with the IS defining the a11 block: https://bitbucket.org/petsc/petsc/src/9914757c790456e4369968b050152728564cdbae/src/ksp/pc/impls/fieldsplit/fieldsplit.c?at=master&fileviewer=file-view-default#fieldsplit.c-587 Then, if the pmat has an attached nullspace, that overrides it: https://bitbucket.org/petsc/petsc/src/9914757c790456e4369968b050152728564cdbae/src/ksp/pc/impls/fieldsplit/fieldsplit.c?at=master&fileviewer=file-view-default#fieldsplit.c-718 But with selfp, you have no control over pmat, so you need to send the nullspace in via the IS. Lawrence -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Mar 22 12:29:27 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 22 Mar 2017 12:29:27 -0500 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> Message-ID: <93F249E6-656D-429A-96D2-E8831B406334@mcs.anl.gov> Lisandro, We've had a couple questions similar to this with f2py; is there a way we could add to the PETSc/SLEPc makefile rules something to allow people to trivially use f2py without having to make their own (often incorrect) manual command lines? Thanks Barry > On Mar 22, 2017, at 11:20 AM, Jose E. 
Roman wrote: > > Try the following: > $ cd $SLEPC_DIR > $ make getlinklibs_slepc > Then copy the output and paste it at the end of your f2py command. > > Jose > > >> El 22 mar 2017, a las 16:38, Austin Herrema escribi?: >> >> Hello all, >> >> I am trying to do as the subject line describes--use f2py to run a large PETSc/SLEPc fortran finite element code through python. I really only need to wrap the outermost function of the fortran code--don't need any access to subroutines. I'll describe what I'm doing, some of which I'm not 100% confident is correct (not much experience with f2py)--feel free to correct/redirect any of it. >> >> First, I'm editing the fortran code so that the top-level function is a subroutine rather than a main program (it's my understanding that this is required for f2py?). >> >> I use my regular makefile (modeled after a standard SLEPc makefile from the user guide) to compile all of the .f90/.F90 files (many of them) to .o files using SLEPc/PETSc rules. The final linking phase fails since there isn't a main program, but I'm just ignoring that for now since that's not what I ultimately need... >> >> Using a python script, I set up and run the f2py command. Right now it has the form... >> "f2py -c -m modname outer_driver.f90 file1.o file2.o file3.o..." etc. >> >> This appears to work, but upon attempting to import, it cannot find the SLEPc (and, I presume, PETSc) objects/functions: >> >>>>> import mod_name >> Traceback (most recent call last): >> File "", line 1, in >> ImportError: dlopen(./mod_name.so, 2): Symbol not found: _epscreate_ >> Referenced from: ./mod_name.so >> Expected in: flat namespace >> in ./mod_name.so >> >> Based on this discussion, I believe I need to somehow include PETSc/SLEPc info when linking with f2py. Is that correct? Any direction on how to do that? I don't quite understand what the OP of that question ultimately ended up doing to get it to work. I tried using the -I flag pointing to the slepc_common file (like the SLEPc makefile does). The problem is that that is a file, not a directory, which contains a number of other makefile-style variables--so it works to include it in a makefile, but doesn't work in python. Maybe there are only a few directories I really need to include? Or is it possible to somehow run f2py through a makefile? I'm a bit ignorant in that realm as well. >> >> Thank you for any help or suggestions! >> Austin >> >> >> -- >> Austin Herrema >> PhD Student | Graduate Research Assistant | Iowa State University >> Wind Energy Science, Engineering, and Policy | Mechanical Engineering > From aherrema at iastate.edu Wed Mar 22 13:08:22 2017 From: aherrema at iastate.edu (Austin Herrema) Date: Wed, 22 Mar 2017 18:08:22 +0000 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> Message-ID: Thank you for the suggestion! Seems like a reasonable way to go. Not working for me, however, I suspect because I'm using homebrew installations of PETSc and SLEPc (I don't think all the makefiles are kept). Any other way to do the same thing by chance? Worst case I could use a non-homebrew installation but I'd prefer not to mess with that if I can help it... Thanks, Austin On Wed, Mar 22, 2017 at 11:20 AM Jose E. Roman wrote: > Try the following: > $ cd $SLEPC_DIR > $ make getlinklibs_slepc > Then copy the output and paste it at the end of your f2py command. 
> > Jose > > > > El 22 mar 2017, a las 16:38, Austin Herrema > escribi?: > > > > Hello all, > > > > I am trying to do as the subject line describes--use f2py to run a large > PETSc/SLEPc fortran finite element code through python. I really only need > to wrap the outermost function of the fortran code--don't need any access > to subroutines. I'll describe what I'm doing, some of which I'm not 100% > confident is correct (not much experience with f2py)--feel free to > correct/redirect any of it. > > > > First, I'm editing the fortran code so that the top-level function is a > subroutine rather than a main program (it's my understanding that this is > required for f2py?). > > > > I use my regular makefile (modeled after a standard SLEPc makefile from > the user guide) to compile all of the .f90/.F90 files (many of them) to .o > files using SLEPc/PETSc rules. The final linking phase fails since there > isn't a main program, but I'm just ignoring that for now since that's not > what I ultimately need... > > > > Using a python script, I set up and run the f2py command. Right now it > has the form... > > "f2py -c -m modname outer_driver.f90 file1.o file2.o file3.o..." etc. > > > > This appears to work, but upon attempting to import, it cannot find the > SLEPc (and, I presume, PETSc) objects/functions: > > > > >>> import mod_name > > Traceback (most recent call last): > > File "", line 1, in > > ImportError: dlopen(./mod_name.so, 2): Symbol not found: _epscreate_ > > Referenced from: ./mod_name.so > > Expected in: flat namespace > > in ./mod_name.so > > > > Based on this discussion, I believe I need to somehow include > PETSc/SLEPc info when linking with f2py. Is that correct? Any direction on > how to do that? I don't quite understand what the OP of that question > ultimately ended up doing to get it to work. I tried using the -I flag > pointing to the slepc_common file (like the SLEPc makefile does). The > problem is that that is a file, not a directory, which contains a number of > other makefile-style variables--so it works to include it in a makefile, > but doesn't work in python. Maybe there are only a few directories I really > need to include? Or is it possible to somehow run f2py through a makefile? > I'm a bit ignorant in that realm as well. > > > > Thank you for any help or suggestions! > > Austin > > > > > > -- > > Austin Herrema > > PhD Student | Graduate Research Assistant | Iowa State University > > Wind Energy Science, Engineering, and Policy | Mechanical Engineering > > -- *Austin Herrema* PhD Student | Graduate Research Assistant | Iowa State University Wind Energy Science, Engineering, and Policy | Mechanical Engineering -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Mar 22 13:23:35 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 22 Mar 2017 13:23:35 -0500 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> Message-ID: > On Mar 22, 2017, at 1:08 PM, Austin Herrema wrote: > > Thank you for the suggestion! Seems like a reasonable way to go. Not working for me, however, I suspect because I'm using homebrew installations of PETSc and SLEPc (I don't think all the makefiles are kept). Any other way to do the same thing by chance? Worst case I could use a non-homebrew installation but I'd prefer not to mess with that if I can help it... How do you link a "regular" SLEPc C program using the home-brew libraries? 
You need basically the same link line for f2py as you need for C programs. > > Thanks, > Austin > > On Wed, Mar 22, 2017 at 11:20 AM Jose E. Roman wrote: > Try the following: > $ cd $SLEPC_DIR > $ make getlinklibs_slepc > Then copy the output and paste it at the end of your f2py command. > > Jose > > > > El 22 mar 2017, a las 16:38, Austin Herrema escribi?: > > > > Hello all, > > > > I am trying to do as the subject line describes--use f2py to run a large PETSc/SLEPc fortran finite element code through python. I really only need to wrap the outermost function of the fortran code--don't need any access to subroutines. I'll describe what I'm doing, some of which I'm not 100% confident is correct (not much experience with f2py)--feel free to correct/redirect any of it. > > > > First, I'm editing the fortran code so that the top-level function is a subroutine rather than a main program (it's my understanding that this is required for f2py?). > > > > I use my regular makefile (modeled after a standard SLEPc makefile from the user guide) to compile all of the .f90/.F90 files (many of them) to .o files using SLEPc/PETSc rules. The final linking phase fails since there isn't a main program, but I'm just ignoring that for now since that's not what I ultimately need... > > > > Using a python script, I set up and run the f2py command. Right now it has the form... > > "f2py -c -m modname outer_driver.f90 file1.o file2.o file3.o..." etc. > > > > This appears to work, but upon attempting to import, it cannot find the SLEPc (and, I presume, PETSc) objects/functions: > > > > >>> import mod_name > > Traceback (most recent call last): > > File "", line 1, in > > ImportError: dlopen(./mod_name.so, 2): Symbol not found: _epscreate_ > > Referenced from: ./mod_name.so > > Expected in: flat namespace > > in ./mod_name.so > > > > Based on this discussion, I believe I need to somehow include PETSc/SLEPc info when linking with f2py. Is that correct? Any direction on how to do that? I don't quite understand what the OP of that question ultimately ended up doing to get it to work. I tried using the -I flag pointing to the slepc_common file (like the SLEPc makefile does). The problem is that that is a file, not a directory, which contains a number of other makefile-style variables--so it works to include it in a makefile, but doesn't work in python. Maybe there are only a few directories I really need to include? Or is it possible to somehow run f2py through a makefile? I'm a bit ignorant in that realm as well. > > > > Thank you for any help or suggestions! > > Austin > > > > > > -- > > Austin Herrema > > PhD Student | Graduate Research Assistant | Iowa State University > > Wind Energy Science, Engineering, and Policy | Mechanical Engineering > > -- > Austin Herrema > PhD Student | Graduate Research Assistant | Iowa State University > Wind Energy Science, Engineering, and Policy | Mechanical Engineering From jroman at dsic.upv.es Wed Mar 22 13:39:13 2017 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 22 Mar 2017 19:39:13 +0100 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> Message-ID: <51713281-854F-4E7A-8438-AA8F453B5CA5@dsic.upv.es> > El 22 mar 2017, a las 19:23, Barry Smith escribi?: > > >> On Mar 22, 2017, at 1:08 PM, Austin Herrema wrote: >> >> Thank you for the suggestion! Seems like a reasonable way to go. 
Not working for me, however, I suspect because I'm using homebrew installations of PETSc and SLEPc (I don't think all the makefiles are kept). Any other way to do the same thing by chance? Worst case I could use a non-homebrew installation but I'd prefer not to mess with that if I can help it... > > How do you link a "regular" SLEPc C program using the home-brew libraries? You need basically the same link line for f2py as you need for C programs. What Barry may be suggesting is: instead of using a script to invoke f2py, add a rule to your makefile modname.so: outer_driver.f90 f2py -c -m modname outer_driver.f90 file1.o file2.o file3.o ${SLEPC_EPS_LIB} Then 'make modname.so' will pick the libraries from SLEPc makefiles. Jose > >> >> Thanks, >> Austin >> >> On Wed, Mar 22, 2017 at 11:20 AM Jose E. Roman wrote: >> Try the following: >> $ cd $SLEPC_DIR >> $ make getlinklibs_slepc >> Then copy the output and paste it at the end of your f2py command. >> >> Jose >> >> >>> El 22 mar 2017, a las 16:38, Austin Herrema escribi?: >>> >>> Hello all, >>> >>> I am trying to do as the subject line describes--use f2py to run a large PETSc/SLEPc fortran finite element code through python. I really only need to wrap the outermost function of the fortran code--don't need any access to subroutines. I'll describe what I'm doing, some of which I'm not 100% confident is correct (not much experience with f2py)--feel free to correct/redirect any of it. >>> >>> First, I'm editing the fortran code so that the top-level function is a subroutine rather than a main program (it's my understanding that this is required for f2py?). >>> >>> I use my regular makefile (modeled after a standard SLEPc makefile from the user guide) to compile all of the .f90/.F90 files (many of them) to .o files using SLEPc/PETSc rules. The final linking phase fails since there isn't a main program, but I'm just ignoring that for now since that's not what I ultimately need... >>> >>> Using a python script, I set up and run the f2py command. Right now it has the form... >>> "f2py -c -m modname outer_driver.f90 file1.o file2.o file3.o..." etc. >>> >>> This appears to work, but upon attempting to import, it cannot find the SLEPc (and, I presume, PETSc) objects/functions: >>> >>>>>> import mod_name >>> Traceback (most recent call last): >>> File "", line 1, in >>> ImportError: dlopen(./mod_name.so, 2): Symbol not found: _epscreate_ >>> Referenced from: ./mod_name.so >>> Expected in: flat namespace >>> in ./mod_name.so >>> >>> Based on this discussion, I believe I need to somehow include PETSc/SLEPc info when linking with f2py. Is that correct? Any direction on how to do that? I don't quite understand what the OP of that question ultimately ended up doing to get it to work. I tried using the -I flag pointing to the slepc_common file (like the SLEPc makefile does). The problem is that that is a file, not a directory, which contains a number of other makefile-style variables--so it works to include it in a makefile, but doesn't work in python. Maybe there are only a few directories I really need to include? Or is it possible to somehow run f2py through a makefile? I'm a bit ignorant in that realm as well. >>> >>> Thank you for any help or suggestions! 
>>> Austin >>> >>> >>> -- >>> Austin Herrema >>> PhD Student | Graduate Research Assistant | Iowa State University >>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >> >> -- >> Austin Herrema >> PhD Student | Graduate Research Assistant | Iowa State University >> Wind Energy Science, Engineering, and Policy | Mechanical Engineering > From natacha.bereux at gmail.com Wed Mar 22 13:45:48 2017 From: natacha.bereux at gmail.com (Natacha BEREUX) Date: Wed, 22 Mar 2017 19:45:48 +0100 Subject: [petsc-users] Configure nested PCFIELDSPLIT with general index sets In-Reply-To: References: <6496846F-19F8-4494-87E1-DDC390513370@imperial.ac.uk> Message-ID: Hello Matt, Thanks a lot for your answers. Since I am working on a large FEM Fortran code, I have to stick to Fortran. Do you know if someone plans to add this Fortran interface? Or may be I could do it myself ? Is this particular interface very hard to add ? Perhaps could I mimic some other interface ? What would you advise ? Best regards, Natacha On Wed, Mar 22, 2017 at 12:33 PM, Matthew Knepley wrote: > On Wed, Mar 22, 2017 at 10:03 AM, Natacha BEREUX > wrote: > >> Hello, >> if my understanding is correct, the approach proposed by Matt and >> Lawrence is the following : >> - create a DMShell (DMShellCreate) >> - define my own CreateFieldDecomposition to return the index sets I need >> (for displacement, pressure and temperature degrees of freedom) : >> myCreateFieldDecomposition(... ) >> - set it in the DMShell ( DMShellSetCreateFieldDecomposition) >> - then sets the DM in KSP context (KSPSetDM) >> >> I have some more questions >> - I did not succeed in setting my own CreateFieldDecomposition in the >> DMShell : link fails with " unknown reference to ? >> dmshellsetcreatefielddecomposition_ ?. Could it be a Fortran problem (I >> am using Fortran)? Is this routine available in PETSc Fortran interface ? >> \ >> > > Yes, exactly. The Fortran interface for passing function pointers is > complex, and no one has added this function yet. > > >> - CreateFieldDecomposition is supposed to return an array of dms (to >> define the fields). I am not able to return such datas. Do I return a >> PETSC_NULL_OBJECT instead ? >> > > Yes. > > >> - do I have to provide something else to define the DMShell ? >> > > I think you will have to return local and global vectors, but this just > means creating a vector of the correct size and distribution. > > Thanks, > > Matt > > >> Thanks a lot for your help >> Natacha >> >> On Tue, Mar 21, 2017 at 2:44 PM, Natacha BEREUX > > wrote: >> >>> Thanks for your quick answers. To be honest, I am not familiar at all >>> with DMShells and DMPlexes. But since it is what I need, I am going to try >>> it. >>> Thanks again for your advices, >>> Natacha >>> >>> On Tue, Mar 21, 2017 at 2:27 PM, Lawrence Mitchell < >>> lawrence.mitchell at imperial.ac.uk> wrote: >>> >>>> >>>> > On 21 Mar 2017, at 13:24, Matthew Knepley wrote: >>>> > >>>> > I think the remedy is as easy as specifying a DMShell that has a >>>> PetscSection (DMSetDefaultSection) with your ordering, and >>>> > I think this is how Firedrake (http://www.firedrakeproject.org/) >>>> does it. >>>> >>>> We actually don't use a section, but we do provide >>>> DMCreateFieldDecomposition_Shell. >>>> >>>> If you have a section that describes all the fields, then I think if >>>> the DMShell knows about it, you effectively get the same behaviour as >>>> DMPlex (which does the decomposition in the same manner?). 
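To make the steps listed above concrete, here is a minimal sketch in C of such a callback and its registration (the Fortran binding for passing the function pointer is exactly the piece reported missing above). The context type MyCtx, the field names, and the helper names are invented for illustration, and the callback prototype should be checked against the petscdmshell.h of the installed PETSc version:

#include <petscdmshell.h>
#include <petscksp.h>

typedef struct {
  PetscInt nfields;   /* e.g. displacement, pressure, temperature */
  IS       *fields;   /* index sets built by the application */
} MyCtx;

/* Hand back the application-built index sets; no sub-DMs are returned. */
static PetscErrorCode MyCreateFieldDecomposition(DM dm,PetscInt *len,char ***namelist,IS **islist,DM **dmlist)
{
  MyCtx          *ctx;
  PetscInt       i;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMGetApplicationContext(dm,&ctx);CHKERRQ(ierr);
  *len = ctx->nfields;
  if (dmlist)   *dmlist = NULL;                                 /* the NULL return discussed above */
  if (namelist) {ierr = PetscMalloc1(ctx->nfields,namelist);CHKERRQ(ierr);}
  if (islist)   {ierr = PetscMalloc1(ctx->nfields,islist);CHKERRQ(ierr);}
  for (i = 0; i < ctx->nfields; i++) {
    if (namelist) {ierr = PetscStrallocpy("field",&(*namelist)[i]);CHKERRQ(ierr);}
    if (islist)   {ierr = PetscObjectReference((PetscObject)ctx->fields[i]);CHKERRQ(ierr); (*islist)[i] = ctx->fields[i];}
  }
  PetscFunctionReturn(0);
}

/* Attach a DMShell carrying the decomposition to the KSP; x is any global vector
   with the right parallel layout. */
static PetscErrorCode AttachDecomposition(KSP ksp,Vec x,MyCtx *ctx)
{
  DM             shell;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMShellCreate(PetscObjectComm((PetscObject)ksp),&shell);CHKERRQ(ierr);
  ierr = DMShellSetGlobalVector(shell,x);CHKERRQ(ierr);
  ierr = DMSetApplicationContext(shell,ctx);CHKERRQ(ierr);
  ierr = DMShellSetCreateFieldDecomposition(shell,MyCreateFieldDecomposition);CHKERRQ(ierr);
  ierr = KSPSetDM(ksp,shell);CHKERRQ(ierr);
  ierr = KSPSetDMActive(ksp,PETSC_FALSE);CHKERRQ(ierr);         /* the DM only provides the splits */
  ierr = DMDestroy(&shell);CHKERRQ(ierr);                       /* KSP keeps its own reference */
  PetscFunctionReturn(0);
}

With this in place, -pc_type fieldsplit should be able to pick the splits up from the DM rather than requiring PCFieldSplitSetIS calls.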
>>>> >>>> > However, I usually use a DMPlex which knows about my >>>> > mesh, so I am not sure if this strategy has any holes. >>>> >>>> I haven't noticed anything yet. >>>> >>>> Lawrence >>> >>> >>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Wed Mar 22 13:51:06 2017 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 22 Mar 2017 13:51:06 -0500 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: <51713281-854F-4E7A-8438-AA8F453B5CA5@dsic.upv.es> References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <51713281-854F-4E7A-8438-AA8F453B5CA5@dsic.upv.es> Message-ID: On Wed, 22 Mar 2017, Jose E. Roman wrote: > > > El 22 mar 2017, a las 19:23, Barry Smith escribi?: > > > > > >> On Mar 22, 2017, at 1:08 PM, Austin Herrema wrote: > >> > >> Thank you for the suggestion! Seems like a reasonable way to go. Not working for me, however, I suspect because I'm using homebrew installations of PETSc and SLEPc (I don't think all the makefiles are kept). Any other way to do the same thing by chance? Worst case I could use a non-homebrew installation but I'd prefer not to mess with that if I can help it... > > > > How do you link a "regular" SLEPc C program using the home-brew libraries? You need basically the same link line for f2py as you need for C programs. > > > What Barry may be suggesting is: instead of using a script to invoke f2py, add a rule to your makefile > > modname.so: outer_driver.f90 > f2py -c -m modname outer_driver.f90 file1.o file2.o file3.o ${SLEPC_EPS_LIB} > > Then 'make modname.so' will pick the libraries from SLEPc makefiles. I think you would also need a different compile target [apart from the link target above]. And using a diffent suffix for f2py sourcefiles - say '.F90py' might help.. .SUFFIXES: .f90py .f90py.o: f2py -c ${FC_FLAGS} ${FFLAGS} -o $@ $< Satish From aherrema at iastate.edu Wed Mar 22 14:08:53 2017 From: aherrema at iastate.edu (Austin Herrema) Date: Wed, 22 Mar 2017 14:08:53 -0500 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: <51713281-854F-4E7A-8438-AA8F453B5CA5@dsic.upv.es> References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <51713281-854F-4E7A-8438-AA8F453B5CA5@dsic.upv.es> Message-ID: Makes sense, and definitely seems to be a more natural way to go now that I see it. When compiling using this rule it seems to get close but doesn't compile all the way. Here is the output (in reality, what I was referring to as "modname.so" is "iga_blade_py.so" and the "outer_driver.f90" is called merely "run_analysis.f90"--sorry for the confusion): running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building extension "iga_blade_py" sources f2py options: [] f2py:> /tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7/iga_blade_pymodule.c creating /tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7 Reading fortran codes... Reading file 'run_analysis.f90' (format:free) Post-processing... Block: iga_blade_py Block: run_analysis Post-processing (stage 2)... Building modules... Building module "iga_blade_py"... Constructing wrapper function "run_analysis"... 
run_analysis() Wrote C/API module "iga_blade_py" to file "/tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7/iga_blade_pymodule.c" adding '/tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7/fortranobject.c' to sources. adding '/tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7' to include_dirs. copying /usr/local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.c -> /tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7 copying /usr/local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.h -> /tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7 build_src: building npy-pkg config files running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize Gnu95FCompiler Found executable /usr/local/bin/gfortran customize Gnu95FCompiler customize Gnu95FCompiler using build_ext building 'iga_blade_py' extension compiling C sources C compiler: clang -fno-strict-aliasing -fno-common -dynamic -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes error: unknown file type '' (from '-Wl,-rpath,/usr/local/Cellar/slepc/3.7.3_4/real/lib') make: *** [iga_blade_py.so] Error 1 On Wed, Mar 22, 2017 at 1:39 PM, Jose E. Roman wrote: > > > El 22 mar 2017, a las 19:23, Barry Smith escribi?: > > > > > >> On Mar 22, 2017, at 1:08 PM, Austin Herrema > wrote: > >> > >> Thank you for the suggestion! Seems like a reasonable way to go. Not > working for me, however, I suspect because I'm using homebrew installations > of PETSc and SLEPc (I don't think all the makefiles are kept). Any other > way to do the same thing by chance? Worst case I could use a non-homebrew > installation but I'd prefer not to mess with that if I can help it... > > > > How do you link a "regular" SLEPc C program using the home-brew > libraries? You need basically the same link line for f2py as you need for C > programs. > > > What Barry may be suggesting is: instead of using a script to invoke f2py, > add a rule to your makefile > > modname.so: outer_driver.f90 > f2py -c -m modname outer_driver.f90 file1.o file2.o file3.o > ${SLEPC_EPS_LIB} > > Then 'make modname.so' will pick the libraries from SLEPc makefiles. > > Jose > > > > >> > >> Thanks, > >> Austin > >> > >> On Wed, Mar 22, 2017 at 11:20 AM Jose E. Roman > wrote: > >> Try the following: > >> $ cd $SLEPC_DIR > >> $ make getlinklibs_slepc > >> Then copy the output and paste it at the end of your f2py command. > >> > >> Jose > >> > >> > >>> El 22 mar 2017, a las 16:38, Austin Herrema > escribi?: > >>> > >>> Hello all, > >>> > >>> I am trying to do as the subject line describes--use f2py to run a > large PETSc/SLEPc fortran finite element code through python. I really only > need to wrap the outermost function of the fortran code--don't need any > access to subroutines. I'll describe what I'm doing, some of which I'm not > 100% confident is correct (not much experience with f2py)--feel free to > correct/redirect any of it. > >>> > >>> First, I'm editing the fortran code so that the top-level function is > a subroutine rather than a main program (it's my understanding that this is > required for f2py?). > >>> > >>> I use my regular makefile (modeled after a standard SLEPc makefile > from the user guide) to compile all of the .f90/.F90 files (many of them) > to .o files using SLEPc/PETSc rules. The final linking phase fails since > there isn't a main program, but I'm just ignoring that for now since that's > not what I ultimately need... > >>> > >>> Using a python script, I set up and run the f2py command. Right now it > has the form... 
> >>> "f2py -c -m modname outer_driver.f90 file1.o file2.o file3.o..." etc. > >>> > >>> This appears to work, but upon attempting to import, it cannot find > the SLEPc (and, I presume, PETSc) objects/functions: > >>> > >>>>>> import mod_name > >>> Traceback (most recent call last): > >>> File "", line 1, in > >>> ImportError: dlopen(./mod_name.so, 2): Symbol not found: _epscreate_ > >>> Referenced from: ./mod_name.so > >>> Expected in: flat namespace > >>> in ./mod_name.so > >>> > >>> Based on this discussion, I believe I need to somehow include > PETSc/SLEPc info when linking with f2py. Is that correct? Any direction on > how to do that? I don't quite understand what the OP of that question > ultimately ended up doing to get it to work. I tried using the -I flag > pointing to the slepc_common file (like the SLEPc makefile does). The > problem is that that is a file, not a directory, which contains a number of > other makefile-style variables--so it works to include it in a makefile, > but doesn't work in python. Maybe there are only a few directories I really > need to include? Or is it possible to somehow run f2py through a makefile? > I'm a bit ignorant in that realm as well. > >>> > >>> Thank you for any help or suggestions! > >>> Austin > >>> > >>> > >>> -- > >>> Austin Herrema > >>> PhD Student | Graduate Research Assistant | Iowa State University > >>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering > >> > >> -- > >> Austin Herrema > >> PhD Student | Graduate Research Assistant | Iowa State University > >> Wind Energy Science, Engineering, and Policy | Mechanical Engineering > > > > -- *Austin Herrema* PhD Student | Graduate Research Assistant | Iowa State University Wind Energy Science, Engineering, and Policy | Mechanical Engineering -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Mar 22 18:07:38 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 22 Mar 2017 18:07:38 -0500 Subject: [petsc-users] Question about DMDA BOUNDARY_CONDITION set In-Reply-To: References: Message-ID: In the git repository branch barry/add-dmda-rotate-boundary-conditions-example I have provided an example that does this. It uses DM_BOUNDARY_GHOSTED to reserve ghost locations along the physical boundaries in the local vector and then creates two VecScatter that fill the appropriate locations in these ghosted regions from the "providing location". It is in src/dm/examples/tests/ex6.c Note that in computational work you will call both VecScatters and the DMGlobalToLocalBegin/End to update both the extra physical ghost points and the regular ghost points between MPI processes. Please let me know if you have any trouble with it. Barry > On Mar 20, 2017, at 9:39 AM, Wenbo Zhao wrote: > > Hi all. > > I have a mesh is like below > > 1 2 3 > 4 5 6 > 7 8 9 > > I use DACreate2d to create mesh partition. > > In my case, I have an rotation boundary condition. The whole mesh is like below > > 9 8 7 3 6 9 > 6 5 4 2 5 8 > 3 2 1 1 4 7 > > 7 4 1 1 2 3 > 8 5 2 4 5 6 > 9 6 3 7 8 9 > > It means that cell 2 near the top boundary are connected with cell 4 near the left boundary, cell 3 with cell 7. > > How can I set boundary condition or set my matrix? > > I am looking forward for your help! 
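To make the note about combining the two VecScatters with DMGlobalToLocalBegin/End concrete, a sketch of the per-step ghost refresh might look like the following (C; the function name and the scatter names rotscat1/rotscat2 are invented here, their actual construction is what the ex6.c example above demonstrates, and they are assumed to map values from the global vector into the DM_BOUNDARY_GHOSTED slots of the local vector):

#include <petscdmda.h>

/* First the usual inter-process ghost update, then the two scatters that fill the
   physical-boundary ghost slots from the rotationally "providing" cells. */
PetscErrorCode UpdateRotatedGhosts(DM da,Vec xglobal,Vec xlocal,VecScatter rotscat1,VecScatter rotscat2)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = DMGlobalToLocalBegin(da,xglobal,INSERT_VALUES,xlocal);CHKERRQ(ierr);
  ierr = DMGlobalToLocalEnd(da,xglobal,INSERT_VALUES,xlocal);CHKERRQ(ierr);
  ierr = VecScatterBegin(rotscat1,xglobal,xlocal,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(rotscat1,xglobal,xlocal,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterBegin(rotscat2,xglobal,xlocal,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecScatterEnd(rotscat2,xglobal,xlocal,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

After this, the residual or matrix assembly can read the rotated neighbour values from the ghost slots of xlocal exactly like ordinary ghost values.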
> > BEST, > > Wenbo From bsmith at mcs.anl.gov Wed Mar 22 18:15:12 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 22 Mar 2017 18:15:12 -0500 Subject: [petsc-users] [petsc-maint] Question about DMDA BOUNDARY_CONDITION set In-Reply-To: References: Message-ID: > On Mar 21, 2017, at 8:46 AM, Wenbo Zhao wrote: > > Matt, > > Thanks. > > I want to solve neutron diffusion equations using finite difference method and PETSc. > This rotation boundary condition is very common in my cases. > Though the mesh consists of ~ 10 Miliion structured hexahedron cells, the mesh is simple and could be discribed by three vectors about x, y and z axis. > It is appropriate for DMDA except boundary condition. > > I wanted to make mesh partition like DMDA by hand. Then I need to create matrix and vector and assemble matrix, and et al. I thought it was an easy work. > As you say, it's not. > > As a newer, I can use DACreate2d to begin. It's OK. > But finally, it does need this optimization. > > Though I read the manual about the vector and matrix, I am not clear about the basic idea behind the code. > How can I create a matrix and vector as my mesh partition and create the map between the nature ordering and the PETSc ordering in global vector? When using DMDA you generally don't need to worry about the mapping between natural numbering and PETSc ordering. All the indexing you write can take place in the natural ordering. Look at src/snes/examples/tutorials/ex18.c a diffusion-like problem and look at FormFunction(). This example is much like what you need except you need the little extra code I sent you to update the "rotated boundary conditions" locations. > How does vector communicate in the operation of matrix multi vector? Does it achieved automatically? Yes all this is done automatically. > > BETS, > > Wenbo > > > > > From jed at jedbrown.org Wed Mar 22 20:09:23 2017 From: jed at jedbrown.org (Jed Brown) Date: Wed, 22 Mar 2017 19:09:23 -0600 Subject: [petsc-users] CMake and PETSc In-Reply-To: References: Message-ID: <87lgrw23zw.fsf@jedbrown.org> Hom Nath Gharti writes: > Dear all, > > Does FindPETSc.cmake (https://github.com/jedbrown/cmake-modules) work > with Fortran as well? It should, but you need to be sure to use a compatible Fortran compiler. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From hng.email at gmail.com Wed Mar 22 20:23:12 2017 From: hng.email at gmail.com (Hom Nath Gharti) Date: Wed, 22 Mar 2017 21:23:12 -0400 Subject: [petsc-users] CMake and PETSc In-Reply-To: <87lgrw23zw.fsf@jedbrown.org> References: <87lgrw23zw.fsf@jedbrown.org> Message-ID: Thanks, Jed! I will try. I see that FindPETSc.cmake has following lines: set(PETSC_VALID_COMPONENTS C CXX) Should we add FC or similar? Thanks, Hom On Wed, Mar 22, 2017 at 9:09 PM, Jed Brown wrote: > Hom Nath Gharti writes: > >> Dear all, >> >> Does FindPETSc.cmake (https://github.com/jedbrown/cmake-modules) work >> with Fortran as well? > > It should, but you need to be sure to use a compatible Fortran compiler. 
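Before the follow-up below on the fieldsplit null space: a minimal sketch, in C, of the IS-based attachment Lawrence suggests earlier in that thread. It assumes is1 is the index set defining the pressure block (presumably the same one used to define the splits, e.g. via PCFieldSplitSetIS); the helper name is invented, and the call has to happen before the preconditioner is set up:

#include <petscmat.h>

/* Hang the constant null space on the IS that defines the Schur block, so that
   PCFieldSplit can transfer it to S even when the Schur preconditioner is selfp. */
PetscErrorCode AttachPressureNullSpace(IS is1)
{
  MatNullSpace   nullsp;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatNullSpaceCreate(PetscObjectComm((PetscObject)is1),PETSC_TRUE,0,NULL,&nullsp);CHKERRQ(ierr);
  ierr = PetscObjectCompose((PetscObject)is1,"nullspace",(PetscObject)nullsp);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);   /* PetscObjectCompose took a reference */
  PetscFunctionReturn(0);
}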
From C.Klaij at marin.nl Thu Mar 23 03:42:48 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Thu, 23 Mar 2017 08:42:48 +0000 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl>, Message-ID: <1490258567983.93394@marin.nl> Matt, Lawrence The same problem happens when using gmres with rtol 1e-6 in the schur complement (attachment "left_schur"). I'm not sure what this tells us. If I understand Lawrence correctly, the null space may be attached to the wrong matrix (A11 instead of Sp)? Chris dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | www.marin.nl [LinkedIn] [YouTube] [Twitter] [Facebook] MARIN news: Project Manager Veiligheids- en Verkeersstudies en Specialist Human Performance ________________________________ From: Matthew Knepley Sent: Wednesday, March 22, 2017 5:39 PM To: Klaij, Christiaan Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] left and right preconditioning with a constant null space On Wed, Mar 22, 2017 at 4:04 PM, Klaij, Christiaan > wrote: Thanks Matt, I will try your suggestion and let you know. In the meantime this is what I did to set the constant null space: call MatNullSpaceCreate(MPI_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL_OBJECT,nullsp,ierr); CHKERRQ(ierr) call MatSetNullSpace(aa_sub(4),nullsp,ierr); CHKERRQ(ierr) call MatNullSpaceDestroy(nullsp,ierr); CHKERRQ(ierr) where aa_sub(4) corresponds to A11. This is called before begin/end mat assembly. Hmm, I wonder if the problem is that A11 has the nullspace, but the PC pmat is actually B^T diag(A)^{-1} B. I have to look where the solver is actually taking the nullspace from. We need to improve the -ksp_view output to make this stuff much easier to see. Matt Chris dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | www.marin.nl [LinkedIn] [YouTube] [Twitter] [Facebook] MARIN news: Software seminar in Shanghai for the first time, March 28 ________________________________ From: Matthew Knepley > Sent: Wednesday, March 22, 2017 4:47 PM To: Klaij, Christiaan Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] left and right preconditioning with a constant null space On Wed, Mar 22, 2017 at 2:58 PM, Klaij, Christiaan > wrote: I'm solving the Navier-Stokes equations using PCFieldSplit type Schur and Selfp. This particular case has only Neumann conditions for the pressure field. With left preconditioning and no nullspace, I see that the KSP solver for S does not converge (attachment "left_nonullsp") in either norm. When I attach the constant null space to A11, it gets passed on to S and the KSP solver for S does converge in the preconditioned norm only (attachment "left"). However, right preconditioning uses the unpreconditioned norm and therefore doesn't converge (attachment "right"), regardless of whether the nullspace is attached or not. Should I conclude that right preconditioning cannot be used in combination with a null space? No, neither of your solves is working. The left preconditioned version is just hiding that fact. You should start by checking that the exact solve works. Namely full Schur factorization with exact solves for A and S. Since you do not have a matrix for S (unless you tell it do use "full"), just use a very low (1e-10) tolerance. My guess is that something is off with your null space specification. Thanks, Matt Chris dr. ir. 
Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl MARIN news: http://www.marin.nl/web/News/News-items/H2020-marinergi-project-launched.htm -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imagef3a480.PNG Type: image/png Size: 331 bytes Desc: imagef3a480.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image9d5515.PNG Type: image/png Size: 333 bytes Desc: image9d5515.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image38eecd.PNG Type: image/png Size: 293 bytes Desc: image38eecd.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imagec62fb8.PNG Type: image/png Size: 253 bytes Desc: imagec62fb8.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imagebda5f2.PNG Type: image/png Size: 293 bytes Desc: imagebda5f2.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image41c9af.PNG Type: image/png Size: 331 bytes Desc: image41c9af.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imagec69863.PNG Type: image/png Size: 333 bytes Desc: imagec69863.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imagefefa51.PNG Type: image/png Size: 253 bytes Desc: imagefefa51.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: left_schur Type: application/octet-stream Size: 13478 bytes Desc: left_schur URL: From egorow.walentin at gmail.com Thu Mar 23 05:43:59 2017 From: egorow.walentin at gmail.com (=?UTF-8?B?0JLQsNC70LXQvdGC0LjQvSDQldCz0L7RgNC+0LI=?=) Date: Thu, 23 Mar 2017 13:43:59 +0300 Subject: [petsc-users] About error Message-ID: Hello! My name is Valentin Egorov. I am from Russia. And I have a question for you about PETSC. You recommended to add use petscXXXX to .#nclude "petsc/finclude/petscXXX.h", and for XXX variablename - type(tXXX) variablename for Fortran. However, I have "Error in opening the Library module file." What's the problem? Maybe, path is wrong? Sincerely, Valentin Egorov! -------------- next part -------------- An HTML attachment was scrubbed... URL: From lawrence.mitchell at imperial.ac.uk Thu Mar 23 05:57:09 2017 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Thu, 23 Mar 2017 10:57:09 +0000 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: <1490258567983.93394@marin.nl> References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl> Message-ID: <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> On 23/03/17 08:42, Klaij, Christiaan wrote: > Matt, Lawrence > > > The same problem happens when using gmres with rtol 1e-6 in the > schur complement (attachment "left_schur"). I'm not sure what > this tells us. 
If I understand Lawrence correctly, the null space > may be attached to the wrong matrix (A11 instead of Sp)? I think I misread the code. Because you can only attach nullspaces to either Amat or Pmat, you can't control the nullspace for (say) Amat[1,1] or Pmat[1,1] because MatCreateSubMatrix doesn't know anything about nullspaces. So the steps inside pcfieldsplit are: createsubmatrices(Amat) -> A, B, C, D setup schur matrix S <= D - C A^{-1} B Transfer nullspaces onto S. How to transfer the nullspaces? Well, as mentioned, I can't put anything on the submatrices (because I have no way of accessing them). So instead, I need to hang the nullspace on the IS that defines the S block: So if you have: is0, is1 You do: PetscObjectCompose((PetscObject)is1, "nullspace", nullspace); Before going into the preconditioner. If you're doing this through a DM, then DMCreateSubDM controls the transfer of nullspaces, the default implementation DTRT in the case of sections. See DMCreateSubDM_Section_Private. Clearer? Lawrence -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From knepley at gmail.com Thu Mar 23 06:51:36 2017 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 23 Mar 2017 11:51:36 +0000 Subject: [petsc-users] About error In-Reply-To: References: Message-ID: On Thu, Mar 23, 2017 at 10:43 AM, ???????? ?????? wrote: > Hello! > My name is Valentin Egorov. I am from Russia. And I have a question for > you about PETSC. > > You recommended to add use petscXXXX to .#nclude > "petsc/finclude/petscXXX.h", and for XXX variablename - type(tXXX) > variablename for Fortran. > However, I have "Error in opening the Library module file." What's the > problem? Maybe, path is wrong? > The modules are made during the build. Have you tried compiling a Fortran example? cd src/ksp/ksp/examples/tutorials make ex2f Thanks, Matt > Sincerely, Valentin Egorov! > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Thu Mar 23 10:37:01 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Thu, 23 Mar 2017 15:37:01 +0000 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl>, <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> Message-ID: <1490283421032.72098@marin.nl> Lawrence, Yes, that's clearer, thanks! I do have is0 and is1 so I can try PetscObjectCompose and let you know. Note though that the viewer reports that both S and A11 have a null space attached... My matrix is a matnest and I've attached a null space to A11, so the latter works as expected. But is the viewer wrong for S? Chris dr. ir. 
Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl MARIN news: http://www.marin.nl/web/News/News-items/Project-Manager-Veiligheids-en-Verkeersstudies-en-Specialist-Human-Performance.htm ________________________________________ From: Lawrence Mitchell Sent: Thursday, March 23, 2017 11:57 AM To: Klaij, Christiaan; Matthew Knepley Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] left and right preconditioning with a constant null space On 23/03/17 08:42, Klaij, Christiaan wrote: > Matt, Lawrence > > > The same problem happens when using gmres with rtol 1e-6 in the > schur complement (attachment "left_schur"). I'm not sure what > this tells us. If I understand Lawrence correctly, the null space > may be attached to the wrong matrix (A11 instead of Sp)? I think I misread the code. Because you can only attach nullspaces to either Amat or Pmat, you can't control the nullspace for (say) Amat[1,1] or Pmat[1,1] because MatCreateSubMatrix doesn't know anything about nullspaces. So the steps inside pcfieldsplit are: createsubmatrices(Amat) -> A, B, C, D setup schur matrix S <= D - C A^{-1} B Transfer nullspaces onto S. How to transfer the nullspaces? Well, as mentioned, I can't put anything on the submatrices (because I have no way of accessing them). So instead, I need to hang the nullspace on the IS that defines the S block: So if you have: is0, is1 You do: PetscObjectCompose((PetscObject)is1, "nullspace", nullspace); Before going into the preconditioner. If you're doing this through a DM, then DMCreateSubDM controls the transfer of nullspaces, the default implementation DTRT in the case of sections. See DMCreateSubDM_Section_Private. Clearer? Lawrence From lawrence.mitchell at imperial.ac.uk Thu Mar 23 10:52:52 2017 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Thu, 23 Mar 2017 15:52:52 +0000 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: <1490283421032.72098@marin.nl> References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl> <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> <1490283421032.72098@marin.nl> Message-ID: <5fabe67b-2375-1bf7-ee28-2e3424b31c1a@imperial.ac.uk> On 23/03/17 15:37, Klaij, Christiaan wrote: > Yes, that's clearer, thanks! I do have is0 and is1 so I can try > PetscObjectCompose and let you know. > > Note though that the viewer reports that both S and A11 have a > null space attached... My matrix is a matnest and I've attached a > null space to A11, so the latter works as expected. But is the viewer > wrong for S? No, I think this is a consequence of using a matnest and attaching a nullspace to A11. In that case you sort of "can" set a nullspace on the submatrix returned in MatCreateSubMatrix(Amat, is1, is1), because you just get a reference. But if you switched to AIJ then you would no longer get this. So it happens that the nullspace you set on A11 /is/ transferred over to S, but this is luck, rather than design. So maybe there is something else wrong. Perhaps you can run with -fieldsplit_1_ksp_test_null_space to check the nullspace matches correctly? Lawrence -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From fande.kong at inl.gov Thu Mar 23 11:22:08 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Thu, 23 Mar 2017 10:22:08 -0600 Subject: [petsc-users] coloring algorithms Message-ID: Hi All, I was wondering if the coloring approaches listed online are working? Which ones are in parallel, and which ones are in sequential? http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatColoringType.html#MatColoringType If the coloring is in parallel, can it be used with the finite difference to compute the Jacobian? Any limitations? Fande, -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Thu Mar 23 11:50:41 2017 From: hzhang at mcs.anl.gov (Hong) Date: Thu, 23 Mar 2017 11:50:41 -0500 Subject: [petsc-users] coloring algorithms In-Reply-To: References: Message-ID: > > Fande, > > I was wondering if the coloring approaches listed online are working? > Which ones are in parallel, and which ones are in sequential? > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/ > MatColoringType.html#MatColoringType > JP and Greedy are parallel. > > If the coloring is in parallel, can it be used with the finite difference > to compute the Jacobian? Any limitations? > Yes, they work quite well. Git it a try. Let us know if you encounter any problem. Hong -------------- next part -------------- An HTML attachment was scrubbed... URL: From kandanovian at gmail.com Thu Mar 23 11:57:31 2017 From: kandanovian at gmail.com (Tim Steinhoff) Date: Thu, 23 Mar 2017 17:57:31 +0100 Subject: [petsc-users] Update to MUMPS 5.1.1 Message-ID: Hi all, there is new version of MUMPS with some nice improvements: http://mumps.enseeiht.fr/index.php?page=dwnld#cl Is it possible to update the package in the petsc repository? Thanks and kind regards, Volker From aherrema at iastate.edu Thu Mar 23 12:14:19 2017 From: aherrema at iastate.edu (Austin Herrema) Date: Thu, 23 Mar 2017 12:14:19 -0500 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <51713281-854F-4E7A-8438-AA8F453B5CA5@dsic.upv.es> Message-ID: Sorry to self-bump this. Sounds like there are no quick and easy answers to get this solved, which I understand. I just wanted to say that if anybody has any suggestions about what I should be looking into specifically (C compiler? f2py? etc...) then even that would be helpful as I'm pretty stuck on this. And I have also posted a discussion on StackOverflow ( http://stackoverflow.com/questions/42978049/unable-to-use-f2py-to-link-large-petsc-slepc-fortran-code?noredirect=1#comment73055943_42978049) in case anything there might be useful (it's mostly all the same). Thank you regardless! Austin On Wed, Mar 22, 2017 at 2:08 PM, Austin Herrema wrote: > Makes sense, and definitely seems to be a more natural way to go now that > I see it. > > When compiling using this rule it seems to get close but doesn't compile > all the way. 
Here is the output (in reality, what I was referring to as > "modname.so" is "iga_blade_py.so" and the "outer_driver.f90" is called > merely "run_analysis.f90"--sorry for the confusion): > > running build > running config_cc > unifing config_cc, config, build_clib, build_ext, build commands > --compiler options > running config_fc > unifing config_fc, config, build_clib, build_ext, build commands > --fcompiler options > running build_src > build_src > building extension "iga_blade_py" sources > f2py options: [] > f2py:> /tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7/iga_blade_pymodule.c > creating /tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7 > Reading fortran codes... > Reading file 'run_analysis.f90' (format:free) > Post-processing... > Block: iga_blade_py > Block: run_analysis > Post-processing (stage 2)... > Building modules... > Building module "iga_blade_py"... > Constructing wrapper function "run_analysis"... > run_analysis() > Wrote C/API module "iga_blade_py" to file "/tmp/tmpIH70ZJ/src.macosx-10. > 10-x86_64-2.7/iga_blade_pymodule.c" > adding '/tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7/fortranobject.c' to > sources. > adding '/tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7' to include_dirs. > copying /usr/local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.c > -> /tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7 > copying /usr/local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.h > -> /tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7 > build_src: building npy-pkg config files > running build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > customize Gnu95FCompiler > Found executable /usr/local/bin/gfortran > customize Gnu95FCompiler > customize Gnu95FCompiler using build_ext > building 'iga_blade_py' extension > compiling C sources > C compiler: clang -fno-strict-aliasing -fno-common -dynamic -g -O2 > -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes > > error: unknown file type '' (from '-Wl,-rpath,/usr/local/Cellar/ > slepc/3.7.3_4/real/lib') > make: *** [iga_blade_py.so] Error 1 > > On Wed, Mar 22, 2017 at 1:39 PM, Jose E. Roman wrote: > >> >> > El 22 mar 2017, a las 19:23, Barry Smith escribi?: >> > >> > >> >> On Mar 22, 2017, at 1:08 PM, Austin Herrema >> wrote: >> >> >> >> Thank you for the suggestion! Seems like a reasonable way to go. Not >> working for me, however, I suspect because I'm using homebrew installations >> of PETSc and SLEPc (I don't think all the makefiles are kept). Any other >> way to do the same thing by chance? Worst case I could use a non-homebrew >> installation but I'd prefer not to mess with that if I can help it... >> > >> > How do you link a "regular" SLEPc C program using the home-brew >> libraries? You need basically the same link line for f2py as you need for C >> programs. >> >> >> What Barry may be suggesting is: instead of using a script to invoke >> f2py, add a rule to your makefile >> >> modname.so: outer_driver.f90 >> f2py -c -m modname outer_driver.f90 file1.o file2.o file3.o >> ${SLEPC_EPS_LIB} >> >> Then 'make modname.so' will pick the libraries from SLEPc makefiles. >> >> Jose >> >> > >> >> >> >> Thanks, >> >> Austin >> >> >> >> On Wed, Mar 22, 2017 at 11:20 AM Jose E. Roman >> wrote: >> >> Try the following: >> >> $ cd $SLEPC_DIR >> >> $ make getlinklibs_slepc >> >> Then copy the output and paste it at the end of your f2py command. 
>> >> >> >> Jose >> >> >> >> >> >>> El 22 mar 2017, a las 16:38, Austin Herrema >> escribi?: >> >>> >> >>> Hello all, >> >>> >> >>> I am trying to do as the subject line describes--use f2py to run a >> large PETSc/SLEPc fortran finite element code through python. I really only >> need to wrap the outermost function of the fortran code--don't need any >> access to subroutines. I'll describe what I'm doing, some of which I'm not >> 100% confident is correct (not much experience with f2py)--feel free to >> correct/redirect any of it. >> >>> >> >>> First, I'm editing the fortran code so that the top-level function is >> a subroutine rather than a main program (it's my understanding that this is >> required for f2py?). >> >>> >> >>> I use my regular makefile (modeled after a standard SLEPc makefile >> from the user guide) to compile all of the .f90/.F90 files (many of them) >> to .o files using SLEPc/PETSc rules. The final linking phase fails since >> there isn't a main program, but I'm just ignoring that for now since that's >> not what I ultimately need... >> >>> >> >>> Using a python script, I set up and run the f2py command. Right now >> it has the form... >> >>> "f2py -c -m modname outer_driver.f90 file1.o file2.o file3.o..." etc. >> >>> >> >>> This appears to work, but upon attempting to import, it cannot find >> the SLEPc (and, I presume, PETSc) objects/functions: >> >>> >> >>>>>> import mod_name >> >>> Traceback (most recent call last): >> >>> File "", line 1, in >> >>> ImportError: dlopen(./mod_name.so, 2): Symbol not found: _epscreate_ >> >>> Referenced from: ./mod_name.so >> >>> Expected in: flat namespace >> >>> in ./mod_name.so >> >>> >> >>> Based on this discussion, I believe I need to somehow include >> PETSc/SLEPc info when linking with f2py. Is that correct? Any direction on >> how to do that? I don't quite understand what the OP of that question >> ultimately ended up doing to get it to work. I tried using the -I flag >> pointing to the slepc_common file (like the SLEPc makefile does). The >> problem is that that is a file, not a directory, which contains a number of >> other makefile-style variables--so it works to include it in a makefile, >> but doesn't work in python. Maybe there are only a few directories I really >> need to include? Or is it possible to somehow run f2py through a makefile? >> I'm a bit ignorant in that realm as well. >> >>> >> >>> Thank you for any help or suggestions! >> >>> Austin >> >>> >> >>> >> >>> -- >> >>> Austin Herrema >> >>> PhD Student | Graduate Research Assistant | Iowa State University >> >>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >> >> >> >> -- >> >> Austin Herrema >> >> PhD Student | Graduate Research Assistant | Iowa State University >> >> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >> > >> >> > > > -- > *Austin Herrema* > PhD Student | Graduate Research Assistant | Iowa State University > Wind Energy Science, Engineering, and Policy | Mechanical Engineering > -- *Austin Herrema* PhD Student | Graduate Research Assistant | Iowa State University Wind Energy Science, Engineering, and Policy | Mechanical Engineering -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gaetank at gmail.com Thu Mar 23 12:55:52 2017 From: gaetank at gmail.com (Gaetan Kenway) Date: Thu, 23 Mar 2017 13:55:52 -0400 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <51713281-854F-4E7A-8438-AA8F453B5CA5@dsic.upv.es> Message-ID: We do this all the time. The trick is that you can't use f2py do actually do *any* of the compiling/linking for you. You just have to use it to get the module.c file and .f90 file. Then compile that yourself and use whatever compile flags, linking etc you would normally use to get a .so. We've had issues with f2py using gcc to do the final link, instead of the fortran compiler. The following approach we've used on a half a dozen different wrapped fortran codes on a half a dozen different computers/clusters and it has proven to work much more robustly than using f2py directly. Hope this helps Gaetan # Generate Python include directory $(eval PYTHON_INCLUDES = $(shell $(PYTHON-CONFIG) --includes)) @ echo "#------------------------------------------------------#" @echo Python Inclue Flags $(PYTHON_INCLUDES) @echo "#------------------------------------------------------#" # Generate Numpy inlude directory $(eval NUMPY_INCLUDES = $(shell $(PYTHON) -c 'import numpy; print numpy.get_include()')) @echo "#------------------------------------------------------#" @echo Numpy Include Directory: $(NUMPY_INCLUDES) @echo "#------------------------------------------------------#" # Generate f2py root directory $(eval F2PY_ROOT = $(shell $(PYTHON) get_f2py.py)) @echo "#------------------------------------------------------#" @echo f2py root directory: $(F2PY_ROOT) @echo "#------------------------------------------------------#" f2py warpustruct.pyf @echo " " $(CC) $(CC_ALL_FLAGS) $(PYTHON_INCLUDES) -I$(NUMPY_INCLUDES) \ -I$(F2PY_ROOT)/src -c warpustructmodule.c $(CC) $(CC_ALL_FLAGS) $(PYTHON_INCLUDES) -I$(NUMPY_INCLUDES) -c \ $(F2PY_ROOT)/src/fortranobject.c -o fortranobject.o # Compiled f2py wrapper file $(FF90) $(FF90_ALL_FLAGS) -I$(MAIN_DIR)/mod -c warpustruct-f2pywrappers2.f90 # Final LInk enter code here $(FF90) -shared $(PYTHON_OBJECTS) $(LINKER_ALL_FLAGS) -o warpustruct.so On Thu, Mar 23, 2017 at 1:14 PM, Austin Herrema wrote: > Sorry to self-bump this. Sounds like there are no quick and easy answers > to get this solved, which I understand. I just wanted to say that if > anybody has any suggestions about what I should be looking into > specifically (C compiler? f2py? etc...) then even that would be helpful as > I'm pretty stuck on this. And I have also posted a discussion on > StackOverflow (http://stackoverflow.com/questions/42978049/unable-to- > use-f2py-to-link-large-petsc-slepc-fortran-code?noredirect= > 1#comment73055943_42978049) in case anything there might be useful (it's > mostly all the same). > > Thank you regardless! > Austin > > On Wed, Mar 22, 2017 at 2:08 PM, Austin Herrema > wrote: > >> Makes sense, and definitely seems to be a more natural way to go now that >> I see it. >> >> When compiling using this rule it seems to get close but doesn't compile >> all the way. 
Here is the output (in reality, what I was referring to as >> "modname.so" is "iga_blade_py.so" and the "outer_driver.f90" is called >> merely "run_analysis.f90"--sorry for the confusion): >> >> running build >> running config_cc >> unifing config_cc, config, build_clib, build_ext, build commands >> --compiler options >> running config_fc >> unifing config_fc, config, build_clib, build_ext, build commands >> --fcompiler options >> running build_src >> build_src >> building extension "iga_blade_py" sources >> f2py options: [] >> f2py:> /tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7/iga_blade_pymodule.c >> creating /tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7 >> Reading fortran codes... >> Reading file 'run_analysis.f90' (format:free) >> Post-processing... >> Block: iga_blade_py >> Block: run_analysis >> Post-processing (stage 2)... >> Building modules... >> Building module "iga_blade_py"... >> Constructing wrapper function "run_analysis"... >> run_analysis() >> Wrote C/API module "iga_blade_py" to file "/tmp/tmpIH70ZJ/src.macosx-10. >> 10-x86_64-2.7/iga_blade_pymodule.c" >> adding '/tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7/fortranobject.c' to >> sources. >> adding '/tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7' to include_dirs. >> copying /usr/local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.c >> -> /tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7 >> copying /usr/local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.h >> -> /tmp/tmpIH70ZJ/src.macosx-10.10-x86_64-2.7 >> build_src: building npy-pkg config files >> running build_ext >> customize UnixCCompiler >> customize UnixCCompiler using build_ext >> customize Gnu95FCompiler >> Found executable /usr/local/bin/gfortran >> customize Gnu95FCompiler >> customize Gnu95FCompiler using build_ext >> building 'iga_blade_py' extension >> compiling C sources >> C compiler: clang -fno-strict-aliasing -fno-common -dynamic -g -O2 >> -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes >> >> error: unknown file type '' (from '-Wl,-rpath,/usr/local/Cellar/ >> slepc/3.7.3_4/real/lib') >> make: *** [iga_blade_py.so] Error 1 >> >> On Wed, Mar 22, 2017 at 1:39 PM, Jose E. Roman >> wrote: >> >>> >>> > El 22 mar 2017, a las 19:23, Barry Smith >>> escribi?: >>> > >>> > >>> >> On Mar 22, 2017, at 1:08 PM, Austin Herrema >>> wrote: >>> >> >>> >> Thank you for the suggestion! Seems like a reasonable way to go. Not >>> working for me, however, I suspect because I'm using homebrew installations >>> of PETSc and SLEPc (I don't think all the makefiles are kept). Any other >>> way to do the same thing by chance? Worst case I could use a non-homebrew >>> installation but I'd prefer not to mess with that if I can help it... >>> > >>> > How do you link a "regular" SLEPc C program using the home-brew >>> libraries? You need basically the same link line for f2py as you need for C >>> programs. >>> >>> >>> What Barry may be suggesting is: instead of using a script to invoke >>> f2py, add a rule to your makefile >>> >>> modname.so: outer_driver.f90 >>> f2py -c -m modname outer_driver.f90 file1.o file2.o file3.o >>> ${SLEPC_EPS_LIB} >>> >>> Then 'make modname.so' will pick the libraries from SLEPc makefiles. >>> >>> Jose >>> >>> > >>> >> >>> >> Thanks, >>> >> Austin >>> >> >>> >> On Wed, Mar 22, 2017 at 11:20 AM Jose E. Roman >>> wrote: >>> >> Try the following: >>> >> $ cd $SLEPC_DIR >>> >> $ make getlinklibs_slepc >>> >> Then copy the output and paste it at the end of your f2py command. 
>>> >> >>> >> Jose >>> >> >>> >> >>> >>> El 22 mar 2017, a las 16:38, Austin Herrema >>> escribi?: >>> >>> >>> >>> Hello all, >>> >>> >>> >>> I am trying to do as the subject line describes--use f2py to run a >>> large PETSc/SLEPc fortran finite element code through python. I really only >>> need to wrap the outermost function of the fortran code--don't need any >>> access to subroutines. I'll describe what I'm doing, some of which I'm not >>> 100% confident is correct (not much experience with f2py)--feel free to >>> correct/redirect any of it. >>> >>> >>> >>> First, I'm editing the fortran code so that the top-level function >>> is a subroutine rather than a main program (it's my understanding that this >>> is required for f2py?). >>> >>> >>> >>> I use my regular makefile (modeled after a standard SLEPc makefile >>> from the user guide) to compile all of the .f90/.F90 files (many of them) >>> to .o files using SLEPc/PETSc rules. The final linking phase fails since >>> there isn't a main program, but I'm just ignoring that for now since that's >>> not what I ultimately need... >>> >>> >>> >>> Using a python script, I set up and run the f2py command. Right now >>> it has the form... >>> >>> "f2py -c -m modname outer_driver.f90 file1.o file2.o file3.o..." etc. >>> >>> >>> >>> This appears to work, but upon attempting to import, it cannot find >>> the SLEPc (and, I presume, PETSc) objects/functions: >>> >>> >>> >>>>>> import mod_name >>> >>> Traceback (most recent call last): >>> >>> File "", line 1, in >>> >>> ImportError: dlopen(./mod_name.so, 2): Symbol not found: _epscreate_ >>> >>> Referenced from: ./mod_name.so >>> >>> Expected in: flat namespace >>> >>> in ./mod_name.so >>> >>> >>> >>> Based on this discussion, I believe I need to somehow include >>> PETSc/SLEPc info when linking with f2py. Is that correct? Any direction on >>> how to do that? I don't quite understand what the OP of that question >>> ultimately ended up doing to get it to work. I tried using the -I flag >>> pointing to the slepc_common file (like the SLEPc makefile does). The >>> problem is that that is a file, not a directory, which contains a number of >>> other makefile-style variables--so it works to include it in a makefile, >>> but doesn't work in python. Maybe there are only a few directories I really >>> need to include? Or is it possible to somehow run f2py through a makefile? >>> I'm a bit ignorant in that realm as well. >>> >>> >>> >>> Thank you for any help or suggestions! >>> >>> Austin >>> >>> >>> >>> >>> >>> -- >>> >>> Austin Herrema >>> >>> PhD Student | Graduate Research Assistant | Iowa State University >>> >>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>> >> >>> >> -- >>> >> Austin Herrema >>> >> PhD Student | Graduate Research Assistant | Iowa State University >>> >> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>> > >>> >>> >> >> >> -- >> *Austin Herrema* >> PhD Student | Graduate Research Assistant | Iowa State University >> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >> > > > > -- > *Austin Herrema* > PhD Student | Graduate Research Assistant | Iowa State University > Wind Energy Science, Engineering, and Policy | Mechanical Engineering > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: get_f2py.py Type: text/x-python Size: 565 bytes Desc: not available URL: From balay at mcs.anl.gov Thu Mar 23 13:28:39 2017 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 23 Mar 2017 13:28:39 -0500 Subject: [petsc-users] Update to MUMPS 5.1.1 In-Reply-To: References: Message-ID: On Thu, 23 Mar 2017, Tim Steinhoff wrote: > Hi all, > > there is new version of MUMPS with some nice improvements: > http://mumps.enseeiht.fr/index.php?page=dwnld#cl > Is it possible to update the package in the petsc repository? I have the changes in git branch 'balay/update-mumps-5.1.1'. This change will go into petsc 'master' branch - after testing. The following might work with your current petsc-3.7 or 'master' sources. --download-mumps=1 --download-mumps-commit=v5.1.1-p1 Satish From fande.kong at inl.gov Thu Mar 23 16:57:08 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Thu, 23 Mar 2017 15:57:08 -0600 Subject: [petsc-users] coloring algorithms In-Reply-To: References: Message-ID: Thanks, Hong, I did some tests with a matrix (40x40): *row 0: (0, 1.) (2, 1.) (3, 1.) (11, 1.) (14, 1.) (15, 1.) (19, 1.) (22, 1.) (23, 1.) (24, 1.) (27, 1.) (28, 1.) row 1: (1, 1.) (2, 1.) (3, 1.) (6, 1.) (16, 1.) (17, 1.) (18, 1.) (21, 1.) (33, 1.) row 2: (0, 1.) (1, 1.) (2, 1.) (3, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (16, 1.) (19, 1.) (20, 1.) row 3: (0, 1.) (1, 1.) (2, 1.) (3, 1.) (5, 1.) (11, 1.) (18, 1.) (19, 1.) (21, 1.) (22, 1.) (31, 1.) (33, 1.) row 4: (4, 1.) (14, 1.) (15, 1.) (19, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) (38, 1.) row 5: (3, 1.) (5, 1.) (11, 1.) (17, 1.) (22, 1.) (26, 1.) (31, 1.) (32, 1.) (33, 1.) (34, 1.) row 6: (1, 1.) (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (16, 1.) (20, 1.) (25, 1.) (30, 1.) row 7: (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (13, 1.) (17, 1.) (20, 1.) (32, 1.) (34, 1.) row 8: (8, 1.) (9, 1.) (12, 1.) (13, 1.) (26, 1.) (29, 1.) (30, 1.) (36, 1.) (38, 1.) (39, 1.) row 9: (2, 1.) (6, 1.) (7, 1.) (8, 1.) (9, 1.) (10, 1.) (13, 1.) (16, 1.) (17, 1.) (20, 1.) (25, 1.) (30, 1.) (34, 1.) row 10: (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (19, 1.) (20, 1.) (29, 1.) (32, 1.) (34, 1.) row 11: (0, 1.) (3, 1.) (5, 1.) (11, 1.) (12, 1.) (14, 1.) (15, 1.) (19, 1.) (22, 1.) (23, 1.) (26, 1.) (27, 1.) (31, 1.) row 12: (8, 1.) (11, 1.) (12, 1.) (13, 1.) (15, 1.) (22, 1.) (23, 1.) (26, 1.) (27, 1.) (35, 1.) (36, 1.) (39, 1.) row 13: (7, 1.) (8, 1.) (9, 1.) (12, 1.) (13, 1.) (17, 1.) (23, 1.) (26, 1.) (30, 1.) (34, 1.) (35, 1.) (36, 1.) row 14: (0, 1.) (4, 1.) (11, 1.) (14, 1.) (15, 1.) (19, 1.) (21, 1.) (23, 1.) (24, 1.) (25, 1.) (28, 1.) (38, 1.) row 15: (0, 1.) (4, 1.) (11, 1.) (12, 1.) (14, 1.) (15, 1.) (18, 1.) (21, 1.) (23, 1.) (25, 1.) (27, 1.) (28, 1.) (35, 1.) (36, 1.) row 16: (1, 1.) (2, 1.) (6, 1.) (9, 1.) (16, 1.) (18, 1.) (21, 1.) (25, 1.) (30, 1.) row 17: (1, 1.) (5, 1.) (7, 1.) (9, 1.) (13, 1.) (17, 1.) (18, 1.) (21, 1.) (31, 1.) (33, 1.) (34, 1.) (35, 1.) (36, 1.) row 18: (1, 1.) (3, 1.) (15, 1.) (16, 1.) (17, 1.) (18, 1.) (21, 1.) (23, 1.) (31, 1.) (33, 1.) (35, 1.) (36, 1.) row 19: (0, 1.) (2, 1.) (3, 1.) (4, 1.) (10, 1.) (11, 1.) (14, 1.) (19, 1.) (20, 1.) (24, 1.) (29, 1.) (32, 1.) (38, 1.) row 20: (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (19, 1.) (20, 1.) row 21: (1, 1.) (3, 1.) (14, 1.) (15, 1.) (16, 1.) (17, 1.) (18, 1.) (21, 1.) (23, 1.) (25, 1.) (28, 1.) (30, 1.) (33, 1.) (35, 1.) row 22: (0, 1.) (3, 1.) (5, 1.) (11, 1.) (12, 1.) (22, 1.) (26, 1.) (27, 1.) (31, 1.) (32, 1.) (33, 1.) (34, 1.) row 23: (0, 1.) (11, 1.) (12, 1.) (13, 1.) (14, 1.) (15, 1.) 
(18, 1.) (21, 1.) (23, 1.) (27, 1.) (35, 1.) (36, 1.) row 24: (0, 1.) (4, 1.) (14, 1.) (19, 1.) (24, 1.) (25, 1.) (28, 1.) (29, 1.) (30, 1.) (37, 1.) (38, 1.) row 25: (4, 1.) (6, 1.) (9, 1.) (14, 1.) (15, 1.) (16, 1.) (21, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) row 26: (5, 1.) (8, 1.) (11, 1.) (12, 1.) (13, 1.) (22, 1.) (26, 1.) (27, 1.) (29, 1.) (32, 1.) (39, 1.) row 27: (0, 1.) (11, 1.) (12, 1.) (15, 1.) (22, 1.) (23, 1.) (26, 1.) (27, 1.) (35, 1.) (36, 1.) row 28: (0, 1.) (4, 1.) (14, 1.) (15, 1.) (21, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) row 29: (8, 1.) (10, 1.) (19, 1.) (24, 1.) (26, 1.) (29, 1.) (32, 1.) (34, 1.) (38, 1.) (39, 1.) row 30: (4, 1.) (6, 1.) (8, 1.) (9, 1.) (13, 1.) (16, 1.) (21, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) (38, 1.) row 31: (3, 1.) (5, 1.) (11, 1.) (17, 1.) (18, 1.) (22, 1.) (31, 1.) (33, 1.) (34, 1.) row 32: (5, 1.) (7, 1.) (10, 1.) (19, 1.) (22, 1.) (26, 1.) (29, 1.) (32, 1.) (34, 1.) (39, 1.) row 33: (1, 1.) (3, 1.) (5, 1.) (17, 1.) (18, 1.) (21, 1.) (22, 1.) (31, 1.) (33, 1.) (34, 1.) (35, 1.) row 34: (5, 1.) (7, 1.) (9, 1.) (10, 1.) (13, 1.) (17, 1.) (22, 1.) (29, 1.) (31, 1.) (32, 1.) (33, 1.) (34, 1.) row 35: (12, 1.) (13, 1.) (15, 1.) (17, 1.) (18, 1.) (21, 1.) (23, 1.) (27, 1.) (33, 1.) (35, 1.) (36, 1.) row 36: (8, 1.) (12, 1.) (13, 1.) (15, 1.) (17, 1.) (18, 1.) (23, 1.) (27, 1.) (35, 1.) (36, 1.) row 37: (4, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) (38, 1.) row 38: (4, 1.) (8, 1.) (14, 1.) (19, 1.) (24, 1.) (29, 1.) (30, 1.) (37, 1.) (38, 1.) (39, 1.) row 39: (8, 1.) (12, 1.) (26, 1.) (29, 1.) (32, 1.) (38, 1.) (39, 1.) * A native back-tracking gives 8 colors, but all the algorithms in PETSc give 20 colors. Is it supposed to be like this? Fande, On Thu, Mar 23, 2017 at 10:50 AM, Hong wrote: > Fande, >> > > >> I was wondering if the coloring approaches listed online are working? >> Which ones are in parallel, and which ones are in sequential? >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/ >> Mat/MatColoringType.html#MatColoringType >> >> > > JP and Greedy are parallel. > >> >> If the coloring is in parallel, can it be used with the finite difference >> to compute the Jacobian? Any limitations? >> > > Yes, they work quite well. Git it a try. Let us know if you encounter any > problem. > > Hong > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Mar 23 17:02:23 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 23 Mar 2017 17:02:23 -0500 Subject: [petsc-users] coloring algorithms In-Reply-To: References: Message-ID: Please send the matrix as a binary file. Are you computing a distance one coloring or distance 2. 2 is needed for Jacobians. > On Mar 23, 2017, at 4:57 PM, Kong, Fande wrote: > > Thanks, Hong, > > I did some tests with a matrix (40x40): > > row 0: (0, 1.) (2, 1.) (3, 1.) (11, 1.) (14, 1.) (15, 1.) (19, 1.) (22, 1.) (23, 1.) (24, 1.) (27, 1.) (28, 1.) > row 1: (1, 1.) (2, 1.) (3, 1.) (6, 1.) (16, 1.) (17, 1.) (18, 1.) (21, 1.) (33, 1.) > row 2: (0, 1.) (1, 1.) (2, 1.) (3, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (16, 1.) (19, 1.) (20, 1.) > row 3: (0, 1.) (1, 1.) (2, 1.) (3, 1.) (5, 1.) (11, 1.) (18, 1.) (19, 1.) (21, 1.) (22, 1.) (31, 1.) (33, 1.) > row 4: (4, 1.) (14, 1.) (15, 1.) (19, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) (38, 1.) > row 5: (3, 1.) (5, 1.) (11, 1.) (17, 1.) (22, 1.) (26, 1.) (31, 1.) (32, 1.) (33, 1.) (34, 1.) > row 6: (1, 1.) (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (16, 1.) (20, 1.) 
(25, 1.) (30, 1.) > row 7: (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (13, 1.) (17, 1.) (20, 1.) (32, 1.) (34, 1.) > row 8: (8, 1.) (9, 1.) (12, 1.) (13, 1.) (26, 1.) (29, 1.) (30, 1.) (36, 1.) (38, 1.) (39, 1.) > row 9: (2, 1.) (6, 1.) (7, 1.) (8, 1.) (9, 1.) (10, 1.) (13, 1.) (16, 1.) (17, 1.) (20, 1.) (25, 1.) (30, 1.) (34, 1.) > row 10: (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (19, 1.) (20, 1.) (29, 1.) (32, 1.) (34, 1.) > row 11: (0, 1.) (3, 1.) (5, 1.) (11, 1.) (12, 1.) (14, 1.) (15, 1.) (19, 1.) (22, 1.) (23, 1.) (26, 1.) (27, 1.) (31, 1.) > row 12: (8, 1.) (11, 1.) (12, 1.) (13, 1.) (15, 1.) (22, 1.) (23, 1.) (26, 1.) (27, 1.) (35, 1.) (36, 1.) (39, 1.) > row 13: (7, 1.) (8, 1.) (9, 1.) (12, 1.) (13, 1.) (17, 1.) (23, 1.) (26, 1.) (30, 1.) (34, 1.) (35, 1.) (36, 1.) > row 14: (0, 1.) (4, 1.) (11, 1.) (14, 1.) (15, 1.) (19, 1.) (21, 1.) (23, 1.) (24, 1.) (25, 1.) (28, 1.) (38, 1.) > row 15: (0, 1.) (4, 1.) (11, 1.) (12, 1.) (14, 1.) (15, 1.) (18, 1.) (21, 1.) (23, 1.) (25, 1.) (27, 1.) (28, 1.) (35, 1.) (36, 1.) > row 16: (1, 1.) (2, 1.) (6, 1.) (9, 1.) (16, 1.) (18, 1.) (21, 1.) (25, 1.) (30, 1.) > row 17: (1, 1.) (5, 1.) (7, 1.) (9, 1.) (13, 1.) (17, 1.) (18, 1.) (21, 1.) (31, 1.) (33, 1.) (34, 1.) (35, 1.) (36, 1.) > row 18: (1, 1.) (3, 1.) (15, 1.) (16, 1.) (17, 1.) (18, 1.) (21, 1.) (23, 1.) (31, 1.) (33, 1.) (35, 1.) (36, 1.) > row 19: (0, 1.) (2, 1.) (3, 1.) (4, 1.) (10, 1.) (11, 1.) (14, 1.) (19, 1.) (20, 1.) (24, 1.) (29, 1.) (32, 1.) (38, 1.) > row 20: (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (19, 1.) (20, 1.) > row 21: (1, 1.) (3, 1.) (14, 1.) (15, 1.) (16, 1.) (17, 1.) (18, 1.) (21, 1.) (23, 1.) (25, 1.) (28, 1.) (30, 1.) (33, 1.) (35, 1.) > row 22: (0, 1.) (3, 1.) (5, 1.) (11, 1.) (12, 1.) (22, 1.) (26, 1.) (27, 1.) (31, 1.) (32, 1.) (33, 1.) (34, 1.) > row 23: (0, 1.) (11, 1.) (12, 1.) (13, 1.) (14, 1.) (15, 1.) (18, 1.) (21, 1.) (23, 1.) (27, 1.) (35, 1.) (36, 1.) > row 24: (0, 1.) (4, 1.) (14, 1.) (19, 1.) (24, 1.) (25, 1.) (28, 1.) (29, 1.) (30, 1.) (37, 1.) (38, 1.) > row 25: (4, 1.) (6, 1.) (9, 1.) (14, 1.) (15, 1.) (16, 1.) (21, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) > row 26: (5, 1.) (8, 1.) (11, 1.) (12, 1.) (13, 1.) (22, 1.) (26, 1.) (27, 1.) (29, 1.) (32, 1.) (39, 1.) > row 27: (0, 1.) (11, 1.) (12, 1.) (15, 1.) (22, 1.) (23, 1.) (26, 1.) (27, 1.) (35, 1.) (36, 1.) > row 28: (0, 1.) (4, 1.) (14, 1.) (15, 1.) (21, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) > row 29: (8, 1.) (10, 1.) (19, 1.) (24, 1.) (26, 1.) (29, 1.) (32, 1.) (34, 1.) (38, 1.) (39, 1.) > row 30: (4, 1.) (6, 1.) (8, 1.) (9, 1.) (13, 1.) (16, 1.) (21, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) (38, 1.) > row 31: (3, 1.) (5, 1.) (11, 1.) (17, 1.) (18, 1.) (22, 1.) (31, 1.) (33, 1.) (34, 1.) > row 32: (5, 1.) (7, 1.) (10, 1.) (19, 1.) (22, 1.) (26, 1.) (29, 1.) (32, 1.) (34, 1.) (39, 1.) > row 33: (1, 1.) (3, 1.) (5, 1.) (17, 1.) (18, 1.) (21, 1.) (22, 1.) (31, 1.) (33, 1.) (34, 1.) (35, 1.) > row 34: (5, 1.) (7, 1.) (9, 1.) (10, 1.) (13, 1.) (17, 1.) (22, 1.) (29, 1.) (31, 1.) (32, 1.) (33, 1.) (34, 1.) > row 35: (12, 1.) (13, 1.) (15, 1.) (17, 1.) (18, 1.) (21, 1.) (23, 1.) (27, 1.) (33, 1.) (35, 1.) (36, 1.) > row 36: (8, 1.) (12, 1.) (13, 1.) (15, 1.) (17, 1.) (18, 1.) (23, 1.) (27, 1.) (35, 1.) (36, 1.) > row 37: (4, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) (38, 1.) > row 38: (4, 1.) (8, 1.) (14, 1.) (19, 1.) (24, 1.) (29, 1.) (30, 1.) (37, 1.) (38, 1.) (39, 1.) > row 39: (8, 1.) (12, 1.) (26, 1.) (29, 1.) (32, 1.) (38, 1.) (39, 1.) 
> > > A native back-tracking gives 8 colors, but all the algorithms in PETSc give 20 colors. Is it supposed to be like this? > > Fande, > > > On Thu, Mar 23, 2017 at 10:50 AM, Hong wrote: > Fande, > > I was wondering if the coloring approaches listed online are working? Which ones are in parallel, and which ones are in sequential? > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatColoringType.html#MatColoringType > > JP and Greedy are parallel. > > If the coloring is in parallel, can it be used with the finite difference to compute the Jacobian? Any limitations? > > Yes, they work quite well. Git it a try. Let us know if you encounter any problem. > > Hong > > From fande.kong at inl.gov Thu Mar 23 17:35:54 2017 From: fande.kong at inl.gov (Kong, Fande) Date: Thu, 23 Mar 2017 16:35:54 -0600 Subject: [petsc-users] coloring algorithms In-Reply-To: References: Message-ID: Thanks, Barry, On Thu, Mar 23, 2017 at 4:02 PM, Barry Smith wrote: > > Please send the matrix as a binary file. > > Are you computing a distance one coloring or distance 2. 2 is needed > for Jacobians. > The matrix does not come from PDE, and it is from a grain-tracking thing. Distance 1 did magic work. We have 8 colors now using JP, power,.. Thanks. Fande, > > > > On Mar 23, 2017, at 4:57 PM, Kong, Fande wrote: > > > > Thanks, Hong, > > > > I did some tests with a matrix (40x40): > > > > row 0: (0, 1.) (2, 1.) (3, 1.) (11, 1.) (14, 1.) (15, 1.) (19, > 1.) (22, 1.) (23, 1.) (24, 1.) (27, 1.) (28, 1.) > > row 1: (1, 1.) (2, 1.) (3, 1.) (6, 1.) (16, 1.) (17, 1.) (18, 1.) > (21, 1.) (33, 1.) > > row 2: (0, 1.) (1, 1.) (2, 1.) (3, 1.) (6, 1.) (7, 1.) (9, 1.) > (10, 1.) (16, 1.) (19, 1.) (20, 1.) > > row 3: (0, 1.) (1, 1.) (2, 1.) (3, 1.) (5, 1.) (11, 1.) (18, 1.) > (19, 1.) (21, 1.) (22, 1.) (31, 1.) (33, 1.) > > row 4: (4, 1.) (14, 1.) (15, 1.) (19, 1.) (24, 1.) (25, 1.) (28, > 1.) (30, 1.) (37, 1.) (38, 1.) > > row 5: (3, 1.) (5, 1.) (11, 1.) (17, 1.) (22, 1.) (26, 1.) (31, > 1.) (32, 1.) (33, 1.) (34, 1.) > > row 6: (1, 1.) (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (16, 1.) > (20, 1.) (25, 1.) (30, 1.) > > row 7: (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (13, 1.) (17, 1.) > (20, 1.) (32, 1.) (34, 1.) > > row 8: (8, 1.) (9, 1.) (12, 1.) (13, 1.) (26, 1.) (29, 1.) (30, > 1.) (36, 1.) (38, 1.) (39, 1.) > > row 9: (2, 1.) (6, 1.) (7, 1.) (8, 1.) (9, 1.) (10, 1.) (13, 1.) > (16, 1.) (17, 1.) (20, 1.) (25, 1.) (30, 1.) (34, 1.) > > row 10: (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (19, 1.) (20, > 1.) (29, 1.) (32, 1.) (34, 1.) > > row 11: (0, 1.) (3, 1.) (5, 1.) (11, 1.) (12, 1.) (14, 1.) (15, > 1.) (19, 1.) (22, 1.) (23, 1.) (26, 1.) (27, 1.) (31, 1.) > > row 12: (8, 1.) (11, 1.) (12, 1.) (13, 1.) (15, 1.) (22, 1.) (23, > 1.) (26, 1.) (27, 1.) (35, 1.) (36, 1.) (39, 1.) > > row 13: (7, 1.) (8, 1.) (9, 1.) (12, 1.) (13, 1.) (17, 1.) (23, > 1.) (26, 1.) (30, 1.) (34, 1.) (35, 1.) (36, 1.) > > row 14: (0, 1.) (4, 1.) (11, 1.) (14, 1.) (15, 1.) (19, 1.) (21, > 1.) (23, 1.) (24, 1.) (25, 1.) (28, 1.) (38, 1.) > > row 15: (0, 1.) (4, 1.) (11, 1.) (12, 1.) (14, 1.) (15, 1.) (18, > 1.) (21, 1.) (23, 1.) (25, 1.) (27, 1.) (28, 1.) (35, 1.) (36, 1.) > > row 16: (1, 1.) (2, 1.) (6, 1.) (9, 1.) (16, 1.) (18, 1.) (21, > 1.) (25, 1.) (30, 1.) > > row 17: (1, 1.) (5, 1.) (7, 1.) (9, 1.) (13, 1.) (17, 1.) (18, > 1.) (21, 1.) (31, 1.) (33, 1.) (34, 1.) (35, 1.) (36, 1.) > > row 18: (1, 1.) (3, 1.) (15, 1.) (16, 1.) (17, 1.) (18, 1.) (21, > 1.) (23, 1.) (31, 1.) (33, 1.) (35, 1.) (36, 1.) > > row 19: (0, 1.) (2, 1.) (3, 1.) (4, 1.) 
(10, 1.) (11, 1.) (14, > 1.) (19, 1.) (20, 1.) (24, 1.) (29, 1.) (32, 1.) (38, 1.) > > row 20: (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (19, 1.) (20, 1.) > > row 21: (1, 1.) (3, 1.) (14, 1.) (15, 1.) (16, 1.) (17, 1.) (18, > 1.) (21, 1.) (23, 1.) (25, 1.) (28, 1.) (30, 1.) (33, 1.) (35, 1.) > > row 22: (0, 1.) (3, 1.) (5, 1.) (11, 1.) (12, 1.) (22, 1.) (26, > 1.) (27, 1.) (31, 1.) (32, 1.) (33, 1.) (34, 1.) > > row 23: (0, 1.) (11, 1.) (12, 1.) (13, 1.) (14, 1.) (15, 1.) (18, > 1.) (21, 1.) (23, 1.) (27, 1.) (35, 1.) (36, 1.) > > row 24: (0, 1.) (4, 1.) (14, 1.) (19, 1.) (24, 1.) (25, 1.) (28, > 1.) (29, 1.) (30, 1.) (37, 1.) (38, 1.) > > row 25: (4, 1.) (6, 1.) (9, 1.) (14, 1.) (15, 1.) (16, 1.) (21, > 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) > > row 26: (5, 1.) (8, 1.) (11, 1.) (12, 1.) (13, 1.) (22, 1.) (26, > 1.) (27, 1.) (29, 1.) (32, 1.) (39, 1.) > > row 27: (0, 1.) (11, 1.) (12, 1.) (15, 1.) (22, 1.) (23, 1.) (26, > 1.) (27, 1.) (35, 1.) (36, 1.) > > row 28: (0, 1.) (4, 1.) (14, 1.) (15, 1.) (21, 1.) (24, 1.) (25, > 1.) (28, 1.) (30, 1.) (37, 1.) > > row 29: (8, 1.) (10, 1.) (19, 1.) (24, 1.) (26, 1.) (29, 1.) (32, > 1.) (34, 1.) (38, 1.) (39, 1.) > > row 30: (4, 1.) (6, 1.) (8, 1.) (9, 1.) (13, 1.) (16, 1.) (21, > 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) (38, 1.) > > row 31: (3, 1.) (5, 1.) (11, 1.) (17, 1.) (18, 1.) (22, 1.) (31, > 1.) (33, 1.) (34, 1.) > > row 32: (5, 1.) (7, 1.) (10, 1.) (19, 1.) (22, 1.) (26, 1.) (29, > 1.) (32, 1.) (34, 1.) (39, 1.) > > row 33: (1, 1.) (3, 1.) (5, 1.) (17, 1.) (18, 1.) (21, 1.) (22, > 1.) (31, 1.) (33, 1.) (34, 1.) (35, 1.) > > row 34: (5, 1.) (7, 1.) (9, 1.) (10, 1.) (13, 1.) (17, 1.) (22, > 1.) (29, 1.) (31, 1.) (32, 1.) (33, 1.) (34, 1.) > > row 35: (12, 1.) (13, 1.) (15, 1.) (17, 1.) (18, 1.) (21, 1.) (23, > 1.) (27, 1.) (33, 1.) (35, 1.) (36, 1.) > > row 36: (8, 1.) (12, 1.) (13, 1.) (15, 1.) (17, 1.) (18, 1.) (23, > 1.) (27, 1.) (35, 1.) (36, 1.) > > row 37: (4, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) (38, > 1.) > > row 38: (4, 1.) (8, 1.) (14, 1.) (19, 1.) (24, 1.) (29, 1.) (30, > 1.) (37, 1.) (38, 1.) (39, 1.) > > row 39: (8, 1.) (12, 1.) (26, 1.) (29, 1.) (32, 1.) (38, 1.) (39, > 1.) > > > > > > A native back-tracking gives 8 colors, but all the algorithms in PETSc > give 20 colors. Is it supposed to be like this? > > > > Fande, > > > > > > On Thu, Mar 23, 2017 at 10:50 AM, Hong wrote: > > Fande, > > > > I was wondering if the coloring approaches listed online are working? > Which ones are in parallel, and which ones are in sequential? > > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__www.mcs. > anl.gov_petsc_petsc-2Dcurrent_docs_manualpages_Mat_MatColoringType.html- > 23MatColoringType&d=DwIFAg&c=54IZrppPQZKX9mLzcGdPfFD1hxrcB_ > _aEkJFOKJFd00&r=DUUt3SRGI0_JgtNaS3udV68GRkgV4ts7XKfj2opmiCY&m= > VM8Mcai7YBTCMhYbGyMpwJvGX9atqPIWinrgSFeqUgM&s= > iUNa3SvixuSDyCXSXyjpn0kFV6u6kMspf5e0Uhqrssw&e= > > > > JP and Greedy are parallel. > > > > If the coloring is in parallel, can it be used with the finite > difference to compute the Jacobian? Any limitations? > > > > Yes, they work quite well. Git it a try. Let us know if you encounter > any problem. > > > > Hong > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
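For reference, a minimal C sketch of the distance-1 coloring setup discussed in this thread (an illustration only, not the grain-tracking code itself; the Mat A is assumed to be assembled already and error checking is trimmed):

#include <petscmat.h>

/* Compute a distance-1 coloring of A with the parallel JP algorithm. */
static PetscErrorCode ColorGraph(Mat A)
{
  MatColoring    mc;
  ISColoring     iscoloring;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatColoringCreate(A,&mc);CHKERRQ(ierr);
  ierr = MatColoringSetType(mc,MATCOLORINGJP);CHKERRQ(ierr);   /* MATCOLORINGGREEDY is the other parallel type */
  ierr = MatColoringSetDistance(mc,1);CHKERRQ(ierr);           /* use 2 when the coloring drives finite-difference Jacobians */
  ierr = MatColoringSetFromOptions(mc);CHKERRQ(ierr);          /* honors -mat_coloring_type, -mat_coloring_distance */
  ierr = MatColoringApply(mc,&iscoloring);CHKERRQ(ierr);
  ierr = ISColoringView(iscoloring,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  ierr = ISColoringDestroy(&iscoloring);CHKERRQ(ierr);
  ierr = MatColoringDestroy(&mc);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}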
URL: From bsmith at mcs.anl.gov Thu Mar 23 17:51:41 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 23 Mar 2017 17:51:41 -0500 Subject: [petsc-users] coloring algorithms In-Reply-To: References: Message-ID: > On Mar 23, 2017, at 5:35 PM, Kong, Fande wrote: > > Thanks, Barry, > > On Thu, Mar 23, 2017 at 4:02 PM, Barry Smith wrote: > > Please send the matrix as a binary file. > > Are you computing a distance one coloring or distance 2. 2 is needed for Jacobians. > > The matrix does not come from PDE, and it is from a grain-tracking thing. Distance 1 did magic work. We have 8 colors now using JP, power,.. What are you going to use the coloring for? Barry > > Thanks. > > Fande, > > > > > > On Mar 23, 2017, at 4:57 PM, Kong, Fande wrote: > > > > Thanks, Hong, > > > > I did some tests with a matrix (40x40): > > > > row 0: (0, 1.) (2, 1.) (3, 1.) (11, 1.) (14, 1.) (15, 1.) (19, 1.) (22, 1.) (23, 1.) (24, 1.) (27, 1.) (28, 1.) > > row 1: (1, 1.) (2, 1.) (3, 1.) (6, 1.) (16, 1.) (17, 1.) (18, 1.) (21, 1.) (33, 1.) > > row 2: (0, 1.) (1, 1.) (2, 1.) (3, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (16, 1.) (19, 1.) (20, 1.) > > row 3: (0, 1.) (1, 1.) (2, 1.) (3, 1.) (5, 1.) (11, 1.) (18, 1.) (19, 1.) (21, 1.) (22, 1.) (31, 1.) (33, 1.) > > row 4: (4, 1.) (14, 1.) (15, 1.) (19, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) (38, 1.) > > row 5: (3, 1.) (5, 1.) (11, 1.) (17, 1.) (22, 1.) (26, 1.) (31, 1.) (32, 1.) (33, 1.) (34, 1.) > > row 6: (1, 1.) (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (16, 1.) (20, 1.) (25, 1.) (30, 1.) > > row 7: (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (13, 1.) (17, 1.) (20, 1.) (32, 1.) (34, 1.) > > row 8: (8, 1.) (9, 1.) (12, 1.) (13, 1.) (26, 1.) (29, 1.) (30, 1.) (36, 1.) (38, 1.) (39, 1.) > > row 9: (2, 1.) (6, 1.) (7, 1.) (8, 1.) (9, 1.) (10, 1.) (13, 1.) (16, 1.) (17, 1.) (20, 1.) (25, 1.) (30, 1.) (34, 1.) > > row 10: (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (19, 1.) (20, 1.) (29, 1.) (32, 1.) (34, 1.) > > row 11: (0, 1.) (3, 1.) (5, 1.) (11, 1.) (12, 1.) (14, 1.) (15, 1.) (19, 1.) (22, 1.) (23, 1.) (26, 1.) (27, 1.) (31, 1.) > > row 12: (8, 1.) (11, 1.) (12, 1.) (13, 1.) (15, 1.) (22, 1.) (23, 1.) (26, 1.) (27, 1.) (35, 1.) (36, 1.) (39, 1.) > > row 13: (7, 1.) (8, 1.) (9, 1.) (12, 1.) (13, 1.) (17, 1.) (23, 1.) (26, 1.) (30, 1.) (34, 1.) (35, 1.) (36, 1.) > > row 14: (0, 1.) (4, 1.) (11, 1.) (14, 1.) (15, 1.) (19, 1.) (21, 1.) (23, 1.) (24, 1.) (25, 1.) (28, 1.) (38, 1.) > > row 15: (0, 1.) (4, 1.) (11, 1.) (12, 1.) (14, 1.) (15, 1.) (18, 1.) (21, 1.) (23, 1.) (25, 1.) (27, 1.) (28, 1.) (35, 1.) (36, 1.) > > row 16: (1, 1.) (2, 1.) (6, 1.) (9, 1.) (16, 1.) (18, 1.) (21, 1.) (25, 1.) (30, 1.) > > row 17: (1, 1.) (5, 1.) (7, 1.) (9, 1.) (13, 1.) (17, 1.) (18, 1.) (21, 1.) (31, 1.) (33, 1.) (34, 1.) (35, 1.) (36, 1.) > > row 18: (1, 1.) (3, 1.) (15, 1.) (16, 1.) (17, 1.) (18, 1.) (21, 1.) (23, 1.) (31, 1.) (33, 1.) (35, 1.) (36, 1.) > > row 19: (0, 1.) (2, 1.) (3, 1.) (4, 1.) (10, 1.) (11, 1.) (14, 1.) (19, 1.) (20, 1.) (24, 1.) (29, 1.) (32, 1.) (38, 1.) > > row 20: (2, 1.) (6, 1.) (7, 1.) (9, 1.) (10, 1.) (19, 1.) (20, 1.) > > row 21: (1, 1.) (3, 1.) (14, 1.) (15, 1.) (16, 1.) (17, 1.) (18, 1.) (21, 1.) (23, 1.) (25, 1.) (28, 1.) (30, 1.) (33, 1.) (35, 1.) > > row 22: (0, 1.) (3, 1.) (5, 1.) (11, 1.) (12, 1.) (22, 1.) (26, 1.) (27, 1.) (31, 1.) (32, 1.) (33, 1.) (34, 1.) > > row 23: (0, 1.) (11, 1.) (12, 1.) (13, 1.) (14, 1.) (15, 1.) (18, 1.) (21, 1.) (23, 1.) (27, 1.) (35, 1.) (36, 1.) > > row 24: (0, 1.) (4, 1.) (14, 1.) (19, 1.) (24, 1.) (25, 1.) (28, 1.) 
(29, 1.) (30, 1.) (37, 1.) (38, 1.) > > row 25: (4, 1.) (6, 1.) (9, 1.) (14, 1.) (15, 1.) (16, 1.) (21, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) > > row 26: (5, 1.) (8, 1.) (11, 1.) (12, 1.) (13, 1.) (22, 1.) (26, 1.) (27, 1.) (29, 1.) (32, 1.) (39, 1.) > > row 27: (0, 1.) (11, 1.) (12, 1.) (15, 1.) (22, 1.) (23, 1.) (26, 1.) (27, 1.) (35, 1.) (36, 1.) > > row 28: (0, 1.) (4, 1.) (14, 1.) (15, 1.) (21, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) > > row 29: (8, 1.) (10, 1.) (19, 1.) (24, 1.) (26, 1.) (29, 1.) (32, 1.) (34, 1.) (38, 1.) (39, 1.) > > row 30: (4, 1.) (6, 1.) (8, 1.) (9, 1.) (13, 1.) (16, 1.) (21, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) (38, 1.) > > row 31: (3, 1.) (5, 1.) (11, 1.) (17, 1.) (18, 1.) (22, 1.) (31, 1.) (33, 1.) (34, 1.) > > row 32: (5, 1.) (7, 1.) (10, 1.) (19, 1.) (22, 1.) (26, 1.) (29, 1.) (32, 1.) (34, 1.) (39, 1.) > > row 33: (1, 1.) (3, 1.) (5, 1.) (17, 1.) (18, 1.) (21, 1.) (22, 1.) (31, 1.) (33, 1.) (34, 1.) (35, 1.) > > row 34: (5, 1.) (7, 1.) (9, 1.) (10, 1.) (13, 1.) (17, 1.) (22, 1.) (29, 1.) (31, 1.) (32, 1.) (33, 1.) (34, 1.) > > row 35: (12, 1.) (13, 1.) (15, 1.) (17, 1.) (18, 1.) (21, 1.) (23, 1.) (27, 1.) (33, 1.) (35, 1.) (36, 1.) > > row 36: (8, 1.) (12, 1.) (13, 1.) (15, 1.) (17, 1.) (18, 1.) (23, 1.) (27, 1.) (35, 1.) (36, 1.) > > row 37: (4, 1.) (24, 1.) (25, 1.) (28, 1.) (30, 1.) (37, 1.) (38, 1.) > > row 38: (4, 1.) (8, 1.) (14, 1.) (19, 1.) (24, 1.) (29, 1.) (30, 1.) (37, 1.) (38, 1.) (39, 1.) > > row 39: (8, 1.) (12, 1.) (26, 1.) (29, 1.) (32, 1.) (38, 1.) (39, 1.) > > > > > > A native back-tracking gives 8 colors, but all the algorithms in PETSc give 20 colors. Is it supposed to be like this? > > > > Fande, > > > > > > On Thu, Mar 23, 2017 at 10:50 AM, Hong wrote: > > Fande, > > > > I was wondering if the coloring approaches listed online are working? Which ones are in parallel, and which ones are in sequential? > > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__www.mcs.anl.gov_petsc_petsc-2Dcurrent_docs_manualpages_Mat_MatColoringType.html-23MatColoringType&d=DwIFAg&c=54IZrppPQZKX9mLzcGdPfFD1hxrcB__aEkJFOKJFd00&r=DUUt3SRGI0_JgtNaS3udV68GRkgV4ts7XKfj2opmiCY&m=VM8Mcai7YBTCMhYbGyMpwJvGX9atqPIWinrgSFeqUgM&s=iUNa3SvixuSDyCXSXyjpn0kFV6u6kMspf5e0Uhqrssw&e= > > > > JP and Greedy are parallel. > > > > If the coloring is in parallel, can it be used with the finite difference to compute the Jacobian? Any limitations? > > > > Yes, they work quite well. Git it a try. Let us know if you encounter any problem. > > > > Hong > > > > > > From ztdepyahoo at 163.com Thu Mar 23 20:31:18 2017 From: ztdepyahoo at 163.com (=?GBK?B?tqHAz8qm?=) Date: Fri, 24 Mar 2017 09:31:18 +0800 (CST) Subject: [petsc-users] How to know the cpu rank that hold the minimum vec values Message-ID: <3b89abcd.6846ad.15afdf07376.Coremail.ztdepyahoo@163.com> Dear professor: I got the location and value of the minimum in a vec with VecMin, But how to know the rank of CPU which holds the minimum values. This rank should be known in the whole communicator. Regards -------------- next part -------------- An HTML attachment was scrubbed... 
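One possible way to do this is sketched below in C (an illustration with error checking trimmed; the reply that follows suggests the same VecGetOwnershipRanges-based approach). Since VecMin is collective and returns the same index and value on every process, every rank can determine the owner for itself:

#include <petscvec.h>

/* Find the global index, value, and owning rank of the minimum entry of x. */
static PetscErrorCode VecMinOwner(Vec x,PetscInt *loc,PetscReal *val,PetscMPIInt *owner)
{
  const PetscInt *ranges;
  PetscMPIInt     size,r;
  PetscErrorCode  ierr;

  PetscFunctionBeginUser;
  ierr   = VecMin(x,loc,val);CHKERRQ(ierr);                 /* same loc/val on all ranks */
  ierr   = VecGetOwnershipRanges(x,&ranges);CHKERRQ(ierr);  /* array with size+1 entries */
  ierr   = MPI_Comm_size(PetscObjectComm((PetscObject)x),&size);CHKERRQ(ierr);
  *owner = -1;
  for (r = 0; r < size; r++) {                              /* linear scan; bisection also works */
    if (*loc >= ranges[r] && *loc < ranges[r+1]) {*owner = r; break;}
  }
  PetscFunctionReturn(0);
}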
URL: From bsmith at mcs.anl.gov Thu Mar 23 20:40:37 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 23 Mar 2017 20:40:37 -0500 Subject: [petsc-users] How to know the cpu rank that hold the minimum vec values In-Reply-To: <3b89abcd.6846ad.15afdf07376.Coremail.ztdepyahoo@163.com> References: <3b89abcd.6846ad.15afdf07376.Coremail.ztdepyahoo@163.com> Message-ID: <714B6D68-3E93-434B-AF5B-165C5B83EC45@mcs.anl.gov> > On Mar 23, 2017, at 8:31 PM, ??? wrote: > > Dear professor: > I got the location and value of the minimum in a vec with VecMin, But how to know the rank of CPU which holds the minimum values. > This rank should be known in the whole communicator. You can call VecGetOwnershipRanges() and then perform bisection on the resulting array to determine which process contains the vector location returned by VecMin(). Barry > > Regards > > > > From C.Klaij at marin.nl Fri Mar 24 03:05:27 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Fri, 24 Mar 2017 08:05:27 +0000 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: <5fabe67b-2375-1bf7-ee28-2e3424b31c1a@imperial.ac.uk> References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl> <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> <1490283421032.72098@marin.nl>, <5fabe67b-2375-1bf7-ee28-2e3424b31c1a@imperial.ac.uk> Message-ID: <1490342727455.65990@marin.nl> Lawrence, I think you mean "-fieldsplit_1_mat_null_space_test"? This doesn't return any info, should it? Anyway, I've added a "call MatNullSpaceTest" to the code which returns "true" for the null space of A11. I also tried to run with "-fieldsplit_1_ksp_constant_null_space" so that the null space is only attached to S (and not to A11). Unfortunately, the behaviour is still the same: convergence in the preconditioned norm only. Chris dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl MARIN news: http://www.marin.nl/web/News/News-items/Trim-wedge-optimization-with-viscous-free-surface-computations-1.htm ________________________________________ From: Lawrence Mitchell Sent: Thursday, March 23, 2017 4:52 PM To: Klaij, Christiaan; Matthew Knepley Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] left and right preconditioning with a constant null space On 23/03/17 15:37, Klaij, Christiaan wrote: > Yes, that's clearer, thanks! I do have is0 and is1 so I can try > PetscObjectCompose and let you know. > > Note though that the viewer reports that both S and A11 have a > null space attached... My matrix is a matnest and I've attached a > null space to A11, so the latter works as expected. But is the viewer > wrong for S? No, I think this is a consequence of using a matnest and attaching a nullspace to A11. In that case you sort of "can" set a nullspace on the submatrix returned in MatCreateSubMatrix(Amat, is1, is1), because you just get a reference. But if you switched to AIJ then you would no longer get this. So it happens that the nullspace you set on A11 /is/ transferred over to S, but this is luck, rather than design. So maybe there is something else wrong. Perhaps you can run with -fieldsplit_1_ksp_test_null_space to check the nullspace matches correctly? 
Lawrence From C.Klaij at marin.nl Fri Mar 24 07:34:32 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Fri, 24 Mar 2017 12:34:32 +0000 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: <1490342727455.65990@marin.nl> References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl> <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> <1490283421032.72098@marin.nl>, <5fabe67b-2375-1bf7-ee28-2e3424b31c1a@imperial.ac.uk>, <1490342727455.65990@marin.nl> Message-ID: <1490358872715.1@marin.nl> I've also loaded the four blocks into matlab, computed Sp = A11 - A10 inv(diag(A00)) A01 and confirmed that Sp has indeed a constant null space. Chris dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl MARIN news: http://www.marin.nl/web/News/News-items/Comfort-and-Safety-at-Sea-March-29-Rotterdam.htm ________________________________________ From: Klaij, Christiaan Sent: Friday, March 24, 2017 9:05 AM To: Lawrence Mitchell; Matthew Knepley Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] left and right preconditioning with a constant null space Lawrence, I think you mean "-fieldsplit_1_mat_null_space_test"? This doesn't return any info, should it? Anyway, I've added a "call MatNullSpaceTest" to the code which returns "true" for the null space of A11. I also tried to run with "-fieldsplit_1_ksp_constant_null_space" so that the null space is only attached to S (and not to A11). Unfortunately, the behaviour is still the same: convergence in the preconditioned norm only. Chris ________________________________________ From: Lawrence Mitchell Sent: Thursday, March 23, 2017 4:52 PM To: Klaij, Christiaan; Matthew Knepley Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] left and right preconditioning with a constant null space On 23/03/17 15:37, Klaij, Christiaan wrote: > Yes, that's clearer, thanks! I do have is0 and is1 so I can try > PetscObjectCompose and let you know. > > Note though that the viewer reports that both S and A11 have a > null space attached... My matrix is a matnest and I've attached a > null space to A11, so the latter works as expected. But is the viewer > wrong for S? No, I think this is a consequence of using a matnest and attaching a nullspace to A11. In that case you sort of "can" set a nullspace on the submatrix returned in MatCreateSubMatrix(Amat, is1, is1), because you just get a reference. But if you switched to AIJ then you would no longer get this. So it happens that the nullspace you set on A11 /is/ transferred over to S, but this is luck, rather than design. So maybe there is something else wrong. Perhaps you can run with -fieldsplit_1_ksp_test_null_space to check the nullspace matches correctly? Lawrence From alexandre.this at gmail.com Fri Mar 24 08:56:34 2017 From: alexandre.this at gmail.com (alexandre this) Date: Fri, 24 Mar 2017 14:56:34 +0100 Subject: [petsc-users] Very small right hand side and KSP_DIVERGED_DTOL Message-ID: Dear all, I'm in the configuration where I need to solve a linear system Ax = b where b is very small although not equal to zero. It appears that, in this configuration, the initial residual of the first iteration of the solver is completely out of reach and the KSP_DIVERGED_DTOL is raised. 
In particular, when using the "-ksp_monitor_true_residual flag", I get the following : 0 KSP preconditioned resid norm 2.907111674781e+00 true resid norm 1.955854211540e+02 ||r(i)||/||b|| 1.765818923254e+09 What is the best course of action in this case ? Best, Alexandre -- Alexandre This Doctorant CIFRE (Philips Healthcare - INRIA) : Fusion Image / Mod?les Num?riques pour la quantification de la s?v?rit? de la r?gurgitation mitrale Master 2 Recherche Math?matiques, Vision, Apprentissage - ENS Cachan Ing?nieur sp?cialisation Sant?&Technologie - ECE Paris Bachelor of Science : Informatics - Aalborg University, Danemark tel : 06.32.57.12.44 -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Fri Mar 24 10:11:39 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Fri, 24 Mar 2017 15:11:39 +0000 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: <1490358872715.1@marin.nl> References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl> <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> <1490283421032.72098@marin.nl>, <5fabe67b-2375-1bf7-ee28-2e3424b31c1a@imperial.ac.uk>, <1490342727455.65990@marin.nl>, <1490358872715.1@marin.nl> Message-ID: <1490368299379.90828@marin.nl> I've written a small PETSc program that loads the four blocks, constructs Sp, attaches the null space and solves with a random rhs vector. This small program replicates the same behaviour as the real code: convergence in the preconditioned norm, stagnation in the unpreconditioned norm. But when I add a call to remove the null space from the rhs vector ("MatNullSpaceRemove"), I do get convergence in both norms! Clearly, the real code must somehow produce an inconsistent rhs vector. So the problem is indeed somewhere else and not in PCFieldSplit. Chris dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl MARIN news: http://www.marin.nl/web/News/News-items/Meet-us-again-at-the-OTC-2017.htm ________________________________________ From: Klaij, Christiaan Sent: Friday, March 24, 2017 1:34 PM To: Lawrence Mitchell; Matthew Knepley Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] left and right preconditioning with a constant null space I've also loaded the four blocks into matlab, computed Sp = A11 - A10 inv(diag(A00)) A01 and confirmed that Sp has indeed a constant null space. Chris ________________________________________ From: Klaij, Christiaan Sent: Friday, March 24, 2017 9:05 AM To: Lawrence Mitchell; Matthew Knepley Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] left and right preconditioning with a constant null space Lawrence, I think you mean "-fieldsplit_1_mat_null_space_test"? This doesn't return any info, should it? Anyway, I've added a "call MatNullSpaceTest" to the code which returns "true" for the null space of A11. I also tried to run with "-fieldsplit_1_ksp_constant_null_space" so that the null space is only attached to S (and not to A11). Unfortunately, the behaviour is still the same: convergence in the preconditioned norm only. 
Chris ________________________________________ From: Lawrence Mitchell Sent: Thursday, March 23, 2017 4:52 PM To: Klaij, Christiaan; Matthew Knepley Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] left and right preconditioning with a constant null space On 23/03/17 15:37, Klaij, Christiaan wrote: > Yes, that's clearer, thanks! I do have is0 and is1 so I can try > PetscObjectCompose and let you know. > > Note though that the viewer reports that both S and A11 have a > null space attached... My matrix is a matnest and I've attached a > null space to A11, so the latter works as expected. But is the viewer > wrong for S? No, I think this is a consequence of using a matnest and attaching a nullspace to A11. In that case you sort of "can" set a nullspace on the submatrix returned in MatCreateSubMatrix(Amat, is1, is1), because you just get a reference. But if you switched to AIJ then you would no longer get this. So it happens that the nullspace you set on A11 /is/ transferred over to S, but this is luck, rather than design. So maybe there is something else wrong. Perhaps you can run with -fieldsplit_1_ksp_test_null_space to check the nullspace matches correctly? Lawrence From lawrence.mitchell at imperial.ac.uk Fri Mar 24 10:17:55 2017 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Fri, 24 Mar 2017 15:17:55 +0000 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: <1490368299379.90828@marin.nl> References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl> <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> <1490283421032.72098@marin.nl> <5fabe67b-2375-1bf7-ee28-2e3424b31c1a@imperial.ac.uk> <1490342727455.65990@marin.nl> <1490358872715.1@marin.nl> <1490368299379.90828@marin.nl> Message-ID: > On 24 Mar 2017, at 15:11, Klaij, Christiaan wrote: > > I've written a small PETSc program that loads the four blocks, > constructs Sp, attaches the null space and solves with a random > rhs vector. > > This small program replicates the same behaviour as the real > code: convergence in the preconditioned norm, stagnation in the > unpreconditioned norm. > > But when I add a call to remove the null space from the rhs > vector ("MatNullSpaceRemove"), I do get convergence in both > norms! Clearly, the real code must somehow produce an > inconsistent rhs vector. So the problem is indeed somewhere else > and not in PCFieldSplit. OK, that makes sense. If the right hand side is not consistent then you won't converge appropriately. You can ask PETSc to remove the null space from the right hand side, but you must call MatSetTransposeNullSpace as well. In the general case, the left and right null spaces are not the same... See the documentation here: http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MatSetTransposeNullSpace.html#MatSetTransposeNullSpace and http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MatSetNullSpace.html#MatSetNullSpace for more details on exactly what is going on. 
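In code, what Lawrence describes might look like the sketch below (an illustration under the assumption that the constant vector spans both the null space and the transpose null space, which holds when the Schur-type operator Sp is symmetric; Sp and the right-hand side b are assumed to exist and error checking is trimmed):

#include <petscmat.h>

/* Attach a constant null space to Sp and make b consistent before the solve. */
static PetscErrorCode AttachConstantNullSpace(Mat Sp,Vec b)
{
  MatNullSpace   nsp;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatNullSpaceCreate(PetscObjectComm((PetscObject)Sp),PETSC_TRUE,0,NULL,&nsp);CHKERRQ(ierr);
  ierr = MatSetNullSpace(Sp,nsp);CHKERRQ(ierr);            /* null space of Sp */
  ierr = MatSetTransposeNullSpace(Sp,nsp);CHKERRQ(ierr);   /* lets the linear solver project the right-hand side */
  ierr = MatNullSpaceRemove(nsp,b);CHKERRQ(ierr);          /* or project b by hand, as in the small test program above */
  ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With the transpose null space attached, KSPSolve can make the right-hand side consistent itself; the explicit MatNullSpaceRemove mirrors what the small test program above did by hand.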
Lawrence From knepley at gmail.com Fri Mar 24 11:05:42 2017 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 24 Mar 2017 11:05:42 -0500 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl> <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> <1490283421032.72098@marin.nl> <5fabe67b-2375-1bf7-ee28-2e3424b31c1a@imperial.ac.uk> <1490342727455.65990@marin.nl> <1490358872715.1@marin.nl> <1490368299379.90828@marin.nl> Message-ID: On Fri, Mar 24, 2017 at 10:17 AM, Lawrence Mitchell < lawrence.mitchell at imperial.ac.uk> wrote: > > > On 24 Mar 2017, at 15:11, Klaij, Christiaan wrote: > > > > I've written a small PETSc program that loads the four blocks, > > constructs Sp, attaches the null space and solves with a random > > rhs vector. > > > > This small program replicates the same behaviour as the real > > code: convergence in the preconditioned norm, stagnation in the > > unpreconditioned norm. > > > > But when I add a call to remove the null space from the rhs > > vector ("MatNullSpaceRemove"), I do get convergence in both > > norms! Clearly, the real code must somehow produce an > > inconsistent rhs vector. So the problem is indeed somewhere else > > and not in PCFieldSplit. > > OK, that makes sense. If the right hand side is not consistent then you > won't converge appropriately. You can ask PETSc to remove the null space > from the right hand side, but you must call MatSetTransposeNullSpace as > well. In the general case, the left and right null spaces are not the > same... > > See the documentation here: > > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/ > Mat/MatSetTransposeNullSpace.html#MatSetTransposeNullSpace > > and > > http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/ > Mat/MatSetNullSpace.html#MatSetNullSpace > > for more details on exactly what is going on. Ah, Lawrence is right. I had not considered passing in an inconsistent rhs. We could have FS propagate the transpose nullspace as well (we don't do that now). Matt > > Lawrence -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Mar 24 11:17:04 2017 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 24 Mar 2017 11:17:04 -0500 Subject: [petsc-users] Very small right hand side and KSP_DIVERGED_DTOL In-Reply-To: References: Message-ID: On Fri, Mar 24, 2017 at 8:56 AM, alexandre this wrote: > Dear all, > > I'm in the configuration where I need to solve a linear system Ax = b > where b is very small although not equal to zero. > > It appears that, in this configuration, the initial residual of the first > iteration of the solver is completely out of reach and the > KSP_DIVERGED_DTOL is raised. > For any convergence question, we need the output from -ksp_monitor_true_residual -ksp_converged_reason -ksp_view > In particular, when using the "-ksp_monitor_true_residual flag", I get the > following : > 0 KSP preconditioned resid norm 2.907111674781e+00 true resid norm > 1.955854211540e+02 ||r(i)||/||b|| 1.765818923254e+09 > We need to see the error, not just the first line. You can always raise the tolerance, but divergence means that the norm is increasing. Matt > What is the best course of action in this case ? 
> > Best, > Alexandre > > > -- > Alexandre This > > Doctorant CIFRE (Philips Healthcare - INRIA) : Fusion Image / Mod?les > Num?riques pour la quantification de la s?v?rit? de la r?gurgitation m > itrale > > Master 2 Recherche Math?matiques, Vision, Apprentissage - ENS Cachan > Ing?nieur sp?cialisation Sant?&Technologie - ECE Paris > Bachelor of Science : Informatics - Aalborg University, Danemark > > tel : 06.32.57.12.44 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Mar 24 19:29:38 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 24 Mar 2017 19:29:38 -0500 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: <1490368299379.90828@marin.nl> References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl> <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> <1490283421032.72098@marin.nl> <5fabe67b-2375-1bf7-ee28-2e3424b31c1a@imperial.ac.uk> <1490342727455.65990@marin.nl> <1490358872715.1@marin.nl> <1490368299379.90828@marin.nl> Message-ID: <2BE9B65A-4F9A-4F4F-9764-79EA73BA767D@mcs.anl.gov> > On Mar 24, 2017, at 10:11 AM, Klaij, Christiaan wrote: > > I've written a small PETSc program that loads the four blocks, > constructs Sp, attaches the null space and solves with a random > rhs vector. > > This small program replicates the same behaviour as the real > code: convergence in the preconditioned norm, stagnation in the > unpreconditioned norm. > > But when I add a call to remove the null space from the rhs > vector ("MatNullSpaceRemove"), Are you removing the null space from the original full right hand side or inside the solver for the Schur complement problem? Note that if instead of using PCFIELDSPLIT you use some other simpler PC you should also see bad convergence, do you? Even if you use -pc_type svd you should see bad convergence? > I do get convergence in both > norms! Clearly, the real code must somehow produce an > inconsistent rhs vector. So the problem is indeed somewhere else > and not in PCFieldSplit. > > Chris > > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl > > MARIN news: http://www.marin.nl/web/News/News-items/Meet-us-again-at-the-OTC-2017.htm > > ________________________________________ > From: Klaij, Christiaan > Sent: Friday, March 24, 2017 1:34 PM > To: Lawrence Mitchell; Matthew Knepley > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] left and right preconditioning with a constant null space > > I've also loaded the four blocks into matlab, computed > > Sp = A11 - A10 inv(diag(A00)) A01 > > and confirmed that Sp has indeed a constant null space. > > Chris > ________________________________________ > From: Klaij, Christiaan > Sent: Friday, March 24, 2017 9:05 AM > To: Lawrence Mitchell; Matthew Knepley > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] left and right preconditioning with a constant null space > > Lawrence, > > I think you mean "-fieldsplit_1_mat_null_space_test"? This > doesn't return any info, should it? Anyway, I've added a "call > MatNullSpaceTest" to the code which returns "true" for the null > space of A11. 
> > I also tried to run with "-fieldsplit_1_ksp_constant_null_space" > so that the null space is only attached to S (and not to > A11). Unfortunately, the behaviour is still the same: convergence > in the preconditioned norm only. > > Chris > ________________________________________ > From: Lawrence Mitchell > Sent: Thursday, March 23, 2017 4:52 PM > To: Klaij, Christiaan; Matthew Knepley > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] left and right preconditioning with a constant null space > > On 23/03/17 15:37, Klaij, Christiaan wrote: >> Yes, that's clearer, thanks! I do have is0 and is1 so I can try >> PetscObjectCompose and let you know. >> >> Note though that the viewer reports that both S and A11 have a >> null space attached... My matrix is a matnest and I've attached a >> null space to A11, so the latter works as expected. But is the viewer >> wrong for S? > > No, I think this is a consequence of using a matnest and attaching a > nullspace to A11. In that case you sort of "can" set a nullspace on > the submatrix returned in MatCreateSubMatrix(Amat, is1, is1), because > you just get a reference. But if you switched to AIJ then you would > no longer get this. > > So it happens that the nullspace you set on A11 /is/ transferred over > to S, but this is luck, rather than design. > > So maybe there is something else wrong. Perhaps you can run with > -fieldsplit_1_ksp_test_null_space to check the nullspace matches > correctly? > > Lawrence > From bsmith at mcs.anl.gov Fri Mar 24 19:52:04 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 24 Mar 2017 19:52:04 -0500 Subject: [petsc-users] Very small right hand side and KSP_DIVERGED_DTOL In-Reply-To: References: Message-ID: > On Mar 24, 2017, at 8:56 AM, alexandre this wrote: > > Dear all, > > I'm in the configuration where I need to solve a linear system Ax = b where b is very small although not equal to zero. > > It appears that, in this configuration, the initial residual of the first iteration of the solver is completely out of reach and the KSP_DIVERGED_DTOL is raised. > > In particular, when using the "-ksp_monitor_true_residual flag", I get the following : > 0 KSP preconditioned resid norm 2.907111674781e+00 true resid norm 1.955854211540e+02 ||r(i)||/||b|| 1.765818923254e+09 Yeah this is a glitch in the divergence checking. A bad initial guess with a nearly zero right hand side can trigger this. There is special code in KSPConvergedDefault() when the right hand side is identical to zero if (!snorm) { ierr = PetscInfo(ksp,"Special case, user has provided nonzero initial guess and zero RHS\n");CHKERRQ(ierr); snorm = rnorm; } but not for "small" right hand side. You can use something like -ksp_divtol 1.e30 ( or bigger) to turn off the divtol test. Do any PETSc users or developers know a more systematic way to handle this issue with small right hand side and bad initial guess? Barry > > What is the best course of action in this case ? > > Best, > Alexandre > > > -- > Alexandre This > > Doctorant CIFRE (Philips Healthcare - INRIA) : Fusion Image / Mod?les Num?riques pour la quantification de la s?v?rit? 
de la r?gurgitation mitrale > > Master 2 Recherche Math?matiques, Vision, Apprentissage - ENS Cachan > Ing?nieur sp?cialisation Sant?&Technologie - ECE Paris > Bachelor of Science : Informatics - Aalborg University, Danemark > > tel : 06.32.57.12.44 From dalcinl at gmail.com Sat Mar 25 07:38:39 2017 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Sat, 25 Mar 2017 15:38:39 +0300 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: <93F249E6-656D-429A-96D2-E8831B406334@mcs.anl.gov> References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <93F249E6-656D-429A-96D2-E8831B406334@mcs.anl.gov> Message-ID: On 22 March 2017 at 20:29, Barry Smith wrote: > > Lisandro, > > We've had a couple questions similar to this with f2py; is there a way > we could add to the PETSc/SLEPc makefile rules something to allow people to > trivially use f2py without having to make their own (often incorrect) > manual command lines? > > Thanks > > Barry, it is quite hard and hacky to get f2py working in the general case. I think the email from Gaetan in this thread proves my point. IMHO, it is easier to write a small Fortran source exposing the API to call using ISO_C_BINDINGS, then wrap that code with the more traditional C-based "static" tools (SWIG, Cython) or even "dynamically" with ctypes or cffi (which use dlopen'ing). -- Lisandro Dalcin ============ Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://ecrc.kaust.edu.sa/ 4700 King Abdullah University of Science and Technology al-Khawarizmi Bldg (Bldg 1), Office # 0109 Thuwal 23955-6900, Kingdom of Saudi Arabia http://www.kaust.edu.sa Office Phone: +966 12 808-0459 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sat Mar 25 12:31:17 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 25 Mar 2017 12:31:17 -0500 Subject: [petsc-users] Very small right hand side and KSP_DIVERGED_DTOL In-Reply-To: References: Message-ID: <048A86A3-9430-451A-87F3-F54F2B9D1482@mcs.anl.gov> > On Mar 25, 2017, at 9:11 AM, alexandre this wrote: > > This (the code paste that you provided) is exactly what I was thinking about/refering to when I posted my question to this mailing list. > > I changed the tolerance and the solver worked fine but it is definitely a bandage that is going to heal only the symptom. It will also disable the tolerance test in the later iterations while this test is here to prevent divergence of the results, right ? The divergence test is very very rarely needed in practice with Krylov methods; it is only there for when preconditioners "go crazy" because they have a bug in them. Though you are right the fix is "a bandage" you can confidently use it without concerns. > > Is there an option to disable the tolerance test only in the initial check (which I did not find as of yet) ? No. > > Best, > Alexandre > > > > 2017-03-25 1:52 GMT+01:00 Barry Smith : > > > On Mar 24, 2017, at 8:56 AM, alexandre this wrote: > > > > Dear all, > > > > I'm in the configuration where I need to solve a linear system Ax = b where b is very small although not equal to zero. > > > > It appears that, in this configuration, the initial residual of the first iteration of the solver is completely out of reach and the KSP_DIVERGED_DTOL is raised. 
> > > > In particular, when using the "-ksp_monitor_true_residual flag", I get the following : > > 0 KSP preconditioned resid norm 2.907111674781e+00 true resid norm 1.955854211540e+02 ||r(i)||/||b|| 1.765818923254e+09 > > Yeah this is a glitch in the divergence checking. A bad initial guess with a nearly zero right hand side can trigger this. There is special code in KSPConvergedDefault() when the right hand side is identical to zero > > if (!snorm) { > ierr = PetscInfo(ksp,"Special case, user has provided nonzero initial guess and zero RHS\n");CHKERRQ(ierr); > snorm = rnorm; > } > > but not for "small" right hand side. > > You can use something like -ksp_divtol 1.e30 ( or bigger) to turn off the divtol test. > > Do any PETSc users or developers know a more systematic way to handle this issue with small right hand side and bad initial guess? > > > Barry > > > > > > > What is the best course of action in this case ? > > > > Best, > > Alexandre From bodhi91 at iastate.edu Sat Mar 25 20:52:31 2017 From: bodhi91 at iastate.edu (Bodhisatta Pramanik) Date: Sat, 25 Mar 2017 20:52:31 -0500 Subject: [petsc-users] Slepc: Computing first 4 smallest eigenvalues and eigenvectors of a large graph Laplacian Message-ID: Hi, I apologize for the slepc question. I could not find any user lists so I'm hoping someone on here might be able to offer some guidance. *Problem Definition:* I am working on a graph partitioning problem. I have the laplacian of a large graph(500,000 nodes) and am interested in extracting its global information in order to find good partitioning. An approach is to compute the first few(4-5) eigenvalues and use that information to formulate the partition algorithm. I am leveraging the EPS solvers of the Slepc library. It appears that the Jacobi-davidson eigen solver gives me the eigenvalues in the shortest period of time compared to others (Krylov-Schur, Rayleigh quotient, Lanczos, etc). I use this eigensolver with the conjugate gradient linear solver and the block-jacobi preconditioner. So this is what I am basically passing through the command line: ./RunPart -eps_type jd -eps_nev 4 -st_ksp_type cg -st_ksp_rtol 0.001 -eps_tol 0.001 -st_pc_type bjacobi -eps_smallest_real *Question:* The time it takes to compute the first 4-5 eigenvectors of a matrix of size (200k) is near about 60 seconds. CPU config: Intel Xeon 2GHz. I am using a single processor to run my code. Is there any way I can gain major speedup than what I am getting? Is it possible to obtain the eigenvalues inside 10-15 seconds of such huge matrices even if I do not use multiple processor?? Can someone provide me with some valuable guidance?? *Thanks,* Bodhi -------------- next part -------------- An HTML attachment was scrubbed... URL: From bhatiamanav at gmail.com Sat Mar 25 22:17:17 2017 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Sat, 25 Mar 2017 22:17:17 -0500 Subject: [petsc-users] approximating null space of matrix Message-ID: Hi, I am working on some continuation problems and need to approximate the null space of a matrix for bifurcation and branch switching. Are there algorithms available in Petsc to do this? Can Slepc be used to calculate the null space of a matrix? I guess the eigenvectors corresponding to zero eigenvalues would serve this purpose? Literature is pointing me in the direction of more elaborate Moore-Spence algorithms. I am not sure why an eigensolver cannot be used for this purpose. Any guidance would be greatly appreciated. 
Thanks, Manav From bsmith at mcs.anl.gov Sat Mar 25 22:22:51 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 25 Mar 2017 22:22:51 -0500 Subject: [petsc-users] Slepc: Computing first 4 smallest eigenvalues and eigenvectors of a large graph Laplacian In-Reply-To: References: Message-ID: <7A14846A-CAFA-4EFF-AC56-2326DA44D770@mcs.anl.gov> > On Mar 25, 2017, at 8:52 PM, Bodhisatta Pramanik wrote: > > Hi, > > I apologize for the slepc question. I could not find any user lists so I'm hoping someone on here might be able to offer some guidance. > > Problem Definition: > I am working on a graph partitioning problem. I have the laplacian of a large graph(500,000 nodes) and am interested in extracting its global information in order to find good partitioning. An approach is to compute the first few(4-5) eigenvalues and use that information to formulate the partition algorithm. > > I am leveraging the EPS solvers of the Slepc library. It appears that the Jacobi-davidson eigen solver gives me the eigenvalues in the shortest period of time compared to others (Krylov-Schur, Rayleigh quotient, Lanczos, etc). I use this eigensolver with the conjugate gradient linear solver and the block-jacobi preconditioner. So this is what I am basically passing through the command line: > > ./RunPart -eps_type jd -eps_nev 4 -st_ksp_type cg -st_ksp_rtol 0.001 -eps_tol 0.001 -st_pc_type bjacobi -eps_smallest_real Try -st_pc_type gamg One one process default bjacobi results in ILU which is not a great preconditioner. Also try -st_pc_type sor > > Question: > The time it takes to compute the first 4-5 eigenvectors of a matrix of size (200k) is near about 60 seconds. CPU config: Intel Xeon 2GHz. I am using a single processor to run my code. Is there any way I can gain major speedup than what I am getting? Run with -log_view to see where the computation is taking the most time. > > Is it possible to obtain the eigenvalues inside 10-15 seconds of such huge matrices even if I do not use multiple processor?? > > Can someone provide me with some valuable guidance?? Since the matrix is symmetric are you using the sbaij format instead of AIJ? > > Thanks, > Bodhi From bodhi91 at iastate.edu Sat Mar 25 22:48:01 2017 From: bodhi91 at iastate.edu (Bodhisatta Pramanik) Date: Sat, 25 Mar 2017 22:48:01 -0500 Subject: [petsc-users] Slepc: Computing first 4 smallest eigenvalues and eigenvectors of a large graph Laplacian In-Reply-To: <7A14846A-CAFA-4EFF-AC56-2326DA44D770@mcs.anl.gov> References: <7A14846A-CAFA-4EFF-AC56-2326DA44D770@mcs.anl.gov> Message-ID: Try -st_pc_type gamg: Doing this with the existing eigensolver,linear solver : ./RunPart -eps_type jd -eps_nev 3 -st_ksp_type cg -st_ksp_rtol 0.001 -eps_tol 0.001 -st_pc_type gamg -eps_smallest_real results in the following issues:- PETSC ERROR: Object is in wrong state PETSC ERROR: Must call EPSSolve() first: Parameter #1 Try -st_pc_type sor Doing this again with the existing eigensolver,linear solver returns me the eigenvalues but it takes more time. (106 seconds) for a Laplacian matrix of 200K size. The bjacobi was giving me the same eigenvalues but in 60 seconds. Run with -log_view to see where the computation is taking the most time: EPSSolve and KSPSolve take the most time to finish computation. Since the matrix is symmetric are you using the sbaij format instead of AIJ? I am using the sbaij format but with block size 1. I guess that is equivalent to the AIJ format itself. 
Thanks, Bodhi On Sat, Mar 25, 2017 at 10:22 PM, Barry Smith wrote: > > > On Mar 25, 2017, at 8:52 PM, Bodhisatta Pramanik > wrote: > > > > Hi, > > > > I apologize for the slepc question. I could not find any user lists so > I'm hoping someone on here might be able to offer some guidance. > > > > Problem Definition: > > I am working on a graph partitioning problem. I have the laplacian of a > large graph(500,000 nodes) and am interested in extracting its global > information in order to find good partitioning. An approach is to compute > the first few(4-5) eigenvalues and use that information to formulate the > partition algorithm. > > > > I am leveraging the EPS solvers of the Slepc library. It appears that > the Jacobi-davidson eigen solver gives me the eigenvalues in the shortest > period of time compared to others (Krylov-Schur, Rayleigh quotient, > Lanczos, etc). I use this eigensolver with the conjugate gradient linear > solver and the block-jacobi preconditioner. So this is what I am basically > passing through the command line: > > > > ./RunPart -eps_type jd -eps_nev 4 -st_ksp_type cg -st_ksp_rtol 0.001 > -eps_tol 0.001 -st_pc_type bjacobi -eps_smallest_real > > Try -st_pc_type gamg > > One one process default bjacobi results in ILU which is not a great > preconditioner. > > Also try -st_pc_type sor > > > > > Question: > > The time it takes to compute the first 4-5 eigenvectors of a matrix of > size (200k) is near about 60 seconds. CPU config: Intel Xeon 2GHz. I am > using a single processor to run my code. Is there any way I can gain major > speedup than what I am getting? > > Run with -log_view to see where the computation is taking the most time. > > > > > Is it possible to obtain the eigenvalues inside 10-15 seconds of such > huge matrices even if I do not use multiple processor?? > > > > Can someone provide me with some valuable guidance?? > > Since the matrix is symmetric are you using the sbaij format instead of > AIJ? > > > > > Thanks, > > Bodhi > > -- *Bodhisatta Pramanik,* *Graduate Student,* *Department of Electrical and Computer Engineering,* *301 Durham,* *Iowa State University,* *Ames,Iowa 50011,* bodhi91 at iastate.edu *515-735-6300* -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sat Mar 25 22:56:16 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 25 Mar 2017 22:56:16 -0500 Subject: [petsc-users] Slepc: Computing first 4 smallest eigenvalues and eigenvectors of a large graph Laplacian In-Reply-To: References: <7A14846A-CAFA-4EFF-AC56-2326DA44D770@mcs.anl.gov> Message-ID: <946E0105-5927-4BF8-A2D2-D212F71C423F@mcs.anl.gov> > On Mar 25, 2017, at 10:48 PM, Bodhisatta Pramanik wrote: > > Try -st_pc_type gamg: > > Doing this with the existing eigensolver,linear solver : ./RunPart -eps_type jd -eps_nev 3 -st_ksp_type cg -st_ksp_rtol 0.001 -eps_tol 0.001 -st_pc_type gamg -eps_smallest_real results in the following issues:- > PETSC ERROR: Object is in wrong state > PETSC ERROR: Must call EPSSolve() first: Parameter #1 I cannot explain this. A different preconditioner with nothing else changed shouldn't cause any problem like this. Send all the output in the error message. > > Try -st_pc_type sor > > Doing this again with the existing eigensolver,linear solver returns me the eigenvalues but it takes more time. (106 seconds) for a Laplacian matrix of 200K size. The bjacobi was giving me the same eigenvalues but in 60 seconds. Ok, this is reasonable. 
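For reference, the -st_ksp_type / -st_pc_type settings being compared here can also be set programmatically through the spectral transformation object. A minimal sketch, assuming the EPS object is called eps as in the code posted later in this thread; the st, ksp and pc variable names are illustrative:

ST  st;
KSP ksp;
PC  pc;
ierr = EPSGetST(eps,&st);CHKERRQ(ierr);
ierr = STGetKSP(st,&ksp);CHKERRQ(ierr);
ierr = KSPSetType(ksp,KSPCG);CHKERRQ(ierr);                                                 /* -st_ksp_type cg */
ierr = KSPSetTolerances(ksp,1.e-3,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr); /* -st_ksp_rtol 0.001 */
ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
ierr = PCSetType(pc,PCSOR);CHKERRQ(ierr);                                                   /* -st_pc_type sor; PCGAMG or PCBJACOBI can be tried the same way */

Depending on where EPSSetFromOptions() is called, command-line options may still override these hard-coded choices, which keeps the runs discussed above reproducible from the command line.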
> > > Run with -log_view to see where the computation is taking the most time: Send the output from -log_view > > EPSSolve and KSPSolve take the most time to finish computation. > > Since the matrix is symmetric are you using the sbaij format instead of AIJ? > > I am using the sbaij format but with block size 1. I guess that is equivalent to the AIJ format itself. Ok. Since it only stores half the matrix it requires less memory and because of the way it does the MatMult and MatSolve it can be a tiny bit faster. Barry > > > Thanks, > Bodhi > > > > > On Sat, Mar 25, 2017 at 10:22 PM, Barry Smith wrote: > > > On Mar 25, 2017, at 8:52 PM, Bodhisatta Pramanik wrote: > > > > Hi, > > > > I apologize for the slepc question. I could not find any user lists so I'm hoping someone on here might be able to offer some guidance. > > > > Problem Definition: > > I am working on a graph partitioning problem. I have the laplacian of a large graph(500,000 nodes) and am interested in extracting its global information in order to find good partitioning. An approach is to compute the first few(4-5) eigenvalues and use that information to formulate the partition algorithm. > > > > I am leveraging the EPS solvers of the Slepc library. It appears that the Jacobi-davidson eigen solver gives me the eigenvalues in the shortest period of time compared to others (Krylov-Schur, Rayleigh quotient, Lanczos, etc). I use this eigensolver with the conjugate gradient linear solver and the block-jacobi preconditioner. So this is what I am basically passing through the command line: > > > > ./RunPart -eps_type jd -eps_nev 4 -st_ksp_type cg -st_ksp_rtol 0.001 -eps_tol 0.001 -st_pc_type bjacobi -eps_smallest_real > > Try -st_pc_type gamg > > One one process default bjacobi results in ILU which is not a great preconditioner. > > Also try -st_pc_type sor > > > > > Question: > > The time it takes to compute the first 4-5 eigenvectors of a matrix of size (200k) is near about 60 seconds. CPU config: Intel Xeon 2GHz. I am using a single processor to run my code. Is there any way I can gain major speedup than what I am getting? > > Run with -log_view to see where the computation is taking the most time. > > > > > Is it possible to obtain the eigenvalues inside 10-15 seconds of such huge matrices even if I do not use multiple processor?? > > > > Can someone provide me with some valuable guidance?? > > Since the matrix is symmetric are you using the sbaij format instead of AIJ? > > > > > Thanks, > > Bodhi > > > > > -- > Bodhisatta Pramanik, > Graduate Student, > Department of Electrical and Computer Engineering, > 301 Durham, > Iowa State University, > Ames,Iowa 50011, > bodhi91 at iastate.edu > 515-735-6300 From bodhi91 at iastate.edu Sat Mar 25 23:08:18 2017 From: bodhi91 at iastate.edu (Bodhisatta Pramanik) Date: Sat, 25 Mar 2017 23:08:18 -0500 Subject: [petsc-users] Slepc: Computing first 4 smallest eigenvalues and eigenvectors of a large graph Laplacian In-Reply-To: <946E0105-5927-4BF8-A2D2-D212F71C423F@mcs.anl.gov> References: <7A14846A-CAFA-4EFF-AC56-2326DA44D770@mcs.anl.gov> <946E0105-5927-4BF8-A2D2-D212F71C423F@mcs.anl.gov> Message-ID: Send all the output in the error message: [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Object is in wrong state [0]PETSC ERROR: Must call EPSSolve() first: Parameter #1 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017 [0]PETSC ERROR: ./RunPart on a arch-linux2-c-debug named research-5.ece.iastate.edu by bodhi91 Sat Mar 25 22:33:56 2017 [0]PETSC ERROR: Configure options --with-cc=gcc --with-css=g++ -with-fc=gfortran --download-fblaslapack --download-mpich [0]PETSC ERROR: #10468 EPSComputeError() line 643 in /tmp/Bodhi/slepc-3.7.3/src/eps/interface/epssolve.c 0.000000 2.0795e-317 Send the output from -log_view: I have attached the .txt file: "Log Summary".txt Ok. Since it only stores half the matrix it requires less memory and because of the way it does the MatMult and MatSolve it can be a tiny bit faster. I do get a little improvement in speed, but nothing significant. Thanks, Bodhi On Sat, Mar 25, 2017 at 10:56 PM, Barry Smith wrote: > > > On Mar 25, 2017, at 10:48 PM, Bodhisatta Pramanik > wrote: > > > > Try -st_pc_type gamg: > > > > Doing this with the existing eigensolver,linear solver : ./RunPart > -eps_type jd -eps_nev 3 -st_ksp_type cg -st_ksp_rtol 0.001 -eps_tol 0.001 > -st_pc_type gamg -eps_smallest_real results in the following issues:- > > PETSC ERROR: Object is in wrong state > > PETSC ERROR: Must call EPSSolve() first: Parameter #1 > > I cannot explain this. A different preconditioner with nothing else > changed shouldn't cause any problem like this. Send all the output in the > error message. > > > > Try -st_pc_type sor > > > > Doing this again with the existing eigensolver,linear solver returns me > the eigenvalues but it takes more time. (106 seconds) for a Laplacian > matrix of 200K size. The bjacobi was giving me the same eigenvalues but in > 60 seconds. > > Ok, this is reasonable. > > > > > > > Run with -log_view to see where the computation is taking the most time: > > Send the output from -log_view > > > > EPSSolve and KSPSolve take the most time to finish computation. > > > > Since the matrix is symmetric are you using the sbaij format instead of > AIJ? > > > > I am using the sbaij format but with block size 1. I guess that is > equivalent to the AIJ format itself. > > Ok. Since it only stores half the matrix it requires less memory and > because of the way it does the MatMult and MatSolve it can be a tiny bit > faster. > > Barry > > > > > > > Thanks, > > Bodhi > > > > > > > > > > On Sat, Mar 25, 2017 at 10:22 PM, Barry Smith > wrote: > > > > > On Mar 25, 2017, at 8:52 PM, Bodhisatta Pramanik > wrote: > > > > > > Hi, > > > > > > I apologize for the slepc question. I could not find any user lists so > I'm hoping someone on here might be able to offer some guidance. > > > > > > Problem Definition: > > > I am working on a graph partitioning problem. I have the laplacian of > a large graph(500,000 nodes) and am interested in extracting its global > information in order to find good partitioning. An approach is to compute > the first few(4-5) eigenvalues and use that information to formulate the > partition algorithm. > > > > > > I am leveraging the EPS solvers of the Slepc library. It appears that > the Jacobi-davidson eigen solver gives me the eigenvalues in the shortest > period of time compared to others (Krylov-Schur, Rayleigh quotient, > Lanczos, etc). I use this eigensolver with the conjugate gradient linear > solver and the block-jacobi preconditioner. 
So this is what I am basically > passing through the command line: > > > > > > ./RunPart -eps_type jd -eps_nev 4 -st_ksp_type cg -st_ksp_rtol 0.001 > -eps_tol 0.001 -st_pc_type bjacobi -eps_smallest_real > > > > Try -st_pc_type gamg > > > > One one process default bjacobi results in ILU which is not a great > preconditioner. > > > > Also try -st_pc_type sor > > > > > > > > Question: > > > The time it takes to compute the first 4-5 eigenvectors of a matrix of > size (200k) is near about 60 seconds. CPU config: Intel Xeon 2GHz. I am > using a single processor to run my code. Is there any way I can gain major > speedup than what I am getting? > > > > Run with -log_view to see where the computation is taking the most > time. > > > > > > > > Is it possible to obtain the eigenvalues inside 10-15 seconds of such > huge matrices even if I do not use multiple processor?? > > > > > > Can someone provide me with some valuable guidance?? > > > > Since the matrix is symmetric are you using the sbaij format instead > of AIJ? > > > > > > > > Thanks, > > > Bodhi > > > > > > > > > > -- > > Bodhisatta Pramanik, > > Graduate Student, > > Department of Electrical and Computer Engineering, > > 301 Durham, > > Iowa State University, > > Ames,Iowa 50011, > > bodhi91 at iastate.edu > > 515-735-6300 > > -- *Bodhisatta Pramanik,* *Graduate Student,* *Department of Electrical and Computer Engineering,* *301 Durham,* *Iowa State University,* *Ames,Iowa 50011,* bodhi91 at iastate.edu *515-735-6300* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Log Summary Type: application/octet-stream Size: 12384 bytes Desc: not available URL: From bsmith at mcs.anl.gov Sat Mar 25 23:15:27 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 25 Mar 2017 23:15:27 -0500 Subject: [petsc-users] Slepc: Computing first 4 smallest eigenvalues and eigenvectors of a large graph Laplacian In-Reply-To: References: <7A14846A-CAFA-4EFF-AC56-2326DA44D770@mcs.anl.gov> <946E0105-5927-4BF8-A2D2-D212F71C423F@mcs.anl.gov> Message-ID: First set a new PETSC_ARCH say arch-opt and ./configure with the additional argument --with-debugging=0 then recompile PETSc with the new PETSC_ARCH and recompile your code then send the new -log_view output. Barry ########################################################## # # # WARNING!!! # # # # This code was compiled with a debugging option, # # To get timing results run ./configure # # using --with-debugging=no, the performance will # # be generally two or three times faster. # # # ########################################################## > On Mar 25, 2017, at 11:08 PM, Bodhisatta Pramanik wrote: > > > Send all the output in the error message: > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Object is in wrong state > [0]PETSC ERROR: Must call EPSSolve() first: Parameter #1 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017 > [0]PETSC ERROR: ./RunPart on a arch-linux2-c-debug named research-5.ece.iastate.edu by bodhi91 Sat Mar 25 22:33:56 2017 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-css=g++ -with-fc=gfortran --download-fblaslapack --download-mpich > [0]PETSC ERROR: #10468 EPSComputeError() line 643 in /tmp/Bodhi/slepc-3.7.3/src/eps/interface/epssolve.c > 0.000000 2.0795e-317 > > > Send the output from -log_view: > I have attached the .txt file: "Log Summary".txt > > > Ok. Since it only stores half the matrix it requires less memory and because of the way it does the MatMult and MatSolve it can be a tiny bit faster. > I do get a little improvement in speed, but nothing significant. > > Thanks, > Bodhi > > > On Sat, Mar 25, 2017 at 10:56 PM, Barry Smith wrote: > > > On Mar 25, 2017, at 10:48 PM, Bodhisatta Pramanik wrote: > > > > Try -st_pc_type gamg: > > > > Doing this with the existing eigensolver,linear solver : ./RunPart -eps_type jd -eps_nev 3 -st_ksp_type cg -st_ksp_rtol 0.001 -eps_tol 0.001 -st_pc_type gamg -eps_smallest_real results in the following issues:- > > PETSC ERROR: Object is in wrong state > > PETSC ERROR: Must call EPSSolve() first: Parameter #1 > > I cannot explain this. A different preconditioner with nothing else changed shouldn't cause any problem like this. Send all the output in the error message. > > > > Try -st_pc_type sor > > > > Doing this again with the existing eigensolver,linear solver returns me the eigenvalues but it takes more time. (106 seconds) for a Laplacian matrix of 200K size. The bjacobi was giving me the same eigenvalues but in 60 seconds. > > Ok, this is reasonable. > > > > > > > Run with -log_view to see where the computation is taking the most time: > > Send the output from -log_view > > > > EPSSolve and KSPSolve take the most time to finish computation. > > > > Since the matrix is symmetric are you using the sbaij format instead of AIJ? > > > > I am using the sbaij format but with block size 1. I guess that is equivalent to the AIJ format itself. > > Ok. Since it only stores half the matrix it requires less memory and because of the way it does the MatMult and MatSolve it can be a tiny bit faster. > > Barry > > > > > > > Thanks, > > Bodhi > > > > > > > > > > On Sat, Mar 25, 2017 at 10:22 PM, Barry Smith wrote: > > > > > On Mar 25, 2017, at 8:52 PM, Bodhisatta Pramanik wrote: > > > > > > Hi, > > > > > > I apologize for the slepc question. I could not find any user lists so I'm hoping someone on here might be able to offer some guidance. > > > > > > Problem Definition: > > > I am working on a graph partitioning problem. I have the laplacian of a large graph(500,000 nodes) and am interested in extracting its global information in order to find good partitioning. An approach is to compute the first few(4-5) eigenvalues and use that information to formulate the partition algorithm. > > > > > > I am leveraging the EPS solvers of the Slepc library. It appears that the Jacobi-davidson eigen solver gives me the eigenvalues in the shortest period of time compared to others (Krylov-Schur, Rayleigh quotient, Lanczos, etc). I use this eigensolver with the conjugate gradient linear solver and the block-jacobi preconditioner. 
So this is what I am basically passing through the command line: > > > > > > ./RunPart -eps_type jd -eps_nev 4 -st_ksp_type cg -st_ksp_rtol 0.001 -eps_tol 0.001 -st_pc_type bjacobi -eps_smallest_real > > > > Try -st_pc_type gamg > > > > One one process default bjacobi results in ILU which is not a great preconditioner. > > > > Also try -st_pc_type sor > > > > > > > > Question: > > > The time it takes to compute the first 4-5 eigenvectors of a matrix of size (200k) is near about 60 seconds. CPU config: Intel Xeon 2GHz. I am using a single processor to run my code. Is there any way I can gain major speedup than what I am getting? > > > > Run with -log_view to see where the computation is taking the most time. > > > > > > > > Is it possible to obtain the eigenvalues inside 10-15 seconds of such huge matrices even if I do not use multiple processor?? > > > > > > Can someone provide me with some valuable guidance?? > > > > Since the matrix is symmetric are you using the sbaij format instead of AIJ? > > > > > > > > Thanks, > > > Bodhi > > > > > > > > > > -- > > Bodhisatta Pramanik, > > Graduate Student, > > Department of Electrical and Computer Engineering, > > 301 Durham, > > Iowa State University, > > Ames,Iowa 50011, > > bodhi91 at iastate.edu > > 515-735-6300 > > > > > -- > Bodhisatta Pramanik, > Graduate Student, > Department of Electrical and Computer Engineering, > 301 Durham, > Iowa State University, > Ames,Iowa 50011, > bodhi91 at iastate.edu > 515-735-6300 > From bodhi91 at iastate.edu Sat Mar 25 23:42:07 2017 From: bodhi91 at iastate.edu (Bodhisatta Pramanik) Date: Sat, 25 Mar 2017 23:42:07 -0500 Subject: [petsc-users] Slepc: Computing first 4 smallest eigenvalues and eigenvectors of a large graph Laplacian In-Reply-To: References: <7A14846A-CAFA-4EFF-AC56-2326DA44D770@mcs.anl.gov> <946E0105-5927-4BF8-A2D2-D212F71C423F@mcs.anl.gov> Message-ID: Let me get back to you. Thanks a lot. On Sat, Mar 25, 2017 at 11:15 PM, Barry Smith wrote: > > First set a new PETSC_ARCH say arch-opt and ./configure with the > additional argument --with-debugging=0 then recompile PETSc with the new > PETSC_ARCH and recompile your code then send the new -log_view output. > > Barry > > > ########################################################## > # # > # WARNING!!! # > # # > # This code was compiled with a debugging option, # > # To get timing results run ./configure # > # using --with-debugging=no, the performance will # > # be generally two or three times faster. # > # # > ########################################################## > > > > On Mar 25, 2017, at 11:08 PM, Bodhisatta Pramanik > wrote: > > > > > > Send all the output in the error message: > > > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > > [0]PETSC ERROR: Object is in wrong state > > [0]PETSC ERROR: Must call EPSSolve() first: Parameter #1 > > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> > [0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017 > > [0]PETSC ERROR: ./RunPart on a arch-linux2-c-debug named > research-5.ece.iastate.edu by bodhi91 Sat Mar 25 22:33:56 2017 > > [0]PETSC ERROR: Configure options --with-cc=gcc --with-css=g++ > -with-fc=gfortran --download-fblaslapack --download-mpich > > [0]PETSC ERROR: #10468 EPSComputeError() line 643 in > /tmp/Bodhi/slepc-3.7.3/src/eps/interface/epssolve.c > > 0.000000 2.0795e-317 > > > > > > Send the output from -log_view: > > I have attached the .txt file: "Log Summary".txt > > > > > > Ok. Since it only stores half the matrix it requires less memory and > because of the way it does the MatMult and MatSolve it can be a tiny bit > faster. > > I do get a little improvement in speed, but nothing significant. > > > > Thanks, > > Bodhi > > > > > > On Sat, Mar 25, 2017 at 10:56 PM, Barry Smith > wrote: > > > > > On Mar 25, 2017, at 10:48 PM, Bodhisatta Pramanik > wrote: > > > > > > Try -st_pc_type gamg: > > > > > > Doing this with the existing eigensolver,linear solver : ./RunPart > -eps_type jd -eps_nev 3 -st_ksp_type cg -st_ksp_rtol 0.001 -eps_tol 0.001 > -st_pc_type gamg -eps_smallest_real results in the following issues:- > > > PETSC ERROR: Object is in wrong state > > > PETSC ERROR: Must call EPSSolve() first: Parameter #1 > > > > I cannot explain this. A different preconditioner with nothing else > changed shouldn't cause any problem like this. Send all the output in the > error message. > > > > > > Try -st_pc_type sor > > > > > > Doing this again with the existing eigensolver,linear solver returns > me the eigenvalues but it takes more time. (106 seconds) for a Laplacian > matrix of 200K size. The bjacobi was giving me the same eigenvalues but in > 60 seconds. > > > > Ok, this is reasonable. > > > > > > > > > > > Run with -log_view to see where the computation is taking the most > time: > > > > Send the output from -log_view > > > > > > EPSSolve and KSPSolve take the most time to finish computation. > > > > > > Since the matrix is symmetric are you using the sbaij format instead > of AIJ? > > > > > > I am using the sbaij format but with block size 1. I guess that is > equivalent to the AIJ format itself. > > > > Ok. Since it only stores half the matrix it requires less memory and > because of the way it does the MatMult and MatSolve it can be a tiny bit > faster. > > > > Barry > > > > > > > > > > > Thanks, > > > Bodhi > > > > > > > > > > > > > > > On Sat, Mar 25, 2017 at 10:22 PM, Barry Smith > wrote: > > > > > > > On Mar 25, 2017, at 8:52 PM, Bodhisatta Pramanik < > bodhi91 at iastate.edu> wrote: > > > > > > > > Hi, > > > > > > > > I apologize for the slepc question. I could not find any user lists > so I'm hoping someone on here might be able to offer some guidance. > > > > > > > > Problem Definition: > > > > I am working on a graph partitioning problem. I have the laplacian > of a large graph(500,000 nodes) and am interested in extracting its global > information in order to find good partitioning. An approach is to compute > the first few(4-5) eigenvalues and use that information to formulate the > partition algorithm. > > > > > > > > I am leveraging the EPS solvers of the Slepc library. It appears > that the Jacobi-davidson eigen solver gives me the eigenvalues in the > shortest period of time compared to others (Krylov-Schur, Rayleigh > quotient, Lanczos, etc). I use this eigensolver with the conjugate gradient > linear solver and the block-jacobi preconditioner. 
So this is what I am > basically passing through the command line: > > > > > > > > ./RunPart -eps_type jd -eps_nev 4 -st_ksp_type cg -st_ksp_rtol 0.001 > -eps_tol 0.001 -st_pc_type bjacobi -eps_smallest_real > > > > > > Try -st_pc_type gamg > > > > > > One one process default bjacobi results in ILU which is not a great > preconditioner. > > > > > > Also try -st_pc_type sor > > > > > > > > > > > Question: > > > > The time it takes to compute the first 4-5 eigenvectors of a matrix > of size (200k) is near about 60 seconds. CPU config: Intel Xeon 2GHz. I am > using a single processor to run my code. Is there any way I can gain major > speedup than what I am getting? > > > > > > Run with -log_view to see where the computation is taking the most > time. > > > > > > > > > > > Is it possible to obtain the eigenvalues inside 10-15 seconds of > such huge matrices even if I do not use multiple processor?? > > > > > > > > Can someone provide me with some valuable guidance?? > > > > > > Since the matrix is symmetric are you using the sbaij format > instead of AIJ? > > > > > > > > > > > Thanks, > > > > Bodhi > > > > > > > > > > > > > > > -- > > > Bodhisatta Pramanik, > > > Graduate Student, > > > Department of Electrical and Computer Engineering, > > > 301 Durham, > > > Iowa State University, > > > Ames,Iowa 50011, > > > bodhi91 at iastate.edu > > > 515-735-6300 > > > > > > > > > > -- > > Bodhisatta Pramanik, > > Graduate Student, > > Department of Electrical and Computer Engineering, > > 301 Durham, > > Iowa State University, > > Ames,Iowa 50011, > > bodhi91 at iastate.edu > > 515-735-6300 > > > > -- *Bodhisatta Pramanik,* *Graduate Student,* *Department of Electrical and Computer Engineering,* *301 Durham,* *Iowa State University,* *Ames,Iowa 50011,* bodhi91 at iastate.edu *515-735-6300* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Sun Mar 26 02:41:16 2017 From: jroman at dsic.upv.es (Jose E. Roman) Date: Sun, 26 Mar 2017 09:41:16 +0200 Subject: [petsc-users] Slepc: Computing first 4 smallest eigenvalues and eigenvectors of a large graph Laplacian In-Reply-To: References: <7A14846A-CAFA-4EFF-AC56-2326DA44D770@mcs.anl.gov> <946E0105-5927-4BF8-A2D2-D212F71C423F@mcs.anl.gov> Message-ID: > El 26 mar 2017, a las 6:08, Bodhisatta Pramanik escribi?: > > Send all the output in the error message: > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Object is in wrong state > [0]PETSC ERROR: Must call EPSSolve() first: Parameter #1 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017 > [0]PETSC ERROR: ./RunPart on a arch-linux2-c-debug named research-5.ece.iastate.edu by bodhi91 Sat Mar 25 22:33:56 2017 > [0]PETSC ERROR: Configure options --with-cc=gcc --with-css=g++ -with-fc=gfortran --download-fblaslapack --download-mpich > [0]PETSC ERROR: #10468 EPSComputeError() line 643 in /tmp/Bodhi/slepc-3.7.3/src/eps/interface/epssolve.c > 0.000000 2.0795e-317 > The error is in EPSComputeError(), not in EPSSolve(). Did you modify the state of the EPS object between EPSSolve() and EPSComputeError()? 
In graph partitioning, I would strongly recommend deflating the eigenvector corresponding to the zero eigenvalue, as is done in ex11.c: http://slepc.upv.es/documentation/current/src/eps/examples/tutorials/ex11.c.html Jose From jroman at dsic.upv.es Sun Mar 26 02:56:25 2017 From: jroman at dsic.upv.es (Jose E. Roman) Date: Sun, 26 Mar 2017 09:56:25 +0200 Subject: [petsc-users] approximating null space of matrix In-Reply-To: References: Message-ID: <15FB1F29-0821-4940-8266-551AC75AEFA4@dsic.upv.es> > El 26 mar 2017, a las 5:17, Manav Bhatia escribi?: > > Hi, > > I am working on some continuation problems and need to approximate the null space of a matrix for bifurcation and branch switching. > > Are there algorithms available in Petsc to do this? Can Slepc be used to calculate the null space of a matrix? I guess the eigenvectors corresponding to zero eigenvalues would serve this purpose? Literature is pointing me in the direction of more elaborate Moore-Spence algorithms. I am not sure why an eigensolver cannot be used for this purpose. > > Any guidance would be greatly appreciated. > > Thanks, > Manav > You can use SLEPc to compute null space vectors, but some care must be taken. You can do simple tests with ex11.c: http://slepc.upv.es/documentation/current/src/eps/examples/tutorials/ex11.c.html This example deflates the null space, but you can comment out the call to EPSSetDeflationSpace() and then the zero eigenvalue will be computed explicitly. The first comment is that the default convergence criterion is relative to the eigenvalue, which is obvioulsy bad for the zero eigenvalue. So you should switch to the absolute convergence criterion: -eps_conv_abs The above should work for a semi-definite matrix. If the matrix is indefinite then computing the null space will be more difficult. In this case, I would try computing the null space of A^2. Finally, if the size of the null space is large, then the solver might have more difficulties. If this happens, try reducing the tolerance or deactivating locking: -eps_krylovschur_locking 0 Jose From bodhi91 at iastate.edu Sun Mar 26 03:14:35 2017 From: bodhi91 at iastate.edu (Bodhisatta Pramanik) Date: Sun, 26 Mar 2017 03:14:35 -0500 Subject: [petsc-users] Slepc: Computing first 4 smallest eigenvalues and eigenvectors of a large graph Laplacian In-Reply-To: References: <7A14846A-CAFA-4EFF-AC56-2326DA44D770@mcs.anl.gov> <946E0105-5927-4BF8-A2D2-D212F71C423F@mcs.anl.gov> Message-ID: *The error is in EPSComputeError(), not in EPSSolve(). Did you modify the state of the EPS object between EPSSolve() and EPSComputeError()?* I am not modifying the eps object. I have copied a part of the code where I am computing the eigenvectors. 
EPSCreate(PETSC_COMM_WORLD,&eps); EPSSetOperators(eps,m.Lpl,NULL); EPSSetProblemType(eps,EPS_HEP); EPSSetFromOptions(eps); EPSSolve(eps); //Solving for the eigenvalues EPSGetIterationNumber(eps,&its); PetscPrintf(PETSC_COMM_WORLD,"Number of iterations of the method: %D\n",its); EPSGetType(eps,&type); PetscPrintf(PETSC_COMM_WORLD,"Solution Method: %s\n\n",type); EPSGetTolerances(eps,&get_tol,&maxit); PetscPrintf(PETSC_COMM_WORLD,"Stopping condition: tol=%.4g, maxit=%D\n",(double)get_tol,maxit); EPSGetConverged(eps,&nconv); PetscPrintf(PETSC_COMM_WORLD, "Number of converged eigenpairs: %D\n\n",nconv); if(nconv>0) { PetscPrintf(PETSC_COMM_WORLD, " k ||Ax-kx||/||kx||\n" "-------------- ----------------\n"); for(i=0;i wrote: > > > El 26 mar 2017, a las 6:08, Bodhisatta Pramanik > escribi?: > > > > Send all the output in the error message: > > > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > > [0]PETSC ERROR: Object is in wrong state > > [0]PETSC ERROR: Must call EPSSolve() first: Parameter #1 > > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > > [0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017 > > [0]PETSC ERROR: ./RunPart on a arch-linux2-c-debug named > research-5.ece.iastate.edu by bodhi91 Sat Mar 25 22:33:56 2017 > > [0]PETSC ERROR: Configure options --with-cc=gcc --with-css=g++ > -with-fc=gfortran --download-fblaslapack --download-mpich > > [0]PETSC ERROR: #10468 EPSComputeError() line 643 in > /tmp/Bodhi/slepc-3.7.3/src/eps/interface/epssolve.c > > 0.000000 2.0795e-317 > > > > The error is in EPSComputeError(), not in EPSSolve(). Did you modify the > state of the EPS object between EPSSolve() and EPSComputeError()? > > In graph partitioning, I would strongly recommend deflating the > eigenvector corresponding to the zero eigenvalue, as is done in ex11.c: > http://slepc.upv.es/documentation/current/src/eps/ > examples/tutorials/ex11.c.html > > Jose > > -- *Bodhisatta Pramanik,* *Graduate Student,* *Department of Electrical and Computer Engineering,* *301 Durham,* *Iowa State University,* *Ames,Iowa 50011,* bodhi91 at iastate.edu *515-735-6300* -------------- next part -------------- An HTML attachment was scrubbed... URL: From bodhi91 at iastate.edu Sun Mar 26 06:25:27 2017 From: bodhi91 at iastate.edu (Bodhisatta Pramanik) Date: Sun, 26 Mar 2017 06:25:27 -0500 Subject: [petsc-users] Slepc: Computing first 4 smallest eigenvalues and eigenvectors of a large graph Laplacian In-Reply-To: References: <7A14846A-CAFA-4EFF-AC56-2326DA44D770@mcs.anl.gov> <946E0105-5927-4BF8-A2D2-D212F71C423F@mcs.anl.gov> Message-ID: *First set a new PETSC_ARCH say arch-opt and ./configure with the additional argument --with-debugging=0 then recompile PETSc with the new PETSC_ARCH and recompile your code then send the new -log_view output.* I have attached the new log summary with this file. The computation time is a lot faster now. It takes around 23 seconds to compute the first 3 eigenvectors of a 200K sized matrix. Thanks, Bodhi On Sun, Mar 26, 2017 at 3:14 AM, Bodhisatta Pramanik wrote: > *The error is in EPSComputeError(), not in EPSSolve(). Did you modify the > state of the EPS object between EPSSolve() and EPSComputeError()?* > > I am not modifying the eps object. I have copied a part of the code where > I am computing the eigenvectors. 
> > EPSCreate(PETSC_COMM_WORLD,&eps); > EPSSetOperators(eps,m.Lpl,NULL); > EPSSetProblemType(eps,EPS_HEP); > EPSSetFromOptions(eps); > > EPSSolve(eps); //Solving for the eigenvalues > > EPSGetIterationNumber(eps,&its); > PetscPrintf(PETSC_COMM_WORLD,"Number of iterations of the method: > %D\n",its); > EPSGetType(eps,&type); > PetscPrintf(PETSC_COMM_WORLD,"Solution Method: %s\n\n",type); > EPSGetTolerances(eps,&get_tol,&maxit); > PetscPrintf(PETSC_COMM_WORLD,"Stopping condition: tol=%.4g, > maxit=%D\n",(double)get_tol,maxit); > EPSGetConverged(eps,&nconv); > PetscPrintf(PETSC_COMM_WORLD, "Number of converged eigenpairs: > %D\n\n",nconv); > if(nconv>0) > { > PetscPrintf(PETSC_COMM_WORLD, > " k ||Ax-kx||/||kx||\n" > "-------------- ----------------\n"); > for(i=0;i { > EPSGetEigenpair(eps,i,&kr,&ki,xr,xi); > EPSComputeError(eps,i,EPS_ERROR_RELATIVE,&error); > re = PetscRealPart(kr); > im = PetscImaginaryPart(kr); > > re = kr; > im = ki; > if(im!=0.0) { > PetscPrintf(PETSC_COMM_WORLD," %9f%+9fi > %12g\n",(double)re,(double)im,(double)error); > } > else { > PetscPrintf(PETSC_COMM_WORLD," %12f > %12g\n",(double)re,(double)error); > } > } > PetscPrintf(PETSC_COMM_WORLD,"\n"); > } > > EPSDestroy(&eps); > VecDestroy(&xr); > VecDestroy(&xi); > } > > This is what I pass through my command line: > ./RunPart -eps_type jd -eps_nev 3 -st_ksp_type cg -st_ksp_rtol 0.001 > -eps_tol 0.001 -st_pc_type bjacobi -eps_smallest_real > > > *In graph partitioning, I would strongly recommend deflating the > eigenvector corresponding to the zero eigenvalue, as is done in ex11.c:* > I have tried doing this but for some reason the eigenvalues fail > to converge. > > > > On Sun, Mar 26, 2017 at 2:41 AM, Jose E. Roman wrote: > >> >> > El 26 mar 2017, a las 6:08, Bodhisatta Pramanik >> escribi?: >> > >> > Send all the output in the error message: >> > >> > [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> > [0]PETSC ERROR: Object is in wrong state >> > [0]PETSC ERROR: Must call EPSSolve() first: Parameter #1 >> > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> > [0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017 >> > [0]PETSC ERROR: ./RunPart on a arch-linux2-c-debug named >> research-5.ece.iastate.edu by bodhi91 Sat Mar 25 22:33:56 2017 >> > [0]PETSC ERROR: Configure options --with-cc=gcc --with-css=g++ >> -with-fc=gfortran --download-fblaslapack --download-mpich >> > [0]PETSC ERROR: #10468 EPSComputeError() line 643 in >> /tmp/Bodhi/slepc-3.7.3/src/eps/interface/epssolve.c >> > 0.000000 2.0795e-317 >> > >> >> The error is in EPSComputeError(), not in EPSSolve(). Did you modify the >> state of the EPS object between EPSSolve() and EPSComputeError()? >> >> In graph partitioning, I would strongly recommend deflating the >> eigenvector corresponding to the zero eigenvalue, as is done in ex11.c: >> http://slepc.upv.es/documentation/current/src/eps/examples/ >> tutorials/ex11.c.html >> >> Jose >> >> > > > -- > *Bodhisatta Pramanik,* > *Graduate Student,* > *Department of Electrical and Computer Engineering,* > *301 Durham,* > *Iowa State University,* > *Ames,Iowa 50011,* > bodhi91 at iastate.edu > *515-735-6300 <(515)%20735-6300>* > -- *Bodhisatta Pramanik,* *Graduate Student,* *Department of Electrical and Computer Engineering,* *301 Durham,* *Iowa State University,* *Ames,Iowa 50011,* bodhi91 at iastate.edu *515-735-6300* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: New Log Summary Type: application/octet-stream Size: 11348 bytes Desc: not available URL: From bhatiamanav at gmail.com Sun Mar 26 10:37:58 2017 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Sun, 26 Mar 2017 10:37:58 -0500 Subject: [petsc-users] Interface to Trilinos LOCA framework Message-ID: <37B07AF9-8144-4E0F-BF97-037E01CA1377@gmail.com> Hi, My code currently uses Petsc for all linear/nonlinear solution functionality. I am looking to see if I can tap into Trilinos LOCA framework for bifurcation problems without having to reimplement all solver interfaces in my code. I noticed that Petsc currently is able to tap into the ML framework, so I am guessing there are interfaces available for Petsc and Trillions to work together. Would I be able to extend this to tap into LOCA? I would appreciate if someone could point me in the right direction here. Regards, Manav From bhatiamanav at gmail.com Sun Mar 26 11:25:44 2017 From: bhatiamanav at gmail.com (Manav Bhatia) Date: Sun, 26 Mar 2017 11:25:44 -0500 Subject: [petsc-users] approximating null space of matrix In-Reply-To: <15FB1F29-0821-4940-8266-551AC75AEFA4@dsic.upv.es> References: <15FB1F29-0821-4940-8266-551AC75AEFA4@dsic.upv.es> Message-ID: <63EF6BB1-ABCF-418E-B36A-18F274AC6EFD@gmail.com> Thanks Jose! What would be a good way to calculate null space of A^2? Would I need to create a shell matrix for A^2 and define matrix-vector products on this? -Manav > On Mar 26, 2017, at 2:56 AM, Jose E. Roman wrote: > > >> El 26 mar 2017, a las 5:17, Manav Bhatia escribi?: >> >> Hi, >> >> I am working on some continuation problems and need to approximate the null space of a matrix for bifurcation and branch switching. >> >> Are there algorithms available in Petsc to do this? Can Slepc be used to calculate the null space of a matrix? I guess the eigenvectors corresponding to zero eigenvalues would serve this purpose? Literature is pointing me in the direction of more elaborate Moore-Spence algorithms. I am not sure why an eigensolver cannot be used for this purpose. >> >> Any guidance would be greatly appreciated. >> >> Thanks, >> Manav >> > > You can use SLEPc to compute null space vectors, but some care must be taken. > > You can do simple tests with ex11.c: > http://slepc.upv.es/documentation/current/src/eps/examples/tutorials/ex11.c.html > This example deflates the null space, but you can comment out the call to EPSSetDeflationSpace() and then the zero eigenvalue will be computed explicitly. > > The first comment is that the default convergence criterion is relative to the eigenvalue, which is obvioulsy bad for the zero eigenvalue. So you should switch to the absolute convergence criterion: -eps_conv_abs > > The above should work for a semi-definite matrix. If the matrix is indefinite then computing the null space will be more difficult. In this case, I would try computing the null space of A^2. > > Finally, if the size of the null space is large, then the solver might have more difficulties. If this happens, try reducing the tolerance or deactivating locking: -eps_krylovschur_locking 0 > > Jose From jroman at dsic.upv.es Sun Mar 26 11:30:37 2017 From: jroman at dsic.upv.es (Jose E. 
Roman) Date: Sun, 26 Mar 2017 18:30:37 +0200 Subject: [petsc-users] approximating null space of matrix In-Reply-To: <63EF6BB1-ABCF-418E-B36A-18F274AC6EFD@gmail.com> References: <15FB1F29-0821-4940-8266-551AC75AEFA4@dsic.upv.es> <63EF6BB1-ABCF-418E-B36A-18F274AC6EFD@gmail.com> Message-ID: Yes. You can use ex24.c with target=0 http://slepc.upv.es/documentation/current/src/eps/examples/tutorials/ex24.c.html Jose > El 26 mar 2017, a las 18:25, Manav Bhatia escribi?: > > Thanks Jose! > > What would be a good way to calculate null space of A^2? Would I need to create a shell matrix for A^2 and define matrix-vector products on this? > > -Manav > > >> On Mar 26, 2017, at 2:56 AM, Jose E. Roman wrote: >> >> >>> El 26 mar 2017, a las 5:17, Manav Bhatia escribi?: >>> >>> Hi, >>> >>> I am working on some continuation problems and need to approximate the null space of a matrix for bifurcation and branch switching. >>> >>> Are there algorithms available in Petsc to do this? Can Slepc be used to calculate the null space of a matrix? I guess the eigenvectors corresponding to zero eigenvalues would serve this purpose? Literature is pointing me in the direction of more elaborate Moore-Spence algorithms. I am not sure why an eigensolver cannot be used for this purpose. >>> >>> Any guidance would be greatly appreciated. >>> >>> Thanks, >>> Manav >>> >> >> You can use SLEPc to compute null space vectors, but some care must be taken. >> >> You can do simple tests with ex11.c: >> http://slepc.upv.es/documentation/current/src/eps/examples/tutorials/ex11.c.html >> This example deflates the null space, but you can comment out the call to EPSSetDeflationSpace() and then the zero eigenvalue will be computed explicitly. >> >> The first comment is that the default convergence criterion is relative to the eigenvalue, which is obvioulsy bad for the zero eigenvalue. So you should switch to the absolute convergence criterion: -eps_conv_abs >> >> The above should work for a semi-definite matrix. If the matrix is indefinite then computing the null space will be more difficult. In this case, I would try computing the null space of A^2. >> >> Finally, if the size of the null space is large, then the solver might have more difficulties. If this happens, try reducing the tolerance or deactivating locking: -eps_krylovschur_locking 0 >> >> Jose > From bsmith at mcs.anl.gov Sun Mar 26 12:25:20 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 26 Mar 2017 12:25:20 -0500 Subject: [petsc-users] Interface to Trilinos LOCA framework In-Reply-To: <37B07AF9-8144-4E0F-BF97-037E01CA1377@gmail.com> References: <37B07AF9-8144-4E0F-BF97-037E01CA1377@gmail.com> Message-ID: > On Mar 26, 2017, at 10:37 AM, Manav Bhatia wrote: > > Hi, > > My code currently uses Petsc for all linear/nonlinear solution functionality. I am looking to see if I can tap into Trilinos LOCA framework for bifurcation problems without having to reimplement all solver interfaces in my code. I noticed that Petsc currently is able to tap into the ML framework, so I am guessing there are interfaces available for Petsc and Trillions to work together. Would I be able to extend this to tap into LOCA? > > I would appreciate if someone could point me in the right direction here. There are no explicit interfaces for LOCA/PETSc. Depending on how LOCA is written you may be able to still utilize the PETSc solvers underneath or there may not be. You would have to read the LOCA documentation to determine if it is possible. 
Barry > > Regards, > Manav > From bsmith at mcs.anl.gov Sun Mar 26 12:30:06 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 26 Mar 2017 12:30:06 -0500 Subject: [petsc-users] Slepc: Computing first 4 smallest eigenvalues and eigenvectors of a large graph Laplacian In-Reply-To: References: <7A14846A-CAFA-4EFF-AC56-2326DA44D770@mcs.anl.gov> <946E0105-5927-4BF8-A2D2-D212F71C423F@mcs.anl.gov> Message-ID: <288CD162-9242-480C-8117-FB73214934C6@mcs.anl.gov> The flop rate for the MatMult and MatSolve are much lower than I would expect. Try using the AIJ matrix instead of SBAIJ. > On Mar 26, 2017, at 6:25 AM, Bodhisatta Pramanik wrote: > > *First set a new PETSC_ARCH say arch-opt and ./configure with the additional argument --with-debugging=0 then recompile PETSc with the new PETSC_ARCH and recompile your code then send the new -log_view output.* > I have attached the new log summary with this file. The computation time is a lot faster now. It takes around 23 seconds to compute the first 3 eigenvectors of a 200K sized matrix. > > Thanks, > Bodhi > > On Sun, Mar 26, 2017 at 3:14 AM, Bodhisatta Pramanik wrote: > *The error is in EPSComputeError(), not in EPSSolve(). Did you modify the state of the EPS object between EPSSolve() and EPSComputeError()?* > > I am not modifying the eps object. I have copied a part of the code where I am computing the eigenvectors. > > EPSCreate(PETSC_COMM_WORLD,&eps); > EPSSetOperators(eps,m.Lpl,NULL); > EPSSetProblemType(eps,EPS_HEP); > EPSSetFromOptions(eps); > > EPSSolve(eps); //Solving for the eigenvalues > > EPSGetIterationNumber(eps,&its); > PetscPrintf(PETSC_COMM_WORLD,"Number of iterations of the method: %D\n",its); > EPSGetType(eps,&type); > PetscPrintf(PETSC_COMM_WORLD,"Solution Method: %s\n\n",type); > EPSGetTolerances(eps,&get_tol,&maxit); > PetscPrintf(PETSC_COMM_WORLD,"Stopping condition: tol=%.4g, maxit=%D\n",(double)get_tol,maxit); > EPSGetConverged(eps,&nconv); > PetscPrintf(PETSC_COMM_WORLD, "Number of converged eigenpairs: %D\n\n",nconv); > if(nconv>0) > { > PetscPrintf(PETSC_COMM_WORLD, > " k ||Ax-kx||/||kx||\n" > "-------------- ----------------\n"); > for(i=0;i { > EPSGetEigenpair(eps,i,&kr,&ki,xr,xi); > EPSComputeError(eps,i,EPS_ERROR_RELATIVE,&error); > re = PetscRealPart(kr); > im = PetscImaginaryPart(kr); > > re = kr; > im = ki; > if(im!=0.0) { > PetscPrintf(PETSC_COMM_WORLD," %9f%+9fi %12g\n",(double)re,(double)im,(double)error); > } > else { > PetscPrintf(PETSC_COMM_WORLD," %12f %12g\n",(double)re,(double)error); > } > } > PetscPrintf(PETSC_COMM_WORLD,"\n"); > } > > EPSDestroy(&eps); > VecDestroy(&xr); > VecDestroy(&xi); > } > > This is what I pass through my command line: > ./RunPart -eps_type jd -eps_nev 3 -st_ksp_type cg -st_ksp_rtol 0.001 -eps_tol 0.001 -st_pc_type bjacobi -eps_smallest_real > > > *In graph partitioning, I would strongly recommend deflating the eigenvector corresponding to the zero eigenvalue, as is done in ex11.c:* > I have tried doing this but for some reason the eigenvalues fail to converge. > > > > On Sun, Mar 26, 2017 at 2:41 AM, Jose E. Roman wrote: > > > El 26 mar 2017, a las 6:08, Bodhisatta Pramanik escribi?: > > > > Send all the output in the error message: > > > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > > [0]PETSC ERROR: Object is in wrong state > > [0]PETSC ERROR: Must call EPSSolve() first: Parameter #1 > > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> > [0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017 > > [0]PETSC ERROR: ./RunPart on a arch-linux2-c-debug named research-5.ece.iastate.edu by bodhi91 Sat Mar 25 22:33:56 2017 > > [0]PETSC ERROR: Configure options --with-cc=gcc --with-css=g++ -with-fc=gfortran --download-fblaslapack --download-mpich > > [0]PETSC ERROR: #10468 EPSComputeError() line 643 in /tmp/Bodhi/slepc-3.7.3/src/eps/interface/epssolve.c > > 0.000000 2.0795e-317 > > > > The error is in EPSComputeError(), not in EPSSolve(). Did you modify the state of the EPS object between EPSSolve() and EPSComputeError()? > > In graph partitioning, I would strongly recommend deflating the eigenvector corresponding to the zero eigenvalue, as is done in ex11.c: > http://slepc.upv.es/documentation/current/src/eps/examples/tutorials/ex11.c.html > > Jose > > > > > -- > Bodhisatta Pramanik, > Graduate Student, > Department of Electrical and Computer Engineering, > 301 Durham, > Iowa State University, > Ames,Iowa 50011, > bodhi91 at iastate.edu > 515-735-6300 > > > > -- > Bodhisatta Pramanik, > Graduate Student, > Department of Electrical and Computer Engineering, > 301 Durham, > Iowa State University, > Ames,Iowa 50011, > bodhi91 at iastate.edu > 515-735-6300 > From bodhi91 at iastate.edu Sun Mar 26 14:44:36 2017 From: bodhi91 at iastate.edu (Bodhisatta Pramanik) Date: Sun, 26 Mar 2017 14:44:36 -0500 Subject: [petsc-users] Slepc: Computing first 4 smallest eigenvalues and eigenvectors of a large graph Laplacian In-Reply-To: <288CD162-9242-480C-8117-FB73214934C6@mcs.anl.gov> References: <7A14846A-CAFA-4EFF-AC56-2326DA44D770@mcs.anl.gov> <946E0105-5927-4BF8-A2D2-D212F71C423F@mcs.anl.gov> <288CD162-9242-480C-8117-FB73214934C6@mcs.anl.gov> Message-ID: *The flop rate for the MatMult and MatSolve are much lower than I would expect. Try using the AIJ matrix instead of SBAIJ.* SeqAIJ: MatMult = 213 Mflop/s MatSolve = 204 Mflop/s AIJ: MatMult = 211 Mflop/s MatSolve = 198 Mflop/s SBAIJ: MatMult = 179 Mflop/s MatSolve = 167 Mflop/s Shouldn't the SBAIJ provide the best performance since it only works with the upper triangular part of the whole large sparse symmetric matrix? Thanks, Bodhi On Sun, Mar 26, 2017 at 12:30 PM, Barry Smith wrote: > > The flop rate for the MatMult and MatSolve are much lower than I would > expect. Try using the AIJ matrix instead of SBAIJ. > > > > On Mar 26, 2017, at 6:25 AM, Bodhisatta Pramanik > wrote: > > > > *First set a new PETSC_ARCH say arch-opt and ./configure with the > additional argument --with-debugging=0 then recompile PETSc with the new > PETSC_ARCH and recompile your code then send the new -log_view output.* > > I have attached the new log summary with this file. The computation > time is a lot faster now. It takes around 23 seconds to compute the first 3 > eigenvectors of a 200K sized matrix. > > > > Thanks, > > Bodhi > > > > On Sun, Mar 26, 2017 at 3:14 AM, Bodhisatta Pramanik < > bodhi91 at iastate.edu> wrote: > > *The error is in EPSComputeError(), not in EPSSolve(). Did you modify > the state of the EPS object between EPSSolve() and EPSComputeError()?* > > > > I am not modifying the eps object. I have copied a part of the code > where I am computing the eigenvectors. 
> > > > EPSCreate(PETSC_COMM_WORLD,&eps); > > EPSSetOperators(eps,m.Lpl,NULL); > > EPSSetProblemType(eps,EPS_HEP); > > EPSSetFromOptions(eps); > > > > EPSSolve(eps); //Solving for the eigenvalues > > > > EPSGetIterationNumber(eps,&its); > > PetscPrintf(PETSC_COMM_WORLD,"Number of iterations of the > method: %D\n",its); > > EPSGetType(eps,&type); > > PetscPrintf(PETSC_COMM_WORLD,"Solution Method: %s\n\n",type); > > EPSGetTolerances(eps,&get_tol,&maxit); > > PetscPrintf(PETSC_COMM_WORLD,"Stopping condition: tol=%.4g, > maxit=%D\n",(double)get_tol,maxit); > > EPSGetConverged(eps,&nconv); > > PetscPrintf(PETSC_COMM_WORLD, "Number of converged eigenpairs: > %D\n\n",nconv); > > if(nconv>0) > > { > > PetscPrintf(PETSC_COMM_WORLD, > > " k ||Ax-kx||/||kx||\n" > > "-------------- ----------------\n"); > > for(i=0;i > { > > EPSGetEigenpair(eps,i,&kr,&ki,xr,xi); > > EPSComputeError(eps,i,EPS_ > ERROR_RELATIVE,&error); > > re = PetscRealPart(kr); > > im = PetscImaginaryPart(kr); > > > > re = kr; > > im = ki; > > if(im!=0.0) { > > PetscPrintf(PETSC_COMM_WORLD," %9f%+9fi > %12g\n",(double)re,(double)im,(double)error); > > } > > else { > > PetscPrintf(PETSC_COMM_WORLD," %12f > %12g\n",(double)re,(double)error); > > } > > } > > PetscPrintf(PETSC_COMM_WORLD,"\n"); > > } > > > > EPSDestroy(&eps); > > VecDestroy(&xr); > > VecDestroy(&xi); > > } > > > > This is what I pass through my command line: > > ./RunPart -eps_type jd -eps_nev 3 -st_ksp_type cg -st_ksp_rtol 0.001 > -eps_tol 0.001 -st_pc_type bjacobi -eps_smallest_real > > > > > > *In graph partitioning, I would strongly recommend deflating the > eigenvector corresponding to the zero eigenvalue, as is done in ex11.c:* > > I have tried doing this but for some reason the eigenvalues fail > to converge. > > > > > > > > On Sun, Mar 26, 2017 at 2:41 AM, Jose E. Roman > wrote: > > > > > El 26 mar 2017, a las 6:08, Bodhisatta Pramanik > escribi?: > > > > > > Send all the output in the error message: > > > > > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > > > [0]PETSC ERROR: Object is in wrong state > > > [0]PETSC ERROR: Must call EPSSolve() first: Parameter #1 > > > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/ > documentation/faq.html for trouble shooting. > > > [0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017 > > > [0]PETSC ERROR: ./RunPart on a arch-linux2-c-debug named > research-5.ece.iastate.edu by bodhi91 Sat Mar 25 22:33:56 2017 > > > [0]PETSC ERROR: Configure options --with-cc=gcc --with-css=g++ > -with-fc=gfortran --download-fblaslapack --download-mpich > > > [0]PETSC ERROR: #10468 EPSComputeError() line 643 in > /tmp/Bodhi/slepc-3.7.3/src/eps/interface/epssolve.c > > > 0.000000 2.0795e-317 > > > > > > > The error is in EPSComputeError(), not in EPSSolve(). Did you modify the > state of the EPS object between EPSSolve() and EPSComputeError()? 
> > > > In graph partitioning, I would strongly recommend deflating the > eigenvector corresponding to the zero eigenvalue, as is done in ex11.c: > > http://slepc.upv.es/documentation/current/src/eps/ > examples/tutorials/ex11.c.html > > > > Jose > > > > > > > > > > -- > > Bodhisatta Pramanik, > > Graduate Student, > > Department of Electrical and Computer Engineering, > > 301 Durham, > > Iowa State University, > > Ames,Iowa 50011, > > bodhi91 at iastate.edu > > 515-735-6300 > > > > > > > > -- > > Bodhisatta Pramanik, > > Graduate Student, > > Department of Electrical and Computer Engineering, > > 301 Durham, > > Iowa State University, > > Ames,Iowa 50011, > > bodhi91 at iastate.edu > > 515-735-6300 > > > > -- *Bodhisatta Pramanik,* *Graduate Student,* *Department of Electrical and Computer Engineering,* *301 Durham,* *Iowa State University,* *Ames,Iowa 50011,* bodhi91 at iastate.edu *515-735-6300* -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Sun Mar 26 16:06:57 2017 From: hzhang at mcs.anl.gov (Hong) Date: Sun, 26 Mar 2017 16:06:57 -0500 Subject: [petsc-users] Slepc: Computing first 4 smallest eigenvalues and eigenvectors of a large graph Laplacian In-Reply-To: References: <7A14846A-CAFA-4EFF-AC56-2326DA44D770@mcs.anl.gov> <946E0105-5927-4BF8-A2D2-D212F71C423F@mcs.anl.gov> <288CD162-9242-480C-8117-FB73214934C6@mcs.anl.gov> Message-ID: For icc and cholesky factorizations, MatSolve_SeqSBAIJ() is used for both aij and sbaij. Why it gives different flops? Hong On Sun, Mar 26, 2017 at 2:44 PM, Bodhisatta Pramanik wrote: > *The flop rate for the MatMult and MatSolve are much lower than I would > expect. Try using the AIJ matrix instead of SBAIJ.* > SeqAIJ: > MatMult = 213 Mflop/s > MatSolve = 204 Mflop/s > AIJ: > MatMult = 211 Mflop/s > MatSolve = 198 Mflop/s > SBAIJ: > MatMult = 179 Mflop/s > MatSolve = 167 Mflop/s > > Shouldn't the SBAIJ provide the best performance since it only works with > the upper triangular part of the whole large sparse symmetric matrix? > > Thanks, > Bodhi > > On Sun, Mar 26, 2017 at 12:30 PM, Barry Smith wrote: > >> >> The flop rate for the MatMult and MatSolve are much lower than I would >> expect. Try using the AIJ matrix instead of SBAIJ. >> >> >> > On Mar 26, 2017, at 6:25 AM, Bodhisatta Pramanik >> wrote: >> > >> > *First set a new PETSC_ARCH say arch-opt and ./configure with the >> additional argument --with-debugging=0 then recompile PETSc with the new >> PETSC_ARCH and recompile your code then send the new -log_view output.* >> > I have attached the new log summary with this file. The computation >> time is a lot faster now. It takes around 23 seconds to compute the first 3 >> eigenvectors of a 200K sized matrix. >> > >> > Thanks, >> > Bodhi >> > >> > On Sun, Mar 26, 2017 at 3:14 AM, Bodhisatta Pramanik < >> bodhi91 at iastate.edu> wrote: >> > *The error is in EPSComputeError(), not in EPSSolve(). Did you modify >> the state of the EPS object between EPSSolve() and EPSComputeError()?* >> > >> > I am not modifying the eps object. I have copied a part of the code >> where I am computing the eigenvectors. 
>> > >> > EPSCreate(PETSC_COMM_WORLD,&eps); >> > EPSSetOperators(eps,m.Lpl,NULL); >> > EPSSetProblemType(eps,EPS_HEP); >> > EPSSetFromOptions(eps); >> > >> > EPSSolve(eps); //Solving for the eigenvalues >> > >> > EPSGetIterationNumber(eps,&its); >> > PetscPrintf(PETSC_COMM_WORLD,"Number of iterations of the >> method: %D\n",its); >> > EPSGetType(eps,&type); >> > PetscPrintf(PETSC_COMM_WORLD,"Solution Method: %s\n\n",type); >> > EPSGetTolerances(eps,&get_tol,&maxit); >> > PetscPrintf(PETSC_COMM_WORLD,"Stopping condition: tol=%.4g, >> maxit=%D\n",(double)get_tol,maxit); >> > EPSGetConverged(eps,&nconv); >> > PetscPrintf(PETSC_COMM_WORLD, "Number of converged eigenpairs: >> %D\n\n",nconv); >> > if(nconv>0) >> > { >> > PetscPrintf(PETSC_COMM_WORLD, >> > " k ||Ax-kx||/||kx||\n" >> > "-------------- ----------------\n"); >> > for(i=0;i> > { >> > EPSGetEigenpair(eps,i,&kr,&ki,xr,xi); >> > EPSComputeError(eps,i,EPS_ERR >> OR_RELATIVE,&error); >> > re = PetscRealPart(kr); >> > im = PetscImaginaryPart(kr); >> > >> > re = kr; >> > im = ki; >> > if(im!=0.0) { >> > PetscPrintf(PETSC_COMM_WORLD," >> %9f%+9fi %12g\n",(double)re,(double)im,(double)error); >> > } >> > else { >> > PetscPrintf(PETSC_COMM_WORLD," >> %12f %12g\n",(double)re,(double)error); >> > } >> > } >> > PetscPrintf(PETSC_COMM_WORLD,"\n"); >> > } >> > >> > EPSDestroy(&eps); >> > VecDestroy(&xr); >> > VecDestroy(&xi); >> > } >> > >> > This is what I pass through my command line: >> > ./RunPart -eps_type jd -eps_nev 3 -st_ksp_type cg -st_ksp_rtol 0.001 >> -eps_tol 0.001 -st_pc_type bjacobi -eps_smallest_real >> > >> > >> > *In graph partitioning, I would strongly recommend deflating the >> eigenvector corresponding to the zero eigenvalue, as is done in ex11.c:* >> > I have tried doing this but for some reason the eigenvalues >> fail to converge. >> > >> > >> > >> > On Sun, Mar 26, 2017 at 2:41 AM, Jose E. Roman >> wrote: >> > >> > > El 26 mar 2017, a las 6:08, Bodhisatta Pramanik >> escribi?: >> > > >> > > Send all the output in the error message: >> > > >> > > [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> > > [0]PETSC ERROR: Object is in wrong state >> > > [0]PETSC ERROR: Must call EPSSolve() first: Parameter #1 >> > > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/d >> ocumentation/faq.html for trouble shooting. >> > > [0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017 >> > > [0]PETSC ERROR: ./RunPart on a arch-linux2-c-debug named >> research-5.ece.iastate.edu by bodhi91 Sat Mar 25 22:33:56 2017 >> > > [0]PETSC ERROR: Configure options --with-cc=gcc --with-css=g++ >> -with-fc=gfortran --download-fblaslapack --download-mpich >> > > [0]PETSC ERROR: #10468 EPSComputeError() line 643 in >> /tmp/Bodhi/slepc-3.7.3/src/eps/interface/epssolve.c >> > > 0.000000 2.0795e-317 >> > > >> > >> > The error is in EPSComputeError(), not in EPSSolve(). Did you modify >> the state of the EPS object between EPSSolve() and EPSComputeError()? 
>> > >> > In graph partitioning, I would strongly recommend deflating the >> eigenvector corresponding to the zero eigenvalue, as is done in ex11.c: >> > http://slepc.upv.es/documentation/current/src/eps/examples/ >> tutorials/ex11.c.html >> > >> > Jose >> > >> > >> > >> > >> > -- >> > Bodhisatta Pramanik, >> > Graduate Student, >> > Department of Electrical and Computer Engineering, >> > 301 Durham, >> > Iowa State University, >> > Ames,Iowa 50011, >> > bodhi91 at iastate.edu >> > 515-735-6300 >> > >> > >> > >> > -- >> > Bodhisatta Pramanik, >> > Graduate Student, >> > Department of Electrical and Computer Engineering, >> > 301 Durham, >> > Iowa State University, >> > Ames,Iowa 50011, >> > bodhi91 at iastate.edu >> > 515-735-6300 >> > >> >> > > > -- > *Bodhisatta Pramanik,* > *Graduate Student,* > *Department of Electrical and Computer Engineering,* > *301 Durham,* > *Iowa State University,* > *Ames,Iowa 50011,* > bodhi91 at iastate.edu > *515-735-6300 <(515)%20735-6300>* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From C.Klaij at marin.nl Mon Mar 27 02:23:19 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Mon, 27 Mar 2017 07:23:19 +0000 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: <2BE9B65A-4F9A-4F4F-9764-79EA73BA767D@mcs.anl.gov> References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl> <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> <1490283421032.72098@marin.nl> <5fabe67b-2375-1bf7-ee28-2e3424b31c1a@imperial.ac.uk> <1490342727455.65990@marin.nl> <1490358872715.1@marin.nl> <1490368299379.90828@marin.nl>, <2BE9B65A-4F9A-4F4F-9764-79EA73BA767D@mcs.anl.gov> Message-ID: <1490599399080.49416@marin.nl> Barry, I removed the null space from the rhs in the debug program that I wrote to just solve Sp x = b once. In this debug program I've constructed Sp myself after reading in the four blocks from the real program. So this is independent of PCFieldSplit. Indeed I also see bad convergence when using pc_type svd for this debug program unless I remove the null space from the rhs. So far I haven't managed to translate any of this to the real program. - Setting the null space for Sp in the real program seems to work by happy accident, but Lawrence gave me the hint to use "PetscObjectCompose" to set the nullspace using is1. - I still have to understand Lawrence's hint and Matt's comment about MatSetTransposeNullSpace. - I'm not sure how to remove the null space from the rhs vector in the real pogram, since I have one rhs vector with both velocity and pressure and the null space only refers to the pressure part. Any hints? - Or should I set the null space for the velocity-pressure matrix itself, instead of the Schur complement? - Besides this, I'm also wondering why the rhs would be inconsistent in the first place, it's hard to understand from the discretization. Thanks for your reply, Chris dr. ir. 
Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl MARIN news: http://www.marin.nl/web/News/News-items/Comfort-and-Safety-at-Sea-March-29-Rotterdam.htm ________________________________________ From: Barry Smith Sent: Saturday, March 25, 2017 1:29 AM To: Klaij, Christiaan Cc: Lawrence Mitchell; Matthew Knepley; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] left and right preconditioning with a constant null space > On Mar 24, 2017, at 10:11 AM, Klaij, Christiaan wrote: > > I've written a small PETSc program that loads the four blocks, > constructs Sp, attaches the null space and solves with a random > rhs vector. > > This small program replicates the same behaviour as the real > code: convergence in the preconditioned norm, stagnation in the > unpreconditioned norm. > > But when I add a call to remove the null space from the rhs > vector ("MatNullSpaceRemove"), Are you removing the null space from the original full right hand side or inside the solver for the Schur complement problem? Note that if instead of using PCFIELDSPLIT you use some other simpler PC you should also see bad convergence, do you? Even if you use -pc_type svd you should see bad convergence? > I do get convergence in both > norms! Clearly, the real code must somehow produce an > inconsistent rhs vector. So the problem is indeed somewhere else > and not in PCFieldSplit. > > Chris > > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl > > MARIN news: http://www.marin.nl/web/News/News-items/Meet-us-again-at-the-OTC-2017.htm > > ________________________________________ > From: Klaij, Christiaan > Sent: Friday, March 24, 2017 1:34 PM > To: Lawrence Mitchell; Matthew Knepley > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] left and right preconditioning with a constant null space > > I've also loaded the four blocks into matlab, computed > > Sp = A11 - A10 inv(diag(A00)) A01 > > and confirmed that Sp has indeed a constant null space. > > Chris > ________________________________________ > From: Klaij, Christiaan > Sent: Friday, March 24, 2017 9:05 AM > To: Lawrence Mitchell; Matthew Knepley > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] left and right preconditioning with a constant null space > > Lawrence, > > I think you mean "-fieldsplit_1_mat_null_space_test"? This > doesn't return any info, should it? Anyway, I've added a "call > MatNullSpaceTest" to the code which returns "true" for the null > space of A11. > > I also tried to run with "-fieldsplit_1_ksp_constant_null_space" > so that the null space is only attached to S (and not to > A11). Unfortunately, the behaviour is still the same: convergence > in the preconditioned norm only. > > Chris > ________________________________________ > From: Lawrence Mitchell > Sent: Thursday, March 23, 2017 4:52 PM > To: Klaij, Christiaan; Matthew Knepley > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] left and right preconditioning with a constant null space > > On 23/03/17 15:37, Klaij, Christiaan wrote: >> Yes, that's clearer, thanks! I do have is0 and is1 so I can try >> PetscObjectCompose and let you know. >> >> Note though that the viewer reports that both S and A11 have a >> null space attached... My matrix is a matnest and I've attached a >> null space to A11, so the latter works as expected. But is the viewer >> wrong for S? 
> > No, I think this is a consequence of using a matnest and attaching a > nullspace to A11. In that case you sort of "can" set a nullspace on > the submatrix returned in MatCreateSubMatrix(Amat, is1, is1), because > you just get a reference. But if you switched to AIJ then you would > no longer get this. > > So it happens that the nullspace you set on A11 /is/ transferred over > to S, but this is luck, rather than design. > > So maybe there is something else wrong. Perhaps you can run with > -fieldsplit_1_ksp_test_null_space to check the nullspace matches > correctly? > > Lawrence > From C.Klaij at marin.nl Mon Mar 27 06:26:19 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Mon, 27 Mar 2017 11:26:19 +0000 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: <1490283421032.72098@marin.nl> References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl>, <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk>, <1490283421032.72098@marin.nl> Message-ID: <1490613979001.2084@marin.nl> Lawrence, I've tried PetscObjectCompose but got the impression that it's not available in fortran. Please correct me if I'm wrong. Chris dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl MARIN news: http://www.marin.nl/web/News/News-items/Project-Manager-Veiligheids-en-Verkeersstudies-en-Specialist-Human-Performance.htm ________________________________________ From: Klaij, Christiaan Sent: Thursday, March 23, 2017 4:37 PM To: Lawrence Mitchell; Matthew Knepley Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] left and right preconditioning with a constant null space Lawrence, Yes, that's clearer, thanks! I do have is0 and is1 so I can try PetscObjectCompose and let you know. Note though that the viewer reports that both S and A11 have a null space attached... My matrix is a matnest and I've attached a null space to A11, so the latter works as expected. But is the viewer wrong for S? Chris ________________________________________ From: Lawrence Mitchell Sent: Thursday, March 23, 2017 11:57 AM To: Klaij, Christiaan; Matthew Knepley Cc: petsc-users at mcs.anl.gov Subject: Re: [petsc-users] left and right preconditioning with a constant null space On 23/03/17 08:42, Klaij, Christiaan wrote: > Matt, Lawrence > > > The same problem happens when using gmres with rtol 1e-6 in the > schur complement (attachment "left_schur"). I'm not sure what > this tells us. If I understand Lawrence correctly, the null space > may be attached to the wrong matrix (A11 instead of Sp)? I think I misread the code. Because you can only attach nullspaces to either Amat or Pmat, you can't control the nullspace for (say) Amat[1,1] or Pmat[1,1] because MatCreateSubMatrix doesn't know anything about nullspaces. So the steps inside pcfieldsplit are: createsubmatrices(Amat) -> A, B, C, D setup schur matrix S <= D - C A^{-1} B Transfer nullspaces onto S. How to transfer the nullspaces? Well, as mentioned, I can't put anything on the submatrices (because I have no way of accessing them). So instead, I need to hang the nullspace on the IS that defines the S block: So if you have: is0, is1 You do: PetscObjectCompose((PetscObject)is1, "nullspace", nullspace); Before going into the preconditioner. If you're doing this through a DM, then DMCreateSubDM controls the transfer of nullspaces, the default implementation DTRT in the case of sections. 
See DMCreateSubDM_Section_Private. Clearer? Lawrence From bsmith at mcs.anl.gov Mon Mar 27 11:35:59 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 27 Mar 2017 11:35:59 -0500 Subject: [petsc-users] left and right preconditioning with a constant null space In-Reply-To: <1490599399080.49416@marin.nl> References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl> <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> <1490283421032.72098@marin.nl> <5fabe67b-2375-1bf7-ee28-2e3424b31c1a@imperial.ac.uk> <1490342727455.65990@marin.nl> <1490358872715.1@marin.nl> <1490368299379.90828@marin.nl> <2BE9B65A-4F9A-4F4F-9764-79EA73BA767D@mcs.anl.gov> <1490599399080.49416@marin.nl> Message-ID: <3CBF06AA-D835-4906-A780-476823CF55B8@mcs.anl.gov> > On Mar 27, 2017, at 2:23 AM, Klaij, Christiaan wrote: > > Barry, > > I removed the null space from the rhs in the debug program that I > wrote to just solve Sp x = b once. In this debug program I've > constructed Sp myself after reading in the four blocks from the > real program. So this is independent of PCFieldSplit. Indeed I > also see bad convergence when using pc_type svd for this debug > program unless I remove the null space from the rhs. > > So far I haven't managed to translate any of this to the real > program. > > - Setting the null space for Sp in the real program seems to work > by happy accident, but Lawrence gave me the hint to > use "PetscObjectCompose" to set the nullspace using is1. > > - I still have to understand Lawrence's hint and Matt's comment > about MatSetTransposeNullSpace. > > - I'm not sure how to remove the null space from the rhs vector > in the real pogram, since I have one rhs vector with both > velocity and pressure and the null space only refers to the > pressure part. Any hints? > > - Or should I set the null space for the velocity-pressure matrix > itself, instead of the Schur complement? I would first check if the entire full velocity-pressure right hand side is consistent. If it is not you can make it consistent by removing the transpose null space. You can use MatCreateNullSpace() to create the null space by passing in a vector that is constant on all the pressure variables and 0 on the velocity variables. Barry > > - Besides this, I'm also wondering why the rhs would be > inconsistent in the first place, it's hard to understand from > the discretization. > > Thanks for your reply, > Chris > > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl > > MARIN news: http://www.marin.nl/web/News/News-items/Comfort-and-Safety-at-Sea-March-29-Rotterdam.htm > > ________________________________________ > From: Barry Smith > Sent: Saturday, March 25, 2017 1:29 AM > To: Klaij, Christiaan > Cc: Lawrence Mitchell; Matthew Knepley; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] left and right preconditioning with a constant null space > >> On Mar 24, 2017, at 10:11 AM, Klaij, Christiaan wrote: >> >> I've written a small PETSc program that loads the four blocks, >> constructs Sp, attaches the null space and solves with a random >> rhs vector. >> >> This small program replicates the same behaviour as the real >> code: convergence in the preconditioned norm, stagnation in the >> unpreconditioned norm. 
>> >> But when I add a call to remove the null space from the rhs >> vector ("MatNullSpaceRemove"), > > Are you removing the null space from the original full right hand side or inside the solver for the Schur complement problem? > > Note that if instead of using PCFIELDSPLIT you use some other simpler PC you should also see bad convergence, do you? Even if you use -pc_type svd you should see bad convergence? > > > >> I do get convergence in both >> norms! Clearly, the real code must somehow produce an >> inconsistent rhs vector. So the problem is indeed somewhere else >> and not in PCFieldSplit. >> >> Chris >> >> >> >> dr. ir. Christiaan Klaij | Senior Researcher | Research & Development >> MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl >> >> MARIN news: http://www.marin.nl/web/News/News-items/Meet-us-again-at-the-OTC-2017.htm >> >> ________________________________________ >> From: Klaij, Christiaan >> Sent: Friday, March 24, 2017 1:34 PM >> To: Lawrence Mitchell; Matthew Knepley >> Cc: petsc-users at mcs.anl.gov >> Subject: Re: [petsc-users] left and right preconditioning with a constant null space >> >> I've also loaded the four blocks into matlab, computed >> >> Sp = A11 - A10 inv(diag(A00)) A01 >> >> and confirmed that Sp has indeed a constant null space. >> >> Chris >> ________________________________________ >> From: Klaij, Christiaan >> Sent: Friday, March 24, 2017 9:05 AM >> To: Lawrence Mitchell; Matthew Knepley >> Cc: petsc-users at mcs.anl.gov >> Subject: Re: [petsc-users] left and right preconditioning with a constant null space >> >> Lawrence, >> >> I think you mean "-fieldsplit_1_mat_null_space_test"? This >> doesn't return any info, should it? Anyway, I've added a "call >> MatNullSpaceTest" to the code which returns "true" for the null >> space of A11. >> >> I also tried to run with "-fieldsplit_1_ksp_constant_null_space" >> so that the null space is only attached to S (and not to >> A11). Unfortunately, the behaviour is still the same: convergence >> in the preconditioned norm only. >> >> Chris >> ________________________________________ >> From: Lawrence Mitchell >> Sent: Thursday, March 23, 2017 4:52 PM >> To: Klaij, Christiaan; Matthew Knepley >> Cc: petsc-users at mcs.anl.gov >> Subject: Re: [petsc-users] left and right preconditioning with a constant null space >> >> On 23/03/17 15:37, Klaij, Christiaan wrote: >>> Yes, that's clearer, thanks! I do have is0 and is1 so I can try >>> PetscObjectCompose and let you know. >>> >>> Note though that the viewer reports that both S and A11 have a >>> null space attached... My matrix is a matnest and I've attached a >>> null space to A11, so the latter works as expected. But is the viewer >>> wrong for S? >> >> No, I think this is a consequence of using a matnest and attaching a >> nullspace to A11. In that case you sort of "can" set a nullspace on >> the submatrix returned in MatCreateSubMatrix(Amat, is1, is1), because >> you just get a reference. But if you switched to AIJ then you would >> no longer get this. >> >> So it happens that the nullspace you set on A11 /is/ transferred over >> to S, but this is luck, rather than design. >> >> So maybe there is something else wrong. Perhaps you can run with >> -fieldsplit_1_ksp_test_null_space to check the nullspace matches >> correctly? 
>> >> Lawrence >> > From mail2amneet at gmail.com Mon Mar 27 12:14:33 2017 From: mail2amneet at gmail.com (Amneet Bhalla) Date: Mon, 27 Mar 2017 10:14:33 -0700 Subject: [petsc-users] Matrix-free TS Message-ID: Hi Folks, I am thinking of trying TS module of PETSc to free myself of maintaining time stepping part of the code and to try different time integrators easily. The problem that I am considering is incompressible Navier Stokes with variable coefficients solved using projection method. The method would involve solving equations in two parts: rho u*_t - div ( sigma) = RHS( u_explicit) ---- (1) and then projecting u* onto divergence free u by solving a pressure Poisson equation (Eq 2). Here sigma = mu ( u_ij + u_ji) is the viscous stress tensor. I have custom solvers for solving (1) and (2) using geometric MG wrapped as a PCSHELL to MATSHELL obtained by discretizing (1) and (2) say by Backward Euler. Is it possible to wrap Eq (1) and (2) in TS in matrix-free form and then solve for velocity and pressure using projection method using matrix-free solvers? Thanks, -- --Amneet -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Mar 27 12:42:46 2017 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 27 Mar 2017 12:42:46 -0500 Subject: [petsc-users] Matrix-free TS In-Reply-To: References: Message-ID: On Mon, Mar 27, 2017 at 12:14 PM, Amneet Bhalla wrote: > > Hi Folks, > > I am thinking of trying TS module of PETSc to free myself of maintaining > time stepping part of the code and to try different > time integrators easily. The problem that I am considering is > incompressible > Navier Stokes with variable coefficients solved using projection method. > The method would involve solving equations in two parts: > > rho u*_t - div ( sigma) = RHS( u_explicit) ---- (1) > > and then projecting u* onto divergence free u by solving a pressure > Poisson equation (Eq 2). > > Here sigma = mu ( u_ij + u_ji) is the viscous stress > tensor. > > I have custom solvers for solving (1) and (2) using geometric MG wrapped > as > a PCSHELL to MATSHELL obtained by discretizing (1) and (2) say by > Backward Euler. > > Is it possible to wrap Eq (1) and (2) in TS in matrix-free form and then > solve for > velocity and pressure using projection method using matrix-free solvers? > I see at least two ways to do this: 1) Timestep your momentum equation, and put the projection step in TSPoststep() 2) Formulate the combined system implicitly as a DAE and then use a block-Jacobi solver for it. 1) sounds easier and matches your current code. Matt > Thanks, > -- > --Amneet > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail2amneet at gmail.com Mon Mar 27 12:56:30 2017 From: mail2amneet at gmail.com (Amneet Bhalla) Date: Mon, 27 Mar 2017 10:56:30 -0700 Subject: [petsc-users] Matrix-free TS In-Reply-To: References: Message-ID: On Mon, Mar 27, 2017 at 10:42 AM, Matthew Knepley wrote: > 1) sounds easier and matches your current code. Great! For matrix free TS, the form of F(t,u,u_t) = rho u*_t - div ( sigma) as written "on paper" and should not contain time stepping factors like (1/2) that comes from CN? Similarly for RHS = G(t,u). What about Jacobian of F (MATSHELL to solve momentum eqn) --- should that contain these factors? 
-- --Amneet -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Mar 27 15:03:16 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 27 Mar 2017 15:03:16 -0500 Subject: [petsc-users] Matrix-free TS In-Reply-To: References: Message-ID: <51D310F5-B421-45E1-A453-6573E95B6F41@mcs.anl.gov> > On Mar 27, 2017, at 12:56 PM, Amneet Bhalla wrote: > > > On Mon, Mar 27, 2017 at 10:42 AM, Matthew Knepley wrote: > 1) sounds easier and matches your current code. > > Great! > > For matrix free TS, the form of F(t,u,u_t) = rho u*_t - div ( sigma) as written "on paper" and should not > contain time stepping factors like (1/2) that comes from CN? Similarly for RHS = G(t,u). What about Jacobian of F (MATSHELL > to solve momentum eqn) --- should that contain these factors? None of the functions should contain factors associated with a particular time-stepping scheme. Barry > > > -- > --Amneet > > > From rpgwars at wp.pl Mon Mar 27 16:21:26 2017 From: rpgwars at wp.pl (=?ISO-8859-2?Q?=A3ukasz_Kasza?=) Date: Mon, 27 Mar 2017 23:21:26 +0200 Subject: [petsc-users] Doubts regarding MatGetSubMatrices Message-ID: <58d9825626a372.07295836@wp.pl> Dear PETSC users, Lets say that I want to use MatGetSubMatrices(Mat mat,PetscInt n,const IS irow[],const IS icol[],MatReuse scall,Mat *submat[]) and I want to get every column of the specified rows. However initially I dont know which column indexes to pass in icol, I just know that I need everything. My question is, how to implement this efficiently in parallel aij format? I could for instance pass a range of indexes from 0 to the size of the matrix, but my concern is that this way the communication cost will increase for large matrices as the request will be sent for all columns for every row in irow. Other solution would be to exchange between the processes info regarding indexes of nonzero columns and then call MatGetSubmatrices with indexes in icol only of nonzero columns. Any help much appreciated, Best Regards From hongzhang at anl.gov Mon Mar 27 16:47:17 2017 From: hongzhang at anl.gov (Zhang, Hong) Date: Mon, 27 Mar 2017 21:47:17 +0000 Subject: [petsc-users] Matrix-free TS In-Reply-To: References: Message-ID: <3A9781A9-5B0C-486B-B80B-9CD39F788355@anl.gov> The Jacobian interface has the shift parameter which I think is the 'factor' you are looking for. In the Jacobian function IJacobian(TS ts,PetscReal time,Vec X,Vec Xdot,PetscReal shift,Mat A,Mat B,void *ctx) you can save shift and time to your MatShell context, and later, use them in the shell matrix operations. For example, the Jacobian of F in your case is dF/du+shift*dF/du_t. As Barry indicated, you should not manually set this 'factor', it is determined internally by PETSc according to the time-stepping scheme you are using. Hong (Mr.) On Mar 27, 2017, at 12:56 PM, Amneet Bhalla > wrote: On Mon, Mar 27, 2017 at 10:42 AM, Matthew Knepley > wrote: 1) sounds easier and matches your current code. Great! For matrix free TS, the form of F(t,u,u_t) = rho u*_t - div ( sigma) as written "on paper" and should not contain time stepping factors like (1/2) that comes from CN? Similarly for RHS = G(t,u). What about Jacobian of F (MATSHELL to solve momentum eqn) --- should that contain these factors? -- --Amneet -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gaetank at gmail.com Mon Mar 27 17:13:19 2017 From: gaetank at gmail.com (Gaetan Kenway) Date: Mon, 27 Mar 2017 15:13:19 -0700 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <93F249E6-656D-429A-96D2-E8831B406334@mcs.anl.gov> Message-ID: Austin Here is the full makefile for a code we use. The variables defined externally in a separate config file are: $(FF90) $(FF90_FLAGS) $(LIBDIR) $(PETSC_LINKER_FLAGS) $(LINKER_FLAGS) $(CGNS_LINKER_FLAGS) $(PYTHON) $(PYTHON-CONIFG) $(F2PY) (These are usually use python, python-config and f2py. You can overwrite as necessary) $(CC) $(CC_ALL_FLAGS) This essentially just mimics what f2py does automatically but we found it easier to control exactly what is going on. Essentially you are just compiling exactly as you normally an executable, but instead make a .so (with the -shared option) and including the additional .o files generated by compiling the f2py-generated wrappers. Hope this helps, Gaetan On Sat, Mar 25, 2017 at 5:38 AM, Lisandro Dalcin wrote: > > > On 22 March 2017 at 20:29, Barry Smith wrote: > >> >> Lisandro, >> >> We've had a couple questions similar to this with f2py; is there a >> way we could add to the PETSc/SLEPc makefile rules something to allow >> people to trivially use f2py without having to make their own (often >> incorrect) manual command lines? >> >> Thanks >> >> > Barry, it is quite hard and hacky to get f2py working in the general case. > I think the email from Gaetan in this thread proves my point. > > IMHO, it is easier to write a small Fortran source exposing the API to > call using ISO_C_BINDINGS, then wrap that code with the more traditional > C-based "static" tools (SWIG, Cython) or even "dynamically" with ctypes or > cffi (which use dlopen'ing). > > > > -- > Lisandro Dalcin > ============ > Research Scientist > Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) > Extreme Computing Research Center (ECRC) > King Abdullah University of Science and Technology (KAUST) > http://ecrc.kaust.edu.sa/ > > 4700 King Abdullah University of Science and Technology > al-Khawarizmi Bldg (Bldg 1), Office # 0109 > Thuwal 23955-6900, Kingdom of Saudi Arabia > http://www.kaust.edu.sa > > Office Phone: +966 12 808-0459 <+966%2012%20808%200459> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Makefile Type: application/octet-stream Size: 1680 bytes Desc: not available URL: From bsmith at mcs.anl.gov Mon Mar 27 18:56:07 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 27 Mar 2017 18:56:07 -0500 Subject: [petsc-users] Doubts regarding MatGetSubMatrices In-Reply-To: <58d9825626a372.07295836@wp.pl> References: <58d9825626a372.07295836@wp.pl> Message-ID: Use ISCreateStride() to indicate all the columns; MatGetSubMatrices_MPIAIJ handles this as a special case and never generates the integer list of all columns. Note that the resulting matrix will have the same number of columns as the original matrix. What do you want to do with this matrix? Barry So make the start of the stride 0, the step 1 and the length be the same as the number of columns in the original matrix. 
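For illustration, a minimal sketch of that (here "mat" is your parallel AIJ matrix and "nrows"/"rows" are placeholder names for the global row indices this process wants, assumed sorted and without duplicates):

  IS             irow,icol;
  Mat            *submat;
  PetscInt       N;
  PetscErrorCode ierr;

  ierr = MatGetSize(mat,NULL,&N);CHKERRQ(ierr);                      /* global number of columns */
  ierr = ISCreateGeneral(PETSC_COMM_SELF,nrows,rows,PETSC_COPY_VALUES,&irow);CHKERRQ(ierr);
  ierr = ISCreateStride(PETSC_COMM_SELF,N,0,1,&icol);CHKERRQ(ierr);  /* 0,1,...,N-1: all columns */
  ierr = MatGetSubMatrices(mat,1,&irow,&icol,MAT_INITIAL_MATRIX,&submat);CHKERRQ(ierr);
  /* submat[0] is a sequential matrix with nrows rows and N columns */
  ierr = MatDestroyMatrices(1,&submat);CHKERRQ(ierr);
  ierr = ISDestroy(&irow);CHKERRQ(ierr);
  ierr = ISDestroy(&icol);CHKERRQ(ierr);

MatGetSubMatrices() is collective, so every process must call it, each with its own (sequential) row index set.
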
> On Mar 27, 2017, at 4:21 PM, ?ukasz Kasza wrote: > > > > > Dear PETSC users, > > Lets say that I want to use MatGetSubMatrices(Mat mat,PetscInt n,const IS irow[],const IS icol[],MatReuse scall,Mat *submat[]) and I want to get every column of the specified rows. However initially I dont know which column indexes to pass in icol, I just know that I need everything. > > My question is, how to implement this efficiently in parallel aij format? I could for instance pass a range of indexes from 0 to the size of the matrix, but my concern is that this way the communication cost will increase for large matrices as the request will be sent for all columns for every row in irow. Other solution would be to exchange between the processes info regarding indexes of nonzero columns and then call MatGetSubmatrices with indexes in icol only of nonzero columns. > > Any help much appreciated, > Best Regards > > From jed at jedbrown.org Mon Mar 27 22:02:47 2017 From: jed at jedbrown.org (Jed Brown) Date: Mon, 27 Mar 2017 21:02:47 -0600 Subject: [petsc-users] CMake and PETSc In-Reply-To: References: <87lgrw23zw.fsf@jedbrown.org> Message-ID: <87lgrqrtm0.fsf@jedbrown.org> Hom Nath Gharti writes: > Thanks, Jed! I will try. I see that FindPETSc.cmake has following lines: > > set(PETSC_VALID_COMPONENTS > C > CXX) > > Should we add FC or similar? You could, but then you'd have to also add test code for that language binding. (All this does is a test for whether the library works when called from that language.) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From vitse at lmt.ens-cachan.fr Tue Mar 28 03:46:12 2017 From: vitse at lmt.ens-cachan.fr (vitse at lmt.ens-cachan.fr) Date: Tue, 28 Mar 2017 10:46:12 +0200 Subject: [petsc-users] Problems running KSP ex59 Message-ID: <20170328104612.70993st1hsapvvmc@webmail.ens-cachan.fr> Hi, I'm a fairly new PETSc user and I've been trying to look at ex59 for the last couple of days but I can't seem to have it working, nor can I really understand what's wrong. Maybe there are some options missing in the configuration? I'm also not sure wether the given warning is important or not during the make step... You'll find below both the make and run output, thanks in advance for the help. 
Matt * Here's the make output: vitse at sauternes:/ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/tutorials$ make ex59 *********************W-a-r-n-i-n-g************************* Your PETSC_DIR may not match the directory you are in PETSC_DIR /u/vitse/Documents/softs/petsc-3.7.5 Current directory /ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/tutorials Ignore this if you are running make test ****************************************************** /u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/bin/mpicc -o ex59.o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fvisibility=hidden -g3 -I/u/vitse/Documents/softs/petsc-3.7.5/include -I/u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/include `pwd`/ex59.c /u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/bin/mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fvisibility=hidden -g3 -o ex59 ex59.o -Wl,-rpath,/u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/lib -L/u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/lib -Wl,-rpath,/u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/6 -L/usr/lib/gcc/x86_64-linux-gnu/6 -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -lpetsc -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lparmetis -lmetis -lscalapack -lHYPRE -lmpicxx -lstdc++ -lm -lflapack -lfblas -lptesmumps -lptscotch -lptscotcherr -lscotch -lscotcherr -lX11 -lhwloc -lpthread -lm -lmpifort -lgfortran -lm -lgfortran -lm -lquadmath -lmpicxx -lstdc++ -lm -lrt -lm -lpthread -lz -Wl,-rpath,/u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/lib -L/u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/6 -L/usr/lib/gcc/x86_64-linux-gnu/6 -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -ldl -Wl,-rpath,/u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/lib -lmpi -lgcc_s -ldl /bin/rm -f ex59.o * and here's what I get when I run the test: vitse at sauternes:/ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/tutorials$ mpiexec -n 4 ex59 -npx 2 -npy 2 -nex 2 -ney 2 [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: [0]PETSC ERROR: This is not a uniprocessor test [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.7.5, unknown [0]PETSC ERROR: ex59 on a linux-gnu-c-debug named sauternes by vitse Tue Mar 28 09:59:10 2017 [0]PETSC ERROR: Configure options PETSC_DIR=/u/vitse/Documents/softs/petsc-3.7.5 PETSC_ARCH=linux-gnu-c-debug --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack=1 --download-mpich=1 --download-superflu-dist=1 --download-parmetis=1 --download-metis=1 --download-ptscotch=1 --with-pcbddc=1 --download-hypre=1 --download-scalapack=1 --download-mumps=1 --with-pcbddc=1 [0]PETSC ERROR: #1 InitializeDomainData() line 934 in /ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/tutorials/ex59.c [0]PETSC ERROR: #2 main() line 1009 in /ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/tutorials/ex59.c [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -nex 2 [0]PETSC ERROR: -ney 2 [0]PETSC ERROR: -npx 2 [0]PETSC ERROR: -npy 2 [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 [unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 ------------------------------------------------------- Primary job terminated normally, but 1 process returned a non-zero exit code.. Per user-direction, the job has been aborted. ------------------------------------------------------- [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: [0]PETSC ERROR: This is not a uniprocessor test [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.7.5, unknown [0]PETSC ERROR: ex59 on a linux-gnu-c-debug named sauternes by vitse Tue Mar 28 09:59:10 2017 [0]PETSC ERROR: Configure options PETSC_DIR=/u/vitse/Documents/softs/petsc-3.7.5 PETSC_ARCH=linux-gnu-c-debug --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack=1 --download-mpich=1 --download-superflu-dist=1 --download-parmetis=1 --download-metis=1 --download-ptscotch=1 --with-pcbddc=1 --download-hypre=1 --download-scalapack=1 --download-mumps=1 --with-pcbddc=1 [0]PETSC ERROR: #1 InitializeDomainData() line 934 in /ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/tutorials/ex59.c [0]PETSC ERROR: #2 main() line 1009 in /ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/tutorials/ex59.c [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -nex 2 [0]PETSC ERROR: -ney 2 [0]PETSC ERROR: -npx 2 [0]PETSC ERROR: -npy 2 [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 [unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: [0]PETSC ERROR: This is not a uniprocessor test [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.7.5, unknown [0]PETSC ERROR: ex59 on a linux-gnu-c-debug named sauternes by vitse Tue Mar 28 09:59:10 2017 [0]PETSC ERROR: Configure options PETSC_DIR=/u/vitse/Documents/softs/petsc-3.7.5 PETSC_ARCH=linux-gnu-c-debug --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack=1 --download-mpich=1 --download-superflu-dist=1 --download-parmetis=1 --download-metis=1 --download-ptscotch=1 --with-pcbddc=1 --download-hypre=1 --download-scalapack=1 --download-mumps=1 --with-pcbddc=1 [0]PETSC ERROR: #1 InitializeDomainData() line 934 in /ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/tutorials/ex59.c [0]PETSC ERROR: #2 main() line 1009 in /ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/tutorials/ex59.c [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -nex 2 [0]PETSC ERROR: -ney 2 [0]PETSC ERROR: -npx 2 [0]PETSC ERROR: -npy 2 [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 [unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: [0]PETSC ERROR: This is not a uniprocessor test [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.7.5, unknown [0]PETSC ERROR: ex59 on a linux-gnu-c-debug named sauternes by vitse Tue Mar 28 09:59:10 2017 [0]PETSC ERROR: Configure options PETSC_DIR=/u/vitse/Documents/softs/petsc-3.7.5 PETSC_ARCH=linux-gnu-c-debug --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack=1 --download-mpich=1 --download-superflu-dist=1 --download-parmetis=1 --download-metis=1 --download-ptscotch=1 --with-pcbddc=1 --download-hypre=1 --download-scalapack=1 --download-mumps=1 --with-pcbddc=1 [0]PETSC ERROR: #1 InitializeDomainData() line 934 in /ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/tutorials/ex59.c [0]PETSC ERROR: #2 main() line 1009 in /ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/tutorials/ex59.c [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -nex 2 [0]PETSC ERROR: -ney 2 [0]PETSC ERROR: -npx 2 [0]PETSC ERROR: -npy 2 [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 [unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 -------------------------------------------------------------------------- mpiexec detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was: Process name: [[40774,1],1] Exit code: 83 -------------------------------------------------------------------------- From knepley at gmail.com Tue Mar 28 08:12:25 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 28 Mar 2017 08:12:25 -0500 Subject: [petsc-users] Problems running KSP ex59 In-Reply-To: <20170328104612.70993st1hsapvvmc@webmail.ens-cachan.fr> References: <20170328104612.70993st1hsapvvmc@webmail.ens-cachan.fr> Message-ID: On Tue, Mar 28, 2017 at 3:46 AM, wrote: > Hi, > > I'm a fairly new PETSc user and I've been trying to look at ex59 for the > last couple of days but I can't seem to have it working, nor can I really > understand what's wrong. 
Maybe there are some options missing in the > configuration? I'm also not sure wether the given warning is important or > not during the make step... > > You'll find below both the make and run output, thanks in advance for the > help. > > Matt > > > > * Here's the make output: > > vitse at sauternes:/ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/tutorials$ > make ex59 > *********************W-a-r-n-i-n-g************************* > Your PETSC_DIR may not match the directory you are in > PETSC_DIR /u/vitse/Documents/softs/petsc-3.7.5 Current directory > /ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/tutorials > Ignore this if you are running make test > ****************************************************** > /u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/bin/mpicc -o > ex59.o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas > -fvisibility=hidden -g3 -I/u/vitse/Documents/softs/petsc-3.7.5/include > -I/u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/include > `pwd`/ex59.c > /u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/bin/mpicc -Wall > -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas > -fvisibility=hidden -g3 -o ex59 ex59.o -Wl,-rpath,/u/vitse/Documents/ > softs/petsc-3.7.5/linux-gnu-c-debug/lib -L/u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/lib > -Wl,-rpath,/u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/lib > -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/6 -L/usr/lib/gcc/x86_64-linux-gnu/6 > -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu > -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -lpetsc -lcmumps > -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lparmetis -lmetis > -lscalapack -lHYPRE -lmpicxx -lstdc++ -lm -lflapack -lfblas -lptesmumps > -lptscotch -lptscotcherr -lscotch -lscotcherr -lX11 -lhwloc -lpthread -lm > -lmpifort -lgfortran -lm -lgfortran -lm -lquadmath -lmpicxx -lstdc++ -lm > -lrt -lm -lpthread -lz -Wl,-rpath,/u/vitse/Documents/ > softs/petsc-3.7.5/linux-gnu-c-debug/lib -L/u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/lib > -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/6 -L/usr/lib/gcc/x86_64-linux-gnu/6 > -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu > -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu > -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -ldl > -Wl,-rpath,/u/vitse/Documents/softs/petsc-3.7.5/linux-gnu-c-debug/lib > -lmpi -lgcc_s -ldl > /bin/rm -f ex59.o > > > * and here's what I get when I run the test: > > vitse at sauternes:/ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/tutorials$ > mpiexec -n 4 ex59 -npx 2 -npy 2 -nex 2 -ney 2 > This is likely to happen if 'mpiexec' in your path does not match the MPI that you compiled PETSc with. If you configured with --download-mpich or something like that, then use $PETSC_DIR/$PETSC_ARCH/bin/mpiexec Thanks, Matt > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: > [0]PETSC ERROR: This is not a uniprocessor test > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.7.5, unknown > [0]PETSC ERROR: ex59 on a linux-gnu-c-debug named sauternes by vitse Tue > Mar 28 09:59:10 2017 > [0]PETSC ERROR: Configure options PETSC_DIR=/u/vitse/Documents/softs/petsc-3.7.5 > PETSC_ARCH=linux-gnu-c-debug --with-cc=gcc --with-cxx=g++ > --with-fc=gfortran --download-fblaslapack=1 --download-mpich=1 > --download-superflu-dist=1 --download-parmetis=1 --download-metis=1 > --download-ptscotch=1 --with-pcbddc=1 --download-hypre=1 > --download-scalapack=1 --download-mumps=1 --with-pcbddc=1 > [0]PETSC ERROR: #1 InitializeDomainData() line 934 in > /ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/ > tutorials/ex59.c > [0]PETSC ERROR: #2 main() line 1009 in /ul/vitse/Documents/softs/pets > c-3.7.5/src/ksp/ksp/examples/tutorials/ex59.c > [0]PETSC ERROR: PETSc Option Table entries: > [0]PETSC ERROR: -nex 2 > [0]PETSC ERROR: -ney 2 > [0]PETSC ERROR: -npx 2 > [0]PETSC ERROR: -npy 2 > [0]PETSC ERROR: ----------------End of Error Message -------send entire > error message to petsc-maint at mcs.anl.gov---------- > application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 > [unset]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 > ------------------------------------------------------- > Primary job terminated normally, but 1 process returned > a non-zero exit code.. Per user-direction, the job has been aborted. > ------------------------------------------------------- > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: > [0]PETSC ERROR: This is not a uniprocessor test > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.7.5, unknown > [0]PETSC ERROR: ex59 on a linux-gnu-c-debug named sauternes by vitse Tue > Mar 28 09:59:10 2017 > [0]PETSC ERROR: Configure options PETSC_DIR=/u/vitse/Documents/softs/petsc-3.7.5 > PETSC_ARCH=linux-gnu-c-debug --with-cc=gcc --with-cxx=g++ > --with-fc=gfortran --download-fblaslapack=1 --download-mpich=1 > --download-superflu-dist=1 --download-parmetis=1 --download-metis=1 > --download-ptscotch=1 --with-pcbddc=1 --download-hypre=1 > --download-scalapack=1 --download-mumps=1 --with-pcbddc=1 > [0]PETSC ERROR: #1 InitializeDomainData() line 934 in > /ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/ > tutorials/ex59.c > [0]PETSC ERROR: #2 main() line 1009 in /ul/vitse/Documents/softs/pets > c-3.7.5/src/ksp/ksp/examples/tutorials/ex59.c > [0]PETSC ERROR: PETSc Option Table entries: > [0]PETSC ERROR: -nex 2 > [0]PETSC ERROR: -ney 2 > [0]PETSC ERROR: -npx 2 > [0]PETSC ERROR: -npy 2 > [0]PETSC ERROR: ----------------End of Error Message -------send entire > error message to petsc-maint at mcs.anl.gov---------- > application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 > [unset]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: > [0]PETSC ERROR: This is not a uniprocessor test > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.7.5, unknown > [0]PETSC ERROR: ex59 on a linux-gnu-c-debug named sauternes by vitse Tue > Mar 28 09:59:10 2017 > [0]PETSC ERROR: Configure options PETSC_DIR=/u/vitse/Documents/softs/petsc-3.7.5 > PETSC_ARCH=linux-gnu-c-debug --with-cc=gcc --with-cxx=g++ > --with-fc=gfortran --download-fblaslapack=1 --download-mpich=1 > --download-superflu-dist=1 --download-parmetis=1 --download-metis=1 > --download-ptscotch=1 --with-pcbddc=1 --download-hypre=1 > --download-scalapack=1 --download-mumps=1 --with-pcbddc=1 > [0]PETSC ERROR: #1 InitializeDomainData() line 934 in > /ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/ > tutorials/ex59.c > [0]PETSC ERROR: #2 main() line 1009 in /ul/vitse/Documents/softs/pets > c-3.7.5/src/ksp/ksp/examples/tutorials/ex59.c > [0]PETSC ERROR: PETSc Option Table entries: > [0]PETSC ERROR: -nex 2 > [0]PETSC ERROR: -ney 2 > [0]PETSC ERROR: -npx 2 > [0]PETSC ERROR: -npy 2 > [0]PETSC ERROR: ----------------End of Error Message -------send entire > error message to petsc-maint at mcs.anl.gov---------- > application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 > [unset]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: > [0]PETSC ERROR: This is not a uniprocessor test > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.7.5, unknown > [0]PETSC ERROR: ex59 on a linux-gnu-c-debug named sauternes by vitse Tue > Mar 28 09:59:10 2017 > [0]PETSC ERROR: Configure options PETSC_DIR=/u/vitse/Documents/softs/petsc-3.7.5 > PETSC_ARCH=linux-gnu-c-debug --with-cc=gcc --with-cxx=g++ > --with-fc=gfortran --download-fblaslapack=1 --download-mpich=1 > --download-superflu-dist=1 --download-parmetis=1 --download-metis=1 > --download-ptscotch=1 --with-pcbddc=1 --download-hypre=1 > --download-scalapack=1 --download-mumps=1 --with-pcbddc=1 > [0]PETSC ERROR: #1 InitializeDomainData() line 934 in > /ul/vitse/Documents/softs/petsc-3.7.5/src/ksp/ksp/examples/ > tutorials/ex59.c > [0]PETSC ERROR: #2 main() line 1009 in /ul/vitse/Documents/softs/pets > c-3.7.5/src/ksp/ksp/examples/tutorials/ex59.c > [0]PETSC ERROR: PETSc Option Table entries: > [0]PETSC ERROR: -nex 2 > [0]PETSC ERROR: -ney 2 > [0]PETSC ERROR: -npx 2 > [0]PETSC ERROR: -npy 2 > [0]PETSC ERROR: ----------------End of Error Message -------send entire > error message to petsc-maint at mcs.anl.gov---------- > application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 > [unset]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 83) - process 0 > -------------------------------------------------------------------------- > mpiexec detected that one or more processes exited with non-zero status, > thus causing > the job to be terminated. The first process to do so was: > > Process name: [[40774,1],1] > Exit code: 83 > -------------------------------------------------------------------------- > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From C.Klaij at marin.nl Tue Mar 28 08:19:07 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Tue, 28 Mar 2017 13:19:07 +0000 Subject: [petsc-users] Fw: left and right preconditioning with a constant null space In-Reply-To: <1490692259748.29949@marin.nl> References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl> <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> <1490283421032.72098@marin.nl> <5fabe67b-2375-1bf7-ee28-2e3424b31c1a@imperial.ac.uk> <1490342727455.65990@marin.nl> <1490358872715.1@marin.nl> <1490368299379.90828@marin.nl> <2BE9B65A-4F9A-4F4F-9764-79EA73BA767D@mcs.anl.gov> <1490599399080.49416@marin.nl>, <3CBF06AA-D835-4906-A780-476823CF55B8@mcs.anl.gov>, <1490692259748.29949@marin.nl> Message-ID: <1490707147172.1897@marin.nl> Barry, That seems by far the best way to proceed! As a user I'm responsible for the velocity-pressure matrix and its null space, all the rest is up to PCFieldSplit. But unfortunately it doesn't work: I've constructed the null space [u,p]=[0,1], attached it to the velocity-pressure matrix and verified it by MatNullSpaceTest. I'm making sure the rhs is consistent with "MatNullSpaceRemove". However, the null space doesn't seem to propagate to the Schur complement, which therefore doesn't converge, see attachment "out1". When I attach the constant null space directly to A11, it does reach the Schur complement and I do get convergence, see attachment "out2". Chris ________________________________________ From: Barry Smith Sent: Monday, March 27, 2017 6:35 PM To: Klaij, Christiaan Cc: Lawrence Mitchell; Matthew Knepley; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] left and right preconditioning with a constant null space > On Mar 27, 2017, at 2:23 AM, Klaij, Christiaan wrote: > > Barry, > > I removed the null space from the rhs in the debug program that I > wrote to just solve Sp x = b once. In this debug program I've > constructed Sp myself after reading in the four blocks from the > real program. So this is independent of PCFieldSplit. Indeed I > also see bad convergence when using pc_type svd for this debug > program unless I remove the null space from the rhs. > > So far I haven't managed to translate any of this to the real > program. > > - Setting the null space for Sp in the real program seems to work > by happy accident, but Lawrence gave me the hint to > use "PetscObjectCompose" to set the nullspace using is1. > > - I still have to understand Lawrence's hint and Matt's comment > about MatSetTransposeNullSpace. > > - I'm not sure how to remove the null space from the rhs vector > in the real pogram, since I have one rhs vector with both > velocity and pressure and the null space only refers to the > pressure part. Any hints? > > - Or should I set the null space for the velocity-pressure matrix > itself, instead of the Schur complement? I would first check if the entire full velocity-pressure right hand side is consistent. If it is not you can make it consistent by removing the transpose null space. You can use MatCreateNullSpace() to create the null space by passing in a vector that is constant on all the pressure variables and 0 on the velocity variables. Barry > > - Besides this, I'm also wondering why the rhs would be > inconsistent in the first place, it's hard to understand from > the discretization. > > Thanks for your reply, > Chris > > > > dr. ir. 
Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl > > MARIN news: http://www.marin.nl/web/News/News-items/Comfort-and-Safety-at-Sea-March-29-Rotterdam.htm > > ________________________________________ > From: Barry Smith > Sent: Saturday, March 25, 2017 1:29 AM > To: Klaij, Christiaan > Cc: Lawrence Mitchell; Matthew Knepley; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] left and right preconditioning with a constant null space > >> On Mar 24, 2017, at 10:11 AM, Klaij, Christiaan wrote: >> >> I've written a small PETSc program that loads the four blocks, >> constructs Sp, attaches the null space and solves with a random >> rhs vector. >> >> This small program replicates the same behaviour as the real >> code: convergence in the preconditioned norm, stagnation in the >> unpreconditioned norm. >> >> But when I add a call to remove the null space from the rhs >> vector ("MatNullSpaceRemove"), > > Are you removing the null space from the original full right hand side or inside the solver for the Schur complement problem? > > Note that if instead of using PCFIELDSPLIT you use some other simpler PC you should also see bad convergence, do you? Even if you use -pc_type svd you should see bad convergence? > > > >> I do get convergence in both >> norms! Clearly, the real code must somehow produce an >> inconsistent rhs vector. So the problem is indeed somewhere else >> and not in PCFieldSplit. >> >> Chris >> >> >> >> dr. ir. Christiaan Klaij | Senior Researcher | Research & Development >> MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl >> >> MARIN news: http://www.marin.nl/web/News/News-items/Meet-us-again-at-the-OTC-2017.htm >> >> ________________________________________ >> From: Klaij, Christiaan >> Sent: Friday, March 24, 2017 1:34 PM >> To: Lawrence Mitchell; Matthew Knepley >> Cc: petsc-users at mcs.anl.gov >> Subject: Re: [petsc-users] left and right preconditioning with a constant null space >> >> I've also loaded the four blocks into matlab, computed >> >> Sp = A11 - A10 inv(diag(A00)) A01 >> >> and confirmed that Sp has indeed a constant null space. >> >> Chris >> ________________________________________ >> From: Klaij, Christiaan >> Sent: Friday, March 24, 2017 9:05 AM >> To: Lawrence Mitchell; Matthew Knepley >> Cc: petsc-users at mcs.anl.gov >> Subject: Re: [petsc-users] left and right preconditioning with a constant null space >> >> Lawrence, >> >> I think you mean "-fieldsplit_1_mat_null_space_test"? This >> doesn't return any info, should it? Anyway, I've added a "call >> MatNullSpaceTest" to the code which returns "true" for the null >> space of A11. >> >> I also tried to run with "-fieldsplit_1_ksp_constant_null_space" >> so that the null space is only attached to S (and not to >> A11). Unfortunately, the behaviour is still the same: convergence >> in the preconditioned norm only. >> >> Chris >> ________________________________________ >> From: Lawrence Mitchell >> Sent: Thursday, March 23, 2017 4:52 PM >> To: Klaij, Christiaan; Matthew Knepley >> Cc: petsc-users at mcs.anl.gov >> Subject: Re: [petsc-users] left and right preconditioning with a constant null space >> >> On 23/03/17 15:37, Klaij, Christiaan wrote: >>> Yes, that's clearer, thanks! I do have is0 and is1 so I can try >>> PetscObjectCompose and let you know. >>> >>> Note though that the viewer reports that both S and A11 have a >>> null space attached... 
My matrix is a matnest and I've attached a >>> null space to A11, so the latter works as expected. But is the viewer >>> wrong for S? >> >> No, I think this is a consequence of using a matnest and attaching a >> nullspace to A11. In that case you sort of "can" set a nullspace on >> the submatrix returned in MatCreateSubMatrix(Amat, is1, is1), because >> you just get a reference. But if you switched to AIJ then you would >> no longer get this. >> >> So it happens that the nullspace you set on A11 /is/ transferred over >> to S, but this is luck, rather than design. >> >> So maybe there is something else wrong. Perhaps you can run with >> -fieldsplit_1_ksp_test_null_space to check the nullspace matches >> correctly? >> >> Lawrence >> > -------------- next part -------------- A non-text attachment was scrubbed... Name: out1 Type: application/octet-stream Size: 15046 bytes Desc: out1 URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: out2 Type: application/octet-stream Size: 13356 bytes Desc: out2 URL: From knepley at gmail.com Tue Mar 28 08:27:59 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 28 Mar 2017 08:27:59 -0500 Subject: [petsc-users] Fw: left and right preconditioning with a constant null space In-Reply-To: <1490707147172.1897@marin.nl> References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl> <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> <1490283421032.72098@marin.nl> <5fabe67b-2375-1bf7-ee28-2e3424b31c1a@imperial.ac.uk> <1490342727455.65990@marin.nl> <1490358872715.1@marin.nl> <1490368299379.90828@marin.nl> <2BE9B65A-4F9A-4F4F-9764-79EA73BA767D@mcs.anl.gov> <1490599399080.49416@marin.nl> <3CBF06AA-D835-4906-A780-476823CF55B8@mcs.anl.gov> <1490692259748.29949@marin.nl> <1490707147172.1897@marin.nl> Message-ID: On Tue, Mar 28, 2017 at 8:19 AM, Klaij, Christiaan wrote: > Barry, > > That seems by far the best way to proceed! As a user I'm > responsible for the velocity-pressure matrix and its null space, > all the rest is up to PCFieldSplit. But unfortunately it doesn't > work: > > I've constructed the null space [u,p]=[0,1], attached it to the > velocity-pressure matrix and verified it by MatNullSpaceTest. I'm > making sure the rhs is consistent with "MatNullSpaceRemove". > > However, the null space doesn't seem to propagate to the Schur > complement, which therefore doesn't converge, see > attachment "out1". > > When I attach the constant null space directly to A11, it does > reach the Schur complement and I do get convergence, see > attachment "out2". > So you attach a null space vector to the large matrix, and have a consistent rhs? This is not quite what we want. If you a) Had a consistent rhs and attached the constant nullspace vector to the pressure IS, then things will work b) Had a consistent rhs and attached the constant nullspace vector to the "field" object from a DM, it should work c) Attached the global nullspace vector to A^T and the constant nullspace to the pressure IS, it should work We can't really pull apart the global null vector because there is no guarantee that its the nullspace of the submatrix. 
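A minimal C sketch of option (a), following Lawrence's earlier PetscObjectCompose hint: compose a constant null space on the pressure index set so PCFieldSplit can pass it to the Schur complement it builds internally. The names pc and is_p are placeholders and are not code from this thread; error checking is trimmed.

#include <petscksp.h>

/* Option (a) sketch: attach a constant null space to the pressure IS itself.
   "pc" is assumed to be a PCFIELDSPLIT preconditioner, "is_p" the pressure IS. */
PetscErrorCode AttachConstantNullSpaceToPressureIS(PC pc, IS is_p)
{
  MatNullSpace   nsp;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PCFieldSplitSetIS(pc, "1", is_p);CHKERRQ(ierr);              /* register the pressure split */
  ierr = MatNullSpaceCreate(PetscObjectComm((PetscObject)is_p),
                            PETSC_TRUE, 0, NULL, &nsp);CHKERRQ(ierr); /* constant-only null space */
  ierr = PetscObjectCompose((PetscObject)is_p, "nullspace",
                            (PetscObject)nsp);CHKERRQ(ierr);          /* key that PCFieldSplit queries */
  ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);                     /* the IS keeps its own reference */
  PetscFunctionReturn(0);
}

The velocity split is registered the same way with its own IS; the point is that the null space travels with the pressure IS rather than with the assembled matnest block, so it survives a switch to AIJ.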
Thanks, Matt > Chris > > ________________________________________ > From: Barry Smith > Sent: Monday, March 27, 2017 6:35 PM > To: Klaij, Christiaan > Cc: Lawrence Mitchell; Matthew Knepley; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] left and right preconditioning with a constant > null space > > > On Mar 27, 2017, at 2:23 AM, Klaij, Christiaan wrote: > > > > Barry, > > > > I removed the null space from the rhs in the debug program that I > > wrote to just solve Sp x = b once. In this debug program I've > > constructed Sp myself after reading in the four blocks from the > > real program. So this is independent of PCFieldSplit. Indeed I > > also see bad convergence when using pc_type svd for this debug > > program unless I remove the null space from the rhs. > > > > So far I haven't managed to translate any of this to the real > > program. > > > > - Setting the null space for Sp in the real program seems to work > > by happy accident, but Lawrence gave me the hint to > > use "PetscObjectCompose" to set the nullspace using is1. > > > > - I still have to understand Lawrence's hint and Matt's comment > > about MatSetTransposeNullSpace. > > > > - I'm not sure how to remove the null space from the rhs vector > > in the real pogram, since I have one rhs vector with both > > velocity and pressure and the null space only refers to the > > pressure part. Any hints? > > > > - Or should I set the null space for the velocity-pressure matrix > > itself, instead of the Schur complement? > > I would first check if the entire full velocity-pressure right hand > side is consistent. If it is not you can make it consistent by removing the > transpose null space. You can use MatCreateNullSpace() to create the null > space by passing in a vector that is constant on all the pressure variables > and 0 on the velocity variables. > > Barry > > > > > - Besides this, I'm also wondering why the rhs would be > > inconsistent in the first place, it's hard to understand from > > the discretization. > > > > Thanks for your reply, > > Chris > > > > > > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | > http://www.marin.nl > > > > MARIN news: http://www.marin.nl/web/News/News-items/Comfort-and-Safety- > at-Sea-March-29-Rotterdam.htm > > > > ________________________________________ > > From: Barry Smith > > Sent: Saturday, March 25, 2017 1:29 AM > > To: Klaij, Christiaan > > Cc: Lawrence Mitchell; Matthew Knepley; petsc-users at mcs.anl.gov > > Subject: Re: [petsc-users] left and right preconditioning with a > constant null space > > > >> On Mar 24, 2017, at 10:11 AM, Klaij, Christiaan > wrote: > >> > >> I've written a small PETSc program that loads the four blocks, > >> constructs Sp, attaches the null space and solves with a random > >> rhs vector. > >> > >> This small program replicates the same behaviour as the real > >> code: convergence in the preconditioned norm, stagnation in the > >> unpreconditioned norm. > >> > >> But when I add a call to remove the null space from the rhs > >> vector ("MatNullSpaceRemove"), > > > > Are you removing the null space from the original full right hand side > or inside the solver for the Schur complement problem? > > > > Note that if instead of using PCFIELDSPLIT you use some other simpler > PC you should also see bad convergence, do you? Even if you use -pc_type > svd you should see bad convergence? > > > > > > > >> I do get convergence in both > >> norms! 
Clearly, the real code must somehow produce an > >> inconsistent rhs vector. So the problem is indeed somewhere else > >> and not in PCFieldSplit. > >> > >> Chris > >> > >> > >> > >> dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > >> MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | > http://www.marin.nl > >> > >> MARIN news: http://www.marin.nl/web/News/News-items/Meet-us-again-at- > the-OTC-2017.htm > >> > >> ________________________________________ > >> From: Klaij, Christiaan > >> Sent: Friday, March 24, 2017 1:34 PM > >> To: Lawrence Mitchell; Matthew Knepley > >> Cc: petsc-users at mcs.anl.gov > >> Subject: Re: [petsc-users] left and right preconditioning with a > constant null space > >> > >> I've also loaded the four blocks into matlab, computed > >> > >> Sp = A11 - A10 inv(diag(A00)) A01 > >> > >> and confirmed that Sp has indeed a constant null space. > >> > >> Chris > >> ________________________________________ > >> From: Klaij, Christiaan > >> Sent: Friday, March 24, 2017 9:05 AM > >> To: Lawrence Mitchell; Matthew Knepley > >> Cc: petsc-users at mcs.anl.gov > >> Subject: Re: [petsc-users] left and right preconditioning with a > constant null space > >> > >> Lawrence, > >> > >> I think you mean "-fieldsplit_1_mat_null_space_test"? This > >> doesn't return any info, should it? Anyway, I've added a "call > >> MatNullSpaceTest" to the code which returns "true" for the null > >> space of A11. > >> > >> I also tried to run with "-fieldsplit_1_ksp_constant_null_space" > >> so that the null space is only attached to S (and not to > >> A11). Unfortunately, the behaviour is still the same: convergence > >> in the preconditioned norm only. > >> > >> Chris > >> ________________________________________ > >> From: Lawrence Mitchell > >> Sent: Thursday, March 23, 2017 4:52 PM > >> To: Klaij, Christiaan; Matthew Knepley > >> Cc: petsc-users at mcs.anl.gov > >> Subject: Re: [petsc-users] left and right preconditioning with a > constant null space > >> > >> On 23/03/17 15:37, Klaij, Christiaan wrote: > >>> Yes, that's clearer, thanks! I do have is0 and is1 so I can try > >>> PetscObjectCompose and let you know. > >>> > >>> Note though that the viewer reports that both S and A11 have a > >>> null space attached... My matrix is a matnest and I've attached a > >>> null space to A11, so the latter works as expected. But is the viewer > >>> wrong for S? > >> > >> No, I think this is a consequence of using a matnest and attaching a > >> nullspace to A11. In that case you sort of "can" set a nullspace on > >> the submatrix returned in MatCreateSubMatrix(Amat, is1, is1), because > >> you just get a reference. But if you switched to AIJ then you would > >> no longer get this. > >> > >> So it happens that the nullspace you set on A11 /is/ transferred over > >> to S, but this is luck, rather than design. > >> > >> So maybe there is something else wrong. Perhaps you can run with > >> -fieldsplit_1_ksp_test_null_space to check the nullspace matches > >> correctly? > >> > >> Lawrence > >> > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From C.Klaij at marin.nl Tue Mar 28 08:54:43 2017 From: C.Klaij at marin.nl (Klaij, Christiaan) Date: Tue, 28 Mar 2017 13:54:43 +0000 Subject: [petsc-users] Fw: left and right preconditioning with a constant null space In-Reply-To: References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl> <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> <1490283421032.72098@marin.nl> <5fabe67b-2375-1bf7-ee28-2e3424b31c1a@imperial.ac.uk> <1490342727455.65990@marin.nl> <1490358872715.1@marin.nl> <1490368299379.90828@marin.nl> <2BE9B65A-4F9A-4F4F-9764-79EA73BA767D@mcs.anl.gov> <1490599399080.49416@marin.nl> <3CBF06AA-D835-4906-A780-476823CF55B8@mcs.anl.gov> <1490692259748.29949@marin.nl> <1490707147172.1897@marin.nl>, Message-ID: <1490709283484.12325@marin.nl> Matt, Yes, null space vector attached to the large matrix and consistent rhs. This seems to be what Barry wants (or I misunderstood his previous email) a) that was Lawrence's suggestion as well, using petscObjectCompose, but that doesn't seem to work in fortran as I reported earlier. b) Good to know, but so far I don't have a DM. c) same problem as a) I understand your last point about pulling apart the global null vector. Then again how would a user know the null space of any submatrices that arise somewhere within PCFieldSplit? Chris dr. ir. Christiaan Klaij | Senior Researcher | Research & Development MARIN | T +31 317 49 33 44 | C.Klaij at marin.nl | www.marin.nl [LinkedIn] [YouTube] [Twitter] [Facebook] MARIN news: Meet us again at the OTC 2017 ________________________________ From: Matthew Knepley Sent: Tuesday, March 28, 2017 3:27 PM To: Klaij, Christiaan Cc: Lawrence Mitchell; petsc-users at mcs.anl.gov Subject: Re: Fw: [petsc-users] left and right preconditioning with a constant null space On Tue, Mar 28, 2017 at 8:19 AM, Klaij, Christiaan > wrote: Barry, That seems by far the best way to proceed! As a user I'm responsible for the velocity-pressure matrix and its null space, all the rest is up to PCFieldSplit. But unfortunately it doesn't work: I've constructed the null space [u,p]=[0,1], attached it to the velocity-pressure matrix and verified it by MatNullSpaceTest. I'm making sure the rhs is consistent with "MatNullSpaceRemove". However, the null space doesn't seem to propagate to the Schur complement, which therefore doesn't converge, see attachment "out1". When I attach the constant null space directly to A11, it does reach the Schur complement and I do get convergence, see attachment "out2". So you attach a null space vector to the large matrix, and have a consistent rhs? This is not quite what we want. If you a) Had a consistent rhs and attached the constant nullspace vector to the pressure IS, then things will work b) Had a consistent rhs and attached the constant nullspace vector to the "field" object from a DM, it should work c) Attached the global nullspace vector to A^T and the constant nullspace to the pressure IS, it should work We can't really pull apart the global null vector because there is no guarantee that its the nullspace of the submatrix. 
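For reference, a minimal C sketch of the whole-system setup described above: a null vector that is 0 on the velocity unknowns and constant on the pressure unknowns, attached to the full velocity-pressure matrix, with the rhs projected to stay consistent. The names A, b and is_p are placeholders, is_p is assumed to hold the locally owned pressure indices, and the routine quoted earlier as MatCreateNullSpace() is spelled MatNullSpaceCreate() in the PETSc API; error checking is trimmed.

#include <petscksp.h>

/* Whole-system sketch: null vector [u,p] = [0,1] attached to the full matrix,
   rhs made consistent by projecting out the (transpose) null space. */
PetscErrorCode SetFullSystemNullSpace(Mat A, Vec b, IS is_p)
{
  Vec            v;
  MatNullSpace   nsp;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatCreateVecs(A, &v, NULL);CHKERRQ(ierr);
  ierr = VecSet(v, 0.0);CHKERRQ(ierr);
  ierr = VecISSet(v, is_p, 1.0);CHKERRQ(ierr);           /* 1 on pressure dofs, 0 on velocity dofs */
  ierr = VecNormalize(v, NULL);CHKERRQ(ierr);            /* null space vectors must have unit norm */
  ierr = MatNullSpaceCreate(PetscObjectComm((PetscObject)A),
                            PETSC_FALSE, 1, &v, &nsp);CHKERRQ(ierr);
  ierr = MatSetNullSpace(A, nsp);CHKERRQ(ierr);
  ierr = MatSetTransposeNullSpace(A, nsp);CHKERRQ(ierr); /* consistency of b is with respect to null(A^T) */
  ierr = MatNullSpaceRemove(nsp, b);CHKERRQ(ierr);       /* make the rhs consistent */
  ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);
  ierr = VecDestroy(&v);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

As noted just above, attaching the null space to the assembled matrix alone does not give the Schur complement its constant null space, which is exactly the behaviour reported in this thread; the split IS (or a DM field) still has to carry the constant null space.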
Thanks, Matt Chris ________________________________________ From: Barry Smith > Sent: Monday, March 27, 2017 6:35 PM To: Klaij, Christiaan Cc: Lawrence Mitchell; Matthew Knepley; petsc-users at mcs.anl.gov Subject: Re: [petsc-users] left and right preconditioning with a constant null space > On Mar 27, 2017, at 2:23 AM, Klaij, Christiaan > wrote: > > Barry, > > I removed the null space from the rhs in the debug program that I > wrote to just solve Sp x = b once. In this debug program I've > constructed Sp myself after reading in the four blocks from the > real program. So this is independent of PCFieldSplit. Indeed I > also see bad convergence when using pc_type svd for this debug > program unless I remove the null space from the rhs. > > So far I haven't managed to translate any of this to the real > program. > > - Setting the null space for Sp in the real program seems to work > by happy accident, but Lawrence gave me the hint to > use "PetscObjectCompose" to set the nullspace using is1. > > - I still have to understand Lawrence's hint and Matt's comment > about MatSetTransposeNullSpace. > > - I'm not sure how to remove the null space from the rhs vector > in the real pogram, since I have one rhs vector with both > velocity and pressure and the null space only refers to the > pressure part. Any hints? > > - Or should I set the null space for the velocity-pressure matrix > itself, instead of the Schur complement? I would first check if the entire full velocity-pressure right hand side is consistent. If it is not you can make it consistent by removing the transpose null space. You can use MatCreateNullSpace() to create the null space by passing in a vector that is constant on all the pressure variables and 0 on the velocity variables. Barry > > - Besides this, I'm also wondering why the rhs would be > inconsistent in the first place, it's hard to understand from > the discretization. > > Thanks for your reply, > Chris > > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl > > MARIN news: http://www.marin.nl/web/News/News-items/Comfort-and-Safety-at-Sea-March-29-Rotterdam.htm > > ________________________________________ > From: Barry Smith > > Sent: Saturday, March 25, 2017 1:29 AM > To: Klaij, Christiaan > Cc: Lawrence Mitchell; Matthew Knepley; petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] left and right preconditioning with a constant null space > >> On Mar 24, 2017, at 10:11 AM, Klaij, Christiaan > wrote: >> >> I've written a small PETSc program that loads the four blocks, >> constructs Sp, attaches the null space and solves with a random >> rhs vector. >> >> This small program replicates the same behaviour as the real >> code: convergence in the preconditioned norm, stagnation in the >> unpreconditioned norm. >> >> But when I add a call to remove the null space from the rhs >> vector ("MatNullSpaceRemove"), > > Are you removing the null space from the original full right hand side or inside the solver for the Schur complement problem? > > Note that if instead of using PCFIELDSPLIT you use some other simpler PC you should also see bad convergence, do you? Even if you use -pc_type svd you should see bad convergence? > > > >> I do get convergence in both >> norms! Clearly, the real code must somehow produce an >> inconsistent rhs vector. So the problem is indeed somewhere else >> and not in PCFieldSplit. >> >> Chris >> >> >> >> dr. ir. 
Christiaan Klaij | Senior Researcher | Research & Development >> MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | http://www.marin.nl >> >> MARIN news: http://www.marin.nl/web/News/News-items/Meet-us-again-at-the-OTC-2017.htm >> >> ________________________________________ >> From: Klaij, Christiaan >> Sent: Friday, March 24, 2017 1:34 PM >> To: Lawrence Mitchell; Matthew Knepley >> Cc: petsc-users at mcs.anl.gov >> Subject: Re: [petsc-users] left and right preconditioning with a constant null space >> >> I've also loaded the four blocks into matlab, computed >> >> Sp = A11 - A10 inv(diag(A00)) A01 >> >> and confirmed that Sp has indeed a constant null space. >> >> Chris >> ________________________________________ >> From: Klaij, Christiaan >> Sent: Friday, March 24, 2017 9:05 AM >> To: Lawrence Mitchell; Matthew Knepley >> Cc: petsc-users at mcs.anl.gov >> Subject: Re: [petsc-users] left and right preconditioning with a constant null space >> >> Lawrence, >> >> I think you mean "-fieldsplit_1_mat_null_space_test"? This >> doesn't return any info, should it? Anyway, I've added a "call >> MatNullSpaceTest" to the code which returns "true" for the null >> space of A11. >> >> I also tried to run with "-fieldsplit_1_ksp_constant_null_space" >> so that the null space is only attached to S (and not to >> A11). Unfortunately, the behaviour is still the same: convergence >> in the preconditioned norm only. >> >> Chris >> ________________________________________ >> From: Lawrence Mitchell > >> Sent: Thursday, March 23, 2017 4:52 PM >> To: Klaij, Christiaan; Matthew Knepley >> Cc: petsc-users at mcs.anl.gov >> Subject: Re: [petsc-users] left and right preconditioning with a constant null space >> >> On 23/03/17 15:37, Klaij, Christiaan wrote: >>> Yes, that's clearer, thanks! I do have is0 and is1 so I can try >>> PetscObjectCompose and let you know. >>> >>> Note though that the viewer reports that both S and A11 have a >>> null space attached... My matrix is a matnest and I've attached a >>> null space to A11, so the latter works as expected. But is the viewer >>> wrong for S? >> >> No, I think this is a consequence of using a matnest and attaching a >> nullspace to A11. In that case you sort of "can" set a nullspace on >> the submatrix returned in MatCreateSubMatrix(Amat, is1, is1), because >> you just get a reference. But if you switched to AIJ then you would >> no longer get this. >> >> So it happens that the nullspace you set on A11 /is/ transferred over >> to S, but this is luck, rather than design. >> >> So maybe there is something else wrong. Perhaps you can run with >> -fieldsplit_1_ksp_test_null_space to check the nullspace matches >> correctly? >> >> Lawrence >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image47e52a.PNG Type: image/png Size: 293 bytes Desc: image47e52a.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image50d565.PNG Type: image/png Size: 331 bytes Desc: image50d565.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imageb068ec.PNG Type: image/png Size: 333 bytes Desc: imageb068ec.PNG URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image16844b.PNG Type: image/png Size: 253 bytes Desc: image16844b.PNG URL: From knepley at gmail.com Tue Mar 28 08:58:25 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 28 Mar 2017 08:58:25 -0500 Subject: [petsc-users] Fw: left and right preconditioning with a constant null space In-Reply-To: <1490709283484.12325@marin.nl> References: <1490194734290.31169@marin.nl> <1490198682595.54991@marin.nl> <1490258567983.93394@marin.nl> <30a7326d-91df-4822-fddc-e71c3e1417ec@imperial.ac.uk> <1490283421032.72098@marin.nl> <5fabe67b-2375-1bf7-ee28-2e3424b31c1a@imperial.ac.uk> <1490342727455.65990@marin.nl> <1490358872715.1@marin.nl> <1490368299379.90828@marin.nl> <2BE9B65A-4F9A-4F4F-9764-79EA73BA767D@mcs.anl.gov> <1490599399080.49416@marin.nl> <3CBF06AA-D835-4906-A780-476823CF55B8@mcs.anl.gov> <1490692259748.29949@marin.nl> <1490707147172.1897@marin.nl> <1490709283484.12325@marin.nl> Message-ID: On Tue, Mar 28, 2017 at 8:54 AM, Klaij, Christiaan wrote: > Matt, > > Yes, null space vector attached to the large matrix and > consistent rhs. This seems to be what Barry wants (or I > misunderstood his previous email) > > a) that was Lawrence's suggestion as well, using > petscObjectCompose, but that doesn't seem to work in fortran as I > reported earlier. > This is just because there is a string there and we are lazy. I will do it as soon as I can. > b) Good to know, but so far I don't have a DM. > > c) same problem as a) > > I understand your last point about pulling apart the global null > vector. Then again how would a user know the null space of any > submatrices that arise somewhere within PCFieldSplit? > Because the user is specifying the split. I agree that if FS were deciding how to split on its own, this is not possible, but that is why it can't. It has to be told. Thanks, Matt > Chris > > > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development > MARIN | T +31 317 49 33 44 <+31%20317%20493%20344> | C.Klaij at marin.nl | > www.marin.nl > > [image: LinkedIn] [image: > YouTube] [image: Twitter] > [image: Facebook] > > MARIN news: Meet us again at the OTC 2017 > > > ------------------------------ > *From:* Matthew Knepley > *Sent:* Tuesday, March 28, 2017 3:27 PM > *To:* Klaij, Christiaan > *Cc:* Lawrence Mitchell; petsc-users at mcs.anl.gov > *Subject:* Re: Fw: [petsc-users] left and right preconditioning with a > constant null space > > On Tue, Mar 28, 2017 at 8:19 AM, Klaij, Christiaan > wrote: > >> Barry, >> >> That seems by far the best way to proceed! As a user I'm >> responsible for the velocity-pressure matrix and its null space, >> all the rest is up to PCFieldSplit. But unfortunately it doesn't >> work: >> >> I've constructed the null space [u,p]=[0,1], attached it to the >> velocity-pressure matrix and verified it by MatNullSpaceTest. I'm >> making sure the rhs is consistent with "MatNullSpaceRemove". >> >> However, the null space doesn't seem to propagate to the Schur >> complement, which therefore doesn't converge, see >> attachment "out1". >> >> When I attach the constant null space directly to A11, it does >> reach the Schur complement and I do get convergence, see >> attachment "out2". >> > > So you attach a null space vector to the large matrix, and have a > consistent rhs? > This is not quite what we want. 
If you > > a) Had a consistent rhs and attached the constant nullspace vector to > the pressure IS, then things will work > > b) Had a consistent rhs and attached the constant nullspace vector to > the "field" object from a DM, it should work > > c) Attached the global nullspace vector to A^T and the constant > nullspace to the pressure IS, it should work > > We can't really pull apart the global null vector because there is no > guarantee that its the nullspace of the submatrix. > > Thanks, > > Matt > > >> Chris >> >> ________________________________________ >> From: Barry Smith >> Sent: Monday, March 27, 2017 6:35 PM >> To: Klaij, Christiaan >> Cc: Lawrence Mitchell; Matthew Knepley; petsc-users at mcs.anl.gov >> Subject: Re: [petsc-users] left and right preconditioning with a constant >> null space >> >> > On Mar 27, 2017, at 2:23 AM, Klaij, Christiaan >> wrote: >> > >> > Barry, >> > >> > I removed the null space from the rhs in the debug program that I >> > wrote to just solve Sp x = b once. In this debug program I've >> > constructed Sp myself after reading in the four blocks from the >> > real program. So this is independent of PCFieldSplit. Indeed I >> > also see bad convergence when using pc_type svd for this debug >> > program unless I remove the null space from the rhs. >> > >> > So far I haven't managed to translate any of this to the real >> > program. >> > >> > - Setting the null space for Sp in the real program seems to work >> > by happy accident, but Lawrence gave me the hint to >> > use "PetscObjectCompose" to set the nullspace using is1. >> > >> > - I still have to understand Lawrence's hint and Matt's comment >> > about MatSetTransposeNullSpace. >> > >> > - I'm not sure how to remove the null space from the rhs vector >> > in the real pogram, since I have one rhs vector with both >> > velocity and pressure and the null space only refers to the >> > pressure part. Any hints? >> > >> > - Or should I set the null space for the velocity-pressure matrix >> > itself, instead of the Schur complement? >> >> I would first check if the entire full velocity-pressure right hand >> side is consistent. If it is not you can make it consistent by removing the >> transpose null space. You can use MatCreateNullSpace() to create the null >> space by passing in a vector that is constant on all the pressure variables >> and 0 on the velocity variables. >> >> Barry >> >> > >> > - Besides this, I'm also wondering why the rhs would be >> > inconsistent in the first place, it's hard to understand from >> > the discretization. >> > >> > Thanks for your reply, >> > Chris >> > >> > >> > >> > dr. ir. Christiaan Klaij | Senior Researcher | Research & Development >> > MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | >> http://www.marin.nl >> > >> > MARIN news: http://www.marin.nl/web/News/N >> ews-items/Comfort-and-Safety-at-Sea-March-29-Rotterdam.htm >> > >> > ________________________________________ >> > From: Barry Smith >> > Sent: Saturday, March 25, 2017 1:29 AM >> > To: Klaij, Christiaan >> > Cc: Lawrence Mitchell; Matthew Knepley; petsc-users at mcs.anl.gov >> > Subject: Re: [petsc-users] left and right preconditioning with a >> constant null space >> > >> >> On Mar 24, 2017, at 10:11 AM, Klaij, Christiaan >> wrote: >> >> >> >> I've written a small PETSc program that loads the four blocks, >> >> constructs Sp, attaches the null space and solves with a random >> >> rhs vector. 
>> >> >> >> This small program replicates the same behaviour as the real >> >> code: convergence in the preconditioned norm, stagnation in the >> >> unpreconditioned norm. >> >> >> >> But when I add a call to remove the null space from the rhs >> >> vector ("MatNullSpaceRemove"), >> > >> > Are you removing the null space from the original full right hand >> side or inside the solver for the Schur complement problem? >> > >> > Note that if instead of using PCFIELDSPLIT you use some other simpler >> PC you should also see bad convergence, do you? Even if you use -pc_type >> svd you should see bad convergence? >> > >> > >> > >> >> I do get convergence in both >> >> norms! Clearly, the real code must somehow produce an >> >> inconsistent rhs vector. So the problem is indeed somewhere else >> >> and not in PCFieldSplit. >> >> >> >> Chris >> >> >> >> >> >> >> >> dr. ir. Christiaan Klaij | Senior Researcher | Research & Development >> >> MARIN | T +31 317 49 33 44 | mailto:C.Klaij at marin.nl | >> http://www.marin.nl >> >> >> >> MARIN news: http://www.marin.nl/web/News/N >> ews-items/Meet-us-again-at-the-OTC-2017.htm >> >> >> >> ________________________________________ >> >> From: Klaij, Christiaan >> >> Sent: Friday, March 24, 2017 1:34 PM >> >> To: Lawrence Mitchell; Matthew Knepley >> >> Cc: petsc-users at mcs.anl.gov >> >> Subject: Re: [petsc-users] left and right preconditioning with a >> constant null space >> >> >> >> I've also loaded the four blocks into matlab, computed >> >> >> >> Sp = A11 - A10 inv(diag(A00)) A01 >> >> >> >> and confirmed that Sp has indeed a constant null space. >> >> >> >> Chris >> >> ________________________________________ >> >> From: Klaij, Christiaan >> >> Sent: Friday, March 24, 2017 9:05 AM >> >> To: Lawrence Mitchell; Matthew Knepley >> >> Cc: petsc-users at mcs.anl.gov >> >> Subject: Re: [petsc-users] left and right preconditioning with a >> constant null space >> >> >> >> Lawrence, >> >> >> >> I think you mean "-fieldsplit_1_mat_null_space_test"? This >> >> doesn't return any info, should it? Anyway, I've added a "call >> >> MatNullSpaceTest" to the code which returns "true" for the null >> >> space of A11. >> >> >> >> I also tried to run with "-fieldsplit_1_ksp_constant_null_space" >> >> so that the null space is only attached to S (and not to >> >> A11). Unfortunately, the behaviour is still the same: convergence >> >> in the preconditioned norm only. >> >> >> >> Chris >> >> ________________________________________ >> >> From: Lawrence Mitchell >> >> Sent: Thursday, March 23, 2017 4:52 PM >> >> To: Klaij, Christiaan; Matthew Knepley >> >> Cc: petsc-users at mcs.anl.gov >> >> Subject: Re: [petsc-users] left and right preconditioning with a >> constant null space >> >> >> >> On 23/03/17 15:37, Klaij, Christiaan wrote: >> >>> Yes, that's clearer, thanks! I do have is0 and is1 so I can try >> >>> PetscObjectCompose and let you know. >> >>> >> >>> Note though that the viewer reports that both S and A11 have a >> >>> null space attached... My matrix is a matnest and I've attached a >> >>> null space to A11, so the latter works as expected. But is the viewer >> >>> wrong for S? >> >> >> >> No, I think this is a consequence of using a matnest and attaching a >> >> nullspace to A11. In that case you sort of "can" set a nullspace on >> >> the submatrix returned in MatCreateSubMatrix(Amat, is1, is1), because >> >> you just get a reference. But if you switched to AIJ then you would >> >> no longer get this. 
>> >> >> >> So it happens that the nullspace you set on A11 /is/ transferred over >> >> to S, but this is luck, rather than design. >> >> >> >> So maybe there is something else wrong. Perhaps you can run with >> >> -fieldsplit_1_ksp_test_null_space to check the nullspace matches >> >> correctly? >> >> >> >> Lawrence >> >> >> > >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image16844b.PNG Type: image/png Size: 253 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image47e52a.PNG Type: image/png Size: 293 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image50d565.PNG Type: image/png Size: 331 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: imageb068ec.PNG Type: image/png Size: 333 bytes Desc: not available URL: From hng.email at gmail.com Tue Mar 28 12:42:06 2017 From: hng.email at gmail.com (Hom Nath Gharti) Date: Tue, 28 Mar 2017 13:42:06 -0400 Subject: [petsc-users] CMake and PETSc In-Reply-To: <87lgrqrtm0.fsf@jedbrown.org> References: <87lgrw23zw.fsf@jedbrown.org> <87lgrqrtm0.fsf@jedbrown.org> Message-ID: Thanks Jed. I currently fix this problem defining the PETSc path as the environment variable. I will try to use find_pacakge later. Hom On Mon, Mar 27, 2017 at 11:02 PM, Jed Brown wrote: > Hom Nath Gharti writes: > >> Thanks, Jed! I will try. I see that FindPETSc.cmake has following lines: >> >> set(PETSC_VALID_COMPONENTS >> C >> CXX) >> >> Should we add FC or similar? > > You could, but then you'd have to also add test code for that language > binding. (All this does is a test for whether the library works when > called from that language.) From aherrema at iastate.edu Tue Mar 28 14:28:05 2017 From: aherrema at iastate.edu (Austin Herrema) Date: Tue, 28 Mar 2017 14:28:05 -0500 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <93F249E6-656D-429A-96D2-E8831B406334@mcs.anl.gov> Message-ID: Gaetan, Thank you for this. With your help, I think I am getting close to getting this to work for my case. At the moment, I am hung up on the line of your makefile which reads "$(FF90) $(FF90_ALL_FLAGS) -I$(MAIN_DIR)/mod -c warpustruct-f2pywrappers2.f90". Am I correct that warpustruct-f2pywrappers2.f90 should be generated by f2py? If so, do you (or does anyone else) know the command for telling f2py to do so? At the moment I am using: f2py run_analysis.f90 -m run_analysis -h run_analysis.pyf to get the requisite .pyf and .c files, but no .f90 file. If I am wrong about the origin of this file, please do tell me! Thank you, Austin On Mon, Mar 27, 2017 at 5:13 PM, Gaetan Kenway wrote: > Austin > > Here is the full makefile for a code we use. 
The variables defined > externally in a separate config file are: > $(FF90) > $(FF90_FLAGS) > $(LIBDIR) > $(PETSC_LINKER_FLAGS) > $(LINKER_FLAGS) > $(CGNS_LINKER_FLAGS) > > $(PYTHON) > $(PYTHON-CONIFG) > $(F2PY) > (These are usually use python, python-config and f2py. You can overwrite > as necessary) > > $(CC) > $(CC_ALL_FLAGS) > > This essentially just mimics what f2py does automatically but we found it > easier to control exactly what is going on. Essentially you are just > compiling exactly as you normally an executable, but instead make a .so > (with the -shared option) and including the additional .o files generated > by compiling the f2py-generated wrappers. > > Hope this helps, > Gaetan > > On Sat, Mar 25, 2017 at 5:38 AM, Lisandro Dalcin > wrote: > >> >> >> On 22 March 2017 at 20:29, Barry Smith wrote: >> >>> >>> Lisandro, >>> >>> We've had a couple questions similar to this with f2py; is there a >>> way we could add to the PETSc/SLEPc makefile rules something to allow >>> people to trivially use f2py without having to make their own (often >>> incorrect) manual command lines? >>> >>> Thanks >>> >>> >> Barry, it is quite hard and hacky to get f2py working in the general >> case. I think the email from Gaetan in this thread proves my point. >> >> IMHO, it is easier to write a small Fortran source exposing the API to >> call using ISO_C_BINDINGS, then wrap that code with the more traditional >> C-based "static" tools (SWIG, Cython) or even "dynamically" with ctypes or >> cffi (which use dlopen'ing). >> >> >> >> -- >> Lisandro Dalcin >> ============ >> Research Scientist >> Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) >> Extreme Computing Research Center (ECRC) >> King Abdullah University of Science and Technology (KAUST) >> http://ecrc.kaust.edu.sa/ >> >> 4700 King Abdullah University of Science and Technology >> al-Khawarizmi Bldg (Bldg 1), Office # 0109 >> Thuwal 23955-6900, Kingdom of Saudi Arabia >> http://www.kaust.edu.sa >> >> Office Phone: +966 12 808-0459 <+966%2012%20808%200459> >> > > -- *Austin Herrema* PhD Student | Graduate Research Assistant | Iowa State University Wind Energy Science, Engineering, and Policy | Mechanical Engineering -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaetank at gmail.com Tue Mar 28 14:31:40 2017 From: gaetank at gmail.com (Gaetan Kenway) Date: Tue, 28 Mar 2017 12:31:40 -0700 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <93F249E6-656D-429A-96D2-E8831B406334@mcs.anl.gov> Message-ID: You only get that file if you have wrapped a module explicitly in the .pyf file. If you haven't wrapped a module, that doesn't get created. Gaetan On Tue, Mar 28, 2017 at 12:28 PM, Austin Herrema wrote: > Gaetan, > > Thank you for this. With your help, I think I am getting close to getting > this to work for my case. At the moment, I am hung up on the line of your > makefile which reads "$(FF90) $(FF90_ALL_FLAGS) -I$(MAIN_DIR)/mod -c > warpustruct-f2pywrappers2.f90". Am I correct that > warpustruct-f2pywrappers2.f90 should be generated by f2py? If so, do you > (or does anyone else) know the command for telling f2py to do so? At the > moment I am using: > > f2py run_analysis.f90 -m run_analysis -h run_analysis.pyf > > to get the requisite .pyf and .c files, but no .f90 file. If I am wrong > about the origin of this file, please do tell me! 
> > Thank you, > Austin > > On Mon, Mar 27, 2017 at 5:13 PM, Gaetan Kenway wrote: > >> Austin >> >> Here is the full makefile for a code we use. The variables defined >> externally in a separate config file are: >> $(FF90) >> $(FF90_FLAGS) >> $(LIBDIR) >> $(PETSC_LINKER_FLAGS) >> $(LINKER_FLAGS) >> $(CGNS_LINKER_FLAGS) >> >> $(PYTHON) >> $(PYTHON-CONIFG) >> $(F2PY) >> (These are usually use python, python-config and f2py. You can overwrite >> as necessary) >> >> $(CC) >> $(CC_ALL_FLAGS) >> >> This essentially just mimics what f2py does automatically but we found it >> easier to control exactly what is going on. Essentially you are just >> compiling exactly as you normally an executable, but instead make a .so >> (with the -shared option) and including the additional .o files generated >> by compiling the f2py-generated wrappers. >> >> Hope this helps, >> Gaetan >> >> On Sat, Mar 25, 2017 at 5:38 AM, Lisandro Dalcin >> wrote: >> >>> >>> >>> On 22 March 2017 at 20:29, Barry Smith wrote: >>> >>>> >>>> Lisandro, >>>> >>>> We've had a couple questions similar to this with f2py; is there a >>>> way we could add to the PETSc/SLEPc makefile rules something to allow >>>> people to trivially use f2py without having to make their own (often >>>> incorrect) manual command lines? >>>> >>>> Thanks >>>> >>>> >>> Barry, it is quite hard and hacky to get f2py working in the general >>> case. I think the email from Gaetan in this thread proves my point. >>> >>> IMHO, it is easier to write a small Fortran source exposing the API to >>> call using ISO_C_BINDINGS, then wrap that code with the more traditional >>> C-based "static" tools (SWIG, Cython) or even "dynamically" with ctypes or >>> cffi (which use dlopen'ing). >>> >>> >>> >>> -- >>> Lisandro Dalcin >>> ============ >>> Research Scientist >>> Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) >>> Extreme Computing Research Center (ECRC) >>> King Abdullah University of Science and Technology (KAUST) >>> http://ecrc.kaust.edu.sa/ >>> >>> 4700 King Abdullah University of Science and Technology >>> al-Khawarizmi Bldg (Bldg 1), Office # 0109 >>> Thuwal 23955-6900, Kingdom of Saudi Arabia >>> http://www.kaust.edu.sa >>> >>> Office Phone: +966 12 808-0459 <+966%2012%20808%200459> >>> >> >> > > > -- > *Austin Herrema* > PhD Student | Graduate Research Assistant | Iowa State University > Wind Energy Science, Engineering, and Policy | Mechanical Engineering > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpgwars at wp.pl Tue Mar 28 14:33:50 2017 From: rpgwars at wp.pl (=?ISO-8859-2?Q?=A3ukasz_Kasza?=) Date: Tue, 28 Mar 2017 21:33:50 +0200 Subject: [petsc-users] Odp: Re: Doubts regarding MatGetSubMatrices Message-ID: <58daba9ee90756.70791908@wp.pl> Yes but it will have reduced number of rows. Thank you Barry Smith it works very well :) Dnia Wtorek, 28 Marca 2017 01:56 Barry Smith napisa?(a) > > Use ISCreateStride() to indicate all the columns; MatGetSubMatrices_MPIAIJ handles this as a special case and never generates the integer list of all columns. > > Note that the resulting matrix will have the same number of columns as the original matrix. What do you want to do with this matrix? > > Barry > > So make the start of the stride 0, the step 1 and the length be the same as the number of columns in the original matrix. 
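A minimal C sketch of the stride-IS approach described above, assuming A is the parallel AIJ matrix and isrow a sequential IS with the requested rows (both placeholder names, not code from this thread); error checking is trimmed.

#include <petscmat.h>

/* Request all columns of the selected rows via a stride IS (start 0, step 1,
   length = global column count), so no explicit column list is ever built. */
PetscErrorCode GetRowsWithAllColumns(Mat A, IS isrow, Mat **submat)
{
  IS             iscol;
  PetscInt       ncols;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatGetSize(A, NULL, &ncols);CHKERRQ(ierr);
  ierr = ISCreateStride(PETSC_COMM_SELF, ncols, 0, 1, &iscol);CHKERRQ(ierr);
  ierr = MatGetSubMatrices(A, 1, &isrow, &iscol, MAT_INITIAL_MATRIX, submat);CHKERRQ(ierr);
  ierr = ISDestroy(&iscol);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

As Barry notes, the resulting sequential submatrix keeps the full column count of the original matrix; only the rows are reduced. The array returned through submat is allocated by PETSc when MAT_INITIAL_MATRIX is used and is freed later with MatDestroyMatrices().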
> > > > On Mar 27, 2017, at 4:21 PM, ?ukasz Kasza wrote: > > > > > > > > > > Dear PETSC users, > > > > Lets say that I want to use MatGetSubMatrices(Mat mat,PetscInt n,const IS irow[],const IS icol[],MatReuse scall,Mat *submat[]) and I want to get every column of the specified rows. However initially I dont know which column indexes to pass in icol, I just know that I need everything. > > > > My question is, how to implement this efficiently in parallel aij format? I could for instance pass a range of indexes from 0 to the size of the matrix, but my concern is that this way the communication cost will increase for large matrices as the request will be sent for all columns for every row in irow. Other solution would be to exchange between the processes info regarding indexes of nonzero columns and then call MatGetSubmatrices with indexes in icol only of nonzero columns. > > > > Any help much appreciated, > > Best Regards > > > > From davydden at gmail.com Tue Mar 28 15:12:45 2017 From: davydden at gmail.com (Denis Davydov) Date: Tue, 28 Mar 2017 23:12:45 +0300 Subject: [petsc-users] [3.7.5] strange config error on macOS with XCode 8.3 and Clang 8.1.0 Message-ID: <668CE3D2-B464-4AE2-82F6-F87F88D2A53B@gmail.com> Dear all, Yesterday I updated to the latest XCode and now have problems configuring PETSc (see below). I must say that a number of other packages which need MPI fortran wrappers compiled fine. Regards, Denis. ========================== Executing: /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/bin/mpif90 -c -o /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc-eLXjKy/config.setCompilers/conftest.o -I/var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc-eLXjKy/config.setCompilers -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc-eLXjKy/config.setCompilers/conftest.F Successful compile: Source: program main end Executing: /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/bin/mpif90 -c -o /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc-eLXjKy/config.setCompilers/conftest.o -I/var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc-eLXjKy/config.setCompilers -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc-eLXjKy/config.setCompilers/conftest.F Successful compile: Source: program main end Executing: /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/bin/mpif90 -o /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc-eLXjKy/config.setCompilers/conftest -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc-eLXjKy/config.setCompilers/conftest.o -lto_library -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib -lLTO -Wl,-rpath,/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.1.0-apple/hwloc-1.11.6-pkbyijayr66g3wq3hojj3l44qc7kjno3/lib -L/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.1.0-apple/hwloc-1.11.6-pkbyijayr66g3wq3hojj3l44qc7kjno3/lib 
-Wl,-rpath,/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/lib -L/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/lib -ldl -lmpi -lSystem -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.1.0/lib/darwin -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.1.0/lib/darwin -lclang_rt.osx -ldl Possible ERROR while running linker: exit code 256 stderr: ld: can't map file, errno=22 file '/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib' for architecture x86_64 collect2: error: ld returned 1 exit status Popping language FC compilers: Error message from compiling {Cannot compile/link FC with /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang-8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/bin/mpif90.} **** Configure header /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc-eLXjKy/confdefs.h **** #if !defined(INCLUDED_UNKNOWN) #define INCLUDED_UNKNOWN #ifndef PETSC_HAVE_DLFCN_H #define PETSC_HAVE_DLFCN_H 1 #endif #ifndef PETSC_HAVE_RTLD_NOW #define PETSC_HAVE_RTLD_NOW 1 #endif #ifndef PETSC_HAVE_RTLD_LOCAL #define PETSC_HAVE_RTLD_LOCAL 1 #endif #ifndef PETSC_HAVE_RTLD_LAZY #define PETSC_HAVE_RTLD_LAZY 1 #endif #ifndef PETSC_C_STATIC_INLINE #define PETSC_C_STATIC_INLINE static inline #endif #ifndef PETSC_HAVE_RTLD_GLOBAL #define PETSC_HAVE_RTLD_GLOBAL 1 #endif #ifndef PETSC_C_RESTRICT #define PETSC_C_RESTRICT restrict #endif #ifndef PETSC_HAVE_LIBDL #define PETSC_HAVE_LIBDL 1 #endif #ifndef PETSC_ARCH #define PETSC_ARCH "arch-darwin-c-opt" #endif #ifndef PETSC_CLANGUAGE_C #define PETSC_CLANGUAGE_C 1 #endif #ifndef PETSC_HAVE_DYNAMIC_LIBRARIES #define PETSC_HAVE_DYNAMIC_LIBRARIES 1 #endif #ifndef PETSC_HAVE_SHARED_LIBRARIES #define PETSC_HAVE_SHARED_LIBRARIES 1 #endif #ifndef PETSC_USE_SHARED_LIBRARIES #define PETSC_USE_SHARED_LIBRARIES 1 #endif #ifndef PETSC_USE_ERRORCHECKING #define PETSC_USE_ERRORCHECKING 1 #endif #endif **** C specific Configure header /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc-eLXjKy/conffix.h **** #if !defined(INCLUDED_UNKNOWN) #define INCLUDED_UNKNOWN #if defined(__cplusplus) extern "C" { } #else #endif #endif ******************************************************************************* UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for details): ------------------------------------------------------------------------------- C libraries cannot directly be used from Fortran ******************************************************************************* File "./config/configure.py", line 405, in petsc_configure framework.configure(out = sys.stdout) File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfspkqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/framework.py", line 1090, in configure self.processChildren() File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfspkqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/framework.py", line 1079, in processChildren self.serialEvaluation(self.childGraph) File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfspkqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/framework.py", line 1060, in serialEvaluation child.configure() File 
"/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfspkqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/compilers.py", line 1438, in configure self.executeTest(self.checkCLibraries) File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfspkqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/base.py", line 126, in executeTest ret = test(*args,**kargs) File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfspkqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/compilers.py", line 313, in checkCLibraries raise RuntimeError('C libraries cannot directly be used from Fortran') ================================================================================ Finishing Configure Run at Tue Mar 28 21:56:48 2017 ================================================================================ From knepley at gmail.com Tue Mar 28 15:18:05 2017 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 28 Mar 2017 15:18:05 -0500 Subject: [petsc-users] [3.7.5] strange config error on macOS with XCode 8.3 and Clang 8.1.0 In-Reply-To: <668CE3D2-B464-4AE2-82F6-F87F88D2A53B@gmail.com> References: <668CE3D2-B464-4AE2-82F6-F87F88D2A53B@gmail.com> Message-ID: On Tue, Mar 28, 2017 at 3:12 PM, Denis Davydov wrote: > Dear all, > > Yesterday I updated to the latest XCode and now have problems configuring > PETSc (see below). > I must say that a number of other packages which need MPI fortran wrappers > compiled fine. > This looks like bad parsing of the junk spit out by the C compiler: -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Tool chains/XcodeDefault.xctoolchain/usr/lib -L/Applications/Xcode.app/Cont ents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib -lLTO you can just turn that off --with-clib-autodetect=0 --with-fortranlib-autodetect=0 --with-cxxlib- autodetect=0 but then you are responsible for putting any compiler libraries in LIBS that we needed to make Fortran and C work together. Thanks, Matt > Regards, > Denis. 
> > ========================== > > > Executing: /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang- > 8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/bin/mpif90 -c > -o /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc- > eLXjKy/config.setCompilers/conftest.o -I/var/folders/5k/sqpp24tx3yld > s4fgm13pfht00000gn/T/petsc-eLXjKy/config.setCompilers -Wall > -ffree-line-length-0 -Wno-unused-dummy-argument -g -O > /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc- > eLXjKy/config.setCompilers/conftest.F > Successful compile: > Source: > program main > > end > Executing: /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang- > 8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/bin/mpif90 -c > -o /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc- > eLXjKy/config.setCompilers/conftest.o -I/var/folders/5k/sqpp24tx3yld > s4fgm13pfht00000gn/T/petsc-eLXjKy/config.setCompilers -Wall > -ffree-line-length-0 -Wno-unused-dummy-argument -g -O > /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc- > eLXjKy/config.setCompilers/conftest.F > Successful compile: > Source: > program main > > end > Executing: /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang- > 8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/bin/mpif90 -o > /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc- > eLXjKy/config.setCompilers/conftest -Wl,-multiply_defined,suppress > -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs > -Wl,-search_paths_first -Wl,-no_compact_unwind -Wall -ffree-line-length-0 > -Wno-unused-dummy-argument -g -O /var/folders/5k/sqpp24tx3ylds4 > fgm13pfht00000gn/T/petsc-eLXjKy/config.setCompilers/conftest.o > -lto_library -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolch > ains/XcodeDefault.xctoolchain/usr/lib -L/Applications/Xcode.app/Cont > ents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib -lLTO > -Wl,-rpath,/Users/davydden/spack/opt/spack/darwin-sierra-x86 > _64/clang-8.1.0-apple/hwloc-1.11.6-pkbyijayr66g3wq3hojj3l44qc7kjno3/lib > -L/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang > -8.1.0-apple/hwloc-1.11.6-pkbyijayr66g3wq3hojj3l44qc7kjno3/lib > -Wl,-rpath,/Users/davydden/spack/opt/spack/darwin-sierra-x86 > _64/clang-8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/lib > -L/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang > -8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/lib -ldl > -lmpi -lSystem -Wl,-rpath,/Applications/Xcode > .app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/ > usr/bin/../lib/clang/8.1.0/lib/darwin -L/Applications/Xcode.app/Cont > ents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.1.0/lib/darwin > -lclang_rt.osx -ldl > Possible ERROR while running linker: exit code 256 > stderr: > ld: can't map file, errno=22 file '/Applications/Xcode.app/Conte > nts/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib' for > architecture x86_64 > collect2: error: ld returned 1 exit status > Popping language FC > compilers: Error message from compiling {Cannot compile/link > FC with /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang- > 8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/bin/mpif90.} > **** Configure header /var/folders/5k/sqpp24tx3ylds4 > fgm13pfht00000gn/T/petsc-eLXjKy/confdefs.h **** > #if !defined(INCLUDED_UNKNOWN) > #define INCLUDED_UNKNOWN > > #ifndef PETSC_HAVE_DLFCN_H > #define PETSC_HAVE_DLFCN_H 1 > #endif > > #ifndef PETSC_HAVE_RTLD_NOW > #define PETSC_HAVE_RTLD_NOW 1 > #endif > > #ifndef 
PETSC_HAVE_RTLD_LOCAL > #define PETSC_HAVE_RTLD_LOCAL 1 > #endif > > #ifndef PETSC_HAVE_RTLD_LAZY > #define PETSC_HAVE_RTLD_LAZY 1 > #endif > > #ifndef PETSC_C_STATIC_INLINE > #define PETSC_C_STATIC_INLINE static inline > #endif > > #ifndef PETSC_HAVE_RTLD_GLOBAL > #define PETSC_HAVE_RTLD_GLOBAL 1 > #endif > > #ifndef PETSC_C_RESTRICT > #define PETSC_C_RESTRICT restrict > #endif > > #ifndef PETSC_HAVE_LIBDL > #define PETSC_HAVE_LIBDL 1 > #endif > > #ifndef PETSC_ARCH > #define PETSC_ARCH "arch-darwin-c-opt" > #endif > > #ifndef PETSC_CLANGUAGE_C > #define PETSC_CLANGUAGE_C 1 > #endif > > #ifndef PETSC_HAVE_DYNAMIC_LIBRARIES > #define PETSC_HAVE_DYNAMIC_LIBRARIES 1 > #endif > > #ifndef PETSC_HAVE_SHARED_LIBRARIES > #define PETSC_HAVE_SHARED_LIBRARIES 1 > #endif > > #ifndef PETSC_USE_SHARED_LIBRARIES > #define PETSC_USE_SHARED_LIBRARIES 1 > #endif > > #ifndef PETSC_USE_ERRORCHECKING > #define PETSC_USE_ERRORCHECKING 1 > #endif > > #endif > **** C specific Configure header /var/folders/5k/sqpp24tx3ylds4 > fgm13pfht00000gn/T/petsc-eLXjKy/conffix.h **** > #if !defined(INCLUDED_UNKNOWN) > #define INCLUDED_UNKNOWN > > #if defined(__cplusplus) > extern "C" { > } > #else > #endif > #endif > ************************************************************ > ******************* > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > details): > ------------------------------------------------------------ > ------------------- > C libraries cannot directly be used from Fortran > ************************************************************ > ******************* > File "./config/configure.py", line 405, in petsc_configure > framework.configure(out = sys.stdout) > File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfsp > kqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/framework.py", > line 1090, in configure > self.processChildren() > File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfsp > kqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/framework.py", > line 1079, in processChildren > self.serialEvaluation(self.childGraph) > File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfsp > kqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/framework.py", > line 1060, in serialEvaluation > child.configure() > File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfsp > kqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/compilers.py", > line 1438, in configure > self.executeTest(self.checkCLibraries) > File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfsp > kqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/base.py", > line 126, in executeTest > ret = test(*args,**kargs) > File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfsp > kqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/compilers.py", > line 313, in checkCLibraries > raise RuntimeError('C libraries cannot directly be used from Fortran') > ============================================================ > ==================== > Finishing Configure Run at Tue Mar 28 21:56:48 2017 > ============================================================ > ==================== -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Tue Mar 28 15:41:21 2017 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 28 Mar 2017 15:41:21 -0500 Subject: [petsc-users] [3.7.5] strange config error on macOS with XCode 8.3 and Clang 8.1.0 In-Reply-To: References: <668CE3D2-B464-4AE2-82F6-F87F88D2A53B@gmail.com> Message-ID: It would be good to have the complete configure.log to see where this path is coming from. Satish On Tue, 28 Mar 2017, Matthew Knepley wrote: > On Tue, Mar 28, 2017 at 3:12 PM, Denis Davydov wrote: > > > Dear all, > > > > Yesterday I updated to the latest XCode and now have problems configuring > > PETSc (see below). > > I must say that a number of other packages which need MPI fortran wrappers > > compiled fine. > > > > This looks like bad parsing of the junk spit out by the C compiler: > > -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Tool > chains/XcodeDefault.xctoolchain/usr/lib -L/Applications/Xcode.app/Cont > ents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib -lLTO > > you can just turn that off > > --with-clib-autodetect=0 --with-fortranlib-autodetect=0 --with-cxxlib- > autodetect=0 > > but then you are responsible for putting any compiler libraries in LIBS > that we needed to make Fortran and C work together. > > Thanks, > > Matt > > > > Regards, > > Denis. > > > > ========================== > > > > > > Executing: /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang- > > 8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/bin/mpif90 -c > > -o /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc- > > eLXjKy/config.setCompilers/conftest.o -I/var/folders/5k/sqpp24tx3yld > > s4fgm13pfht00000gn/T/petsc-eLXjKy/config.setCompilers -Wall > > -ffree-line-length-0 -Wno-unused-dummy-argument -g -O > > /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc- > > eLXjKy/config.setCompilers/conftest.F > > Successful compile: > > Source: > > program main > > > > end > > Executing: /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang- > > 8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/bin/mpif90 -c > > -o /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc- > > eLXjKy/config.setCompilers/conftest.o -I/var/folders/5k/sqpp24tx3yld > > s4fgm13pfht00000gn/T/petsc-eLXjKy/config.setCompilers -Wall > > -ffree-line-length-0 -Wno-unused-dummy-argument -g -O > > /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc- > > eLXjKy/config.setCompilers/conftest.F > > Successful compile: > > Source: > > program main > > > > end > > Executing: /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang- > > 8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/bin/mpif90 -o > > /var/folders/5k/sqpp24tx3ylds4fgm13pfht00000gn/T/petsc- > > eLXjKy/config.setCompilers/conftest -Wl,-multiply_defined,suppress > > -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs > > -Wl,-search_paths_first -Wl,-no_compact_unwind -Wall -ffree-line-length-0 > > -Wno-unused-dummy-argument -g -O /var/folders/5k/sqpp24tx3ylds4 > > fgm13pfht00000gn/T/petsc-eLXjKy/config.setCompilers/conftest.o > > -lto_library -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolch > > ains/XcodeDefault.xctoolchain/usr/lib -L/Applications/Xcode.app/Cont > > ents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib -lLTO > > -Wl,-rpath,/Users/davydden/spack/opt/spack/darwin-sierra-x86 > > _64/clang-8.1.0-apple/hwloc-1.11.6-pkbyijayr66g3wq3hojj3l44qc7kjno3/lib > > -L/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang > > 
-8.1.0-apple/hwloc-1.11.6-pkbyijayr66g3wq3hojj3l44qc7kjno3/lib > > -Wl,-rpath,/Users/davydden/spack/opt/spack/darwin-sierra-x86 > > _64/clang-8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/lib > > -L/Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang > > -8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/lib -ldl > > -lmpi -lSystem -Wl,-rpath,/Applications/Xcode > > .app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/ > > usr/bin/../lib/clang/8.1.0/lib/darwin -L/Applications/Xcode.app/Cont > > ents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.1.0/lib/darwin > > -lclang_rt.osx -ldl > > Possible ERROR while running linker: exit code 256 > > stderr: > > ld: can't map file, errno=22 file '/Applications/Xcode.app/Conte > > nts/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib' for > > architecture x86_64 > > collect2: error: ld returned 1 exit status > > Popping language FC > > compilers: Error message from compiling {Cannot compile/link > > FC with /Users/davydden/spack/opt/spack/darwin-sierra-x86_64/clang- > > 8.1.0-apple/openmpi-2.1.0-rh7brts6lzesj46zopjj5rzmkcyiktx7/bin/mpif90.} > > **** Configure header /var/folders/5k/sqpp24tx3ylds4 > > fgm13pfht00000gn/T/petsc-eLXjKy/confdefs.h **** > > #if !defined(INCLUDED_UNKNOWN) > > #define INCLUDED_UNKNOWN > > > > #ifndef PETSC_HAVE_DLFCN_H > > #define PETSC_HAVE_DLFCN_H 1 > > #endif > > > > #ifndef PETSC_HAVE_RTLD_NOW > > #define PETSC_HAVE_RTLD_NOW 1 > > #endif > > > > #ifndef PETSC_HAVE_RTLD_LOCAL > > #define PETSC_HAVE_RTLD_LOCAL 1 > > #endif > > > > #ifndef PETSC_HAVE_RTLD_LAZY > > #define PETSC_HAVE_RTLD_LAZY 1 > > #endif > > > > #ifndef PETSC_C_STATIC_INLINE > > #define PETSC_C_STATIC_INLINE static inline > > #endif > > > > #ifndef PETSC_HAVE_RTLD_GLOBAL > > #define PETSC_HAVE_RTLD_GLOBAL 1 > > #endif > > > > #ifndef PETSC_C_RESTRICT > > #define PETSC_C_RESTRICT restrict > > #endif > > > > #ifndef PETSC_HAVE_LIBDL > > #define PETSC_HAVE_LIBDL 1 > > #endif > > > > #ifndef PETSC_ARCH > > #define PETSC_ARCH "arch-darwin-c-opt" > > #endif > > > > #ifndef PETSC_CLANGUAGE_C > > #define PETSC_CLANGUAGE_C 1 > > #endif > > > > #ifndef PETSC_HAVE_DYNAMIC_LIBRARIES > > #define PETSC_HAVE_DYNAMIC_LIBRARIES 1 > > #endif > > > > #ifndef PETSC_HAVE_SHARED_LIBRARIES > > #define PETSC_HAVE_SHARED_LIBRARIES 1 > > #endif > > > > #ifndef PETSC_USE_SHARED_LIBRARIES > > #define PETSC_USE_SHARED_LIBRARIES 1 > > #endif > > > > #ifndef PETSC_USE_ERRORCHECKING > > #define PETSC_USE_ERRORCHECKING 1 > > #endif > > > > #endif > > **** C specific Configure header /var/folders/5k/sqpp24tx3ylds4 > > fgm13pfht00000gn/T/petsc-eLXjKy/conffix.h **** > > #if !defined(INCLUDED_UNKNOWN) > > #define INCLUDED_UNKNOWN > > > > #if defined(__cplusplus) > > extern "C" { > > } > > #else > > #endif > > #endif > > ************************************************************ > > ******************* > > UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for > > details): > > ------------------------------------------------------------ > > ------------------- > > C libraries cannot directly be used from Fortran > > ************************************************************ > > ******************* > > File "./config/configure.py", line 405, in petsc_configure > > framework.configure(out = sys.stdout) > > File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfsp > > kqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/framework.py", > > line 1090, in configure > > self.processChildren() > > File 
"/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfsp > > kqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/framework.py", > > line 1079, in processChildren > > self.serialEvaluation(self.childGraph) > > File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfsp > > kqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/framework.py", > > line 1060, in serialEvaluation > > child.configure() > > File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfsp > > kqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/compilers.py", > > line 1438, in configure > > self.executeTest(self.checkCLibraries) > > File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfsp > > kqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/base.py", > > line 126, in executeTest > > ret = test(*args,**kargs) > > File "/Users/davydden/spack/var/spack/stage/petsc-3.7.5-tpsz2lfsp > > kqa7tq2keqowsk4xrebsfwi/petsc-3.7.5/config/BuildSystem/config/compilers.py", > > line 313, in checkCLibraries > > raise RuntimeError('C libraries cannot directly be used from Fortran') > > ============================================================ > > ==================== > > Finishing Configure Run at Tue Mar 28 21:56:48 2017 > > ============================================================ > > ==================== > > > > > From bsmith at mcs.anl.gov Tue Mar 28 15:46:41 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 28 Mar 2017 15:46:41 -0500 Subject: [petsc-users] [3.7.5] strange config error on macOS with XCode 8.3 and Clang 8.1.0 In-Reply-To: References: <668CE3D2-B464-4AE2-82F6-F87F88D2A53B@gmail.com> Message-ID: <8C936ED9-0928-4C24-B7D3-331573808718@mcs.anl.gov> I'm updating Xcode now and will try to reproduce the issue. > On Mar 28, 2017, at 3:41 PM, Satish Balay wrote: > > It would be good to have the complete configure.log to see where this > path is coming from. > > Satish > > On Tue, 28 Mar 2017, Matthew Knepley wrote: > >> On Tue, Mar 28, 2017 at 3:12 PM, Denis Davydov wrote: >> >>> Dear all, >>> >>> Yesterday I updated to the latest XCode and now have problems configuring >>> PETSc (see below). >>> I must say that a number of other packages which need MPI fortran wrappers >>> compiled fine. >>> >> >> This looks like bad parsing of the junk spit out by the C compiler: >> >> -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Tool >> chains/XcodeDefault.xctoolchain/usr/lib -L/Applications/Xcode.app/Cont >> ents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib -lLTO >> >> you can just turn that off >> >> --with-clib-autodetect=0 --with-fortranlib-autodetect=0 --with-cxxlib- >> autodetect=0 >> >> but then you are responsible for putting any compiler libraries in LIBS >> that we needed to make Fortran and C work together. >> >> Thanks, >> >> Matt >> >> >>> Regards, >>> Denis. 
>>> [rest of the quoted configure log and traceback omitted; it is identical to the copy quoted in full above]
>
> From 
davydden at gmail.com Tue Mar 28 16:31:01 2017 From: davydden at gmail.com (Denis Davydov) Date: Wed, 29 Mar 2017 00:31:01 +0300 Subject: [petsc-users] [3.7.5] strange config error on macOS with XCode 8.3 and Clang 8.1.0 In-Reply-To: References: <668CE3D2-B464-4AE2-82F6-F87F88D2A53B@gmail.com> Message-ID: <5BCD5BA6-D6A5-49CD-B199-1E90AC557935@gmail.com> @Satish please find the log attached: -------------- next part -------------- A non-text attachment was scrubbed... Name: configure.log.zip Type: application/zip Size: 83900 bytes Desc: not available URL: -------------- next part -------------- @Matt >> but then you are responsible for putting any compiler libraries in LIBS >> that we needed to make Fortran and C work together. hopefully we can find another solution to this issue. @Bary Thanks for trying to reproduce the issue, if you will be trying Spack, be aware that you need to do: ncurses: version: [6.0] paths: ncurses at 6.0: /usr buildable: False openblas: version: [develop] python: version: [2.7.10] paths: python at 2.7.10: /usr buildable: False Regards, Denis From aherrema at iastate.edu Tue Mar 28 16:38:23 2017 From: aherrema at iastate.edu (Austin Herrema) Date: Tue, 28 Mar 2017 16:38:23 -0500 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <93F249E6-656D-429A-96D2-E8831B406334@mcs.anl.gov> Message-ID: Gotcha. In that case, it seems I should be good without that line. I've gotten the compile to succeed, but upon attempting to import the module I get the following: >>> import run_analysis_final Traceback (most recent call last): File "", line 1, in ImportError: dlopen(./run_analysis_final.so, 2): Symbol not found: _run_analysis_ Referenced from: ./run_analysis_final.so Expected in: flat namespace in ./run_analysis_final.so Seems I may have gotten the linking wrong somehow. Will keep searching, but the simplified makefile that I used is attached in case anyone thinks they might be able to spot the issue in it. That said, I do realize that this may be starting to reach beyond the scope of this mailing list so feel free to ignore... On Tue, Mar 28, 2017 at 2:31 PM, Gaetan Kenway wrote: > You only get that file if you have wrapped a module explicitly in the .pyf > file. If you haven't wrapped a module, that doesn't get created. > > Gaetan > > On Tue, Mar 28, 2017 at 12:28 PM, Austin Herrema > wrote: > >> Gaetan, >> >> Thank you for this. With your help, I think I am getting close to getting >> this to work for my case. At the moment, I am hung up on the line of your >> makefile which reads "$(FF90) $(FF90_ALL_FLAGS) -I$(MAIN_DIR)/mod -c >> warpustruct-f2pywrappers2.f90". Am I correct that >> warpustruct-f2pywrappers2.f90 should be generated by f2py? If so, do you >> (or does anyone else) know the command for telling f2py to do so? At the >> moment I am using: >> >> f2py run_analysis.f90 -m run_analysis -h run_analysis.pyf >> >> to get the requisite .pyf and .c files, but no .f90 file. If I am wrong >> about the origin of this file, please do tell me! >> >> Thank you, >> Austin >> >> On Mon, Mar 27, 2017 at 5:13 PM, Gaetan Kenway wrote: >> >>> Austin >>> >>> Here is the full makefile for a code we use. 
The variables defined >>> externally in a separate config file are: >>> $(FF90) >>> $(FF90_FLAGS) >>> $(LIBDIR) >>> $(PETSC_LINKER_FLAGS) >>> $(LINKER_FLAGS) >>> $(CGNS_LINKER_FLAGS) >>> >>> $(PYTHON) >>> $(PYTHON-CONIFG) >>> $(F2PY) >>> (These are usually use python, python-config and f2py. You can overwrite >>> as necessary) >>> >>> $(CC) >>> $(CC_ALL_FLAGS) >>> >>> This essentially just mimics what f2py does automatically but we found >>> it easier to control exactly what is going on. Essentially you are just >>> compiling exactly as you normally an executable, but instead make a .so >>> (with the -shared option) and including the additional .o files generated >>> by compiling the f2py-generated wrappers. >>> >>> Hope this helps, >>> Gaetan >>> >>> On Sat, Mar 25, 2017 at 5:38 AM, Lisandro Dalcin >>> wrote: >>> >>>> >>>> >>>> On 22 March 2017 at 20:29, Barry Smith wrote: >>>> >>>>> >>>>> Lisandro, >>>>> >>>>> We've had a couple questions similar to this with f2py; is there a >>>>> way we could add to the PETSc/SLEPc makefile rules something to allow >>>>> people to trivially use f2py without having to make their own (often >>>>> incorrect) manual command lines? >>>>> >>>>> Thanks >>>>> >>>>> >>>> Barry, it is quite hard and hacky to get f2py working in the general >>>> case. I think the email from Gaetan in this thread proves my point. >>>> >>>> IMHO, it is easier to write a small Fortran source exposing the API to >>>> call using ISO_C_BINDINGS, then wrap that code with the more traditional >>>> C-based "static" tools (SWIG, Cython) or even "dynamically" with ctypes or >>>> cffi (which use dlopen'ing). >>>> >>>> >>>> >>>> -- >>>> Lisandro Dalcin >>>> ============ >>>> Research Scientist >>>> Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) >>>> Extreme Computing Research Center (ECRC) >>>> King Abdullah University of Science and Technology (KAUST) >>>> http://ecrc.kaust.edu.sa/ >>>> >>>> 4700 King Abdullah University of Science and Technology >>>> al-Khawarizmi Bldg (Bldg 1), Office # 0109 >>>> Thuwal 23955-6900, Kingdom of Saudi Arabia >>>> http://www.kaust.edu.sa >>>> >>>> Office Phone: +966 12 808-0459 <+966%2012%20808%200459> >>>> >>> >>> >> >> >> -- >> *Austin Herrema* >> PhD Student | Graduate Research Assistant | Iowa State University >> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >> > > -- *Austin Herrema* PhD Student | Graduate Research Assistant | Iowa State University Wind Energy Science, Engineering, and Policy | Mechanical Engineering -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: makefile Type: application/octet-stream Size: 2007 bytes Desc: not available URL: From gaetank at gmail.com Tue Mar 28 16:53:16 2017 From: gaetank at gmail.com (Gaetan Kenway) Date: Tue, 28 Mar 2017 14:53:16 -0700 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <93F249E6-656D-429A-96D2-E8831B406334@mcs.anl.gov> Message-ID: Looks like it isn't finding your source from run_analysis.f90. You still need to compile that yourself and include in the final link. In my example, all the "original" source code was precompiled into a library from a different makefile and then this was run after-the-fact. Gaetan On Tue, Mar 28, 2017 at 2:38 PM, Austin Herrema wrote: > Gotcha. In that case, it seems I should be good without that line. 
I've > gotten the compile to succeed, but upon attempting to import the module I > get the following: > > >>> import run_analysis_final > Traceback (most recent call last): > File "", line 1, in > ImportError: dlopen(./run_analysis_final.so, 2): Symbol not found: > _run_analysis_ > Referenced from: ./run_analysis_final.so > Expected in: flat namespace > in ./run_analysis_final.so > > Seems I may have gotten the linking wrong somehow. Will keep searching, > but the simplified makefile that I used is attached in case anyone thinks > they might be able to spot the issue in it. That said, I do realize that > this may be starting to reach beyond the scope of this mailing list so feel > free to ignore... > > On Tue, Mar 28, 2017 at 2:31 PM, Gaetan Kenway wrote: > >> You only get that file if you have wrapped a module explicitly in the >> .pyf file. If you haven't wrapped a module, that doesn't get created. >> >> Gaetan >> >> On Tue, Mar 28, 2017 at 12:28 PM, Austin Herrema >> wrote: >> >>> Gaetan, >>> >>> Thank you for this. With your help, I think I am getting close to >>> getting this to work for my case. At the moment, I am hung up on the line >>> of your makefile which reads "$(FF90) $(FF90_ALL_FLAGS) -I$(MAIN_DIR)/mod >>> -c warpustruct-f2pywrappers2.f90". Am I correct that >>> warpustruct-f2pywrappers2.f90 should be generated by f2py? If so, do you >>> (or does anyone else) know the command for telling f2py to do so? At the >>> moment I am using: >>> >>> f2py run_analysis.f90 -m run_analysis -h run_analysis.pyf >>> >>> to get the requisite .pyf and .c files, but no .f90 file. If I am wrong >>> about the origin of this file, please do tell me! >>> >>> Thank you, >>> Austin >>> >>> On Mon, Mar 27, 2017 at 5:13 PM, Gaetan Kenway >>> wrote: >>> >>>> Austin >>>> >>>> Here is the full makefile for a code we use. The variables defined >>>> externally in a separate config file are: >>>> $(FF90) >>>> $(FF90_FLAGS) >>>> $(LIBDIR) >>>> $(PETSC_LINKER_FLAGS) >>>> $(LINKER_FLAGS) >>>> $(CGNS_LINKER_FLAGS) >>>> >>>> $(PYTHON) >>>> $(PYTHON-CONIFG) >>>> $(F2PY) >>>> (These are usually use python, python-config and f2py. You can >>>> overwrite as necessary) >>>> >>>> $(CC) >>>> $(CC_ALL_FLAGS) >>>> >>>> This essentially just mimics what f2py does automatically but we found >>>> it easier to control exactly what is going on. Essentially you are just >>>> compiling exactly as you normally an executable, but instead make a .so >>>> (with the -shared option) and including the additional .o files generated >>>> by compiling the f2py-generated wrappers. >>>> >>>> Hope this helps, >>>> Gaetan >>>> >>>> On Sat, Mar 25, 2017 at 5:38 AM, Lisandro Dalcin >>>> wrote: >>>> >>>>> >>>>> >>>>> On 22 March 2017 at 20:29, Barry Smith wrote: >>>>> >>>>>> >>>>>> Lisandro, >>>>>> >>>>>> We've had a couple questions similar to this with f2py; is there >>>>>> a way we could add to the PETSc/SLEPc makefile rules something to allow >>>>>> people to trivially use f2py without having to make their own (often >>>>>> incorrect) manual command lines? >>>>>> >>>>>> Thanks >>>>>> >>>>>> >>>>> Barry, it is quite hard and hacky to get f2py working in the general >>>>> case. I think the email from Gaetan in this thread proves my point. >>>>> >>>>> IMHO, it is easier to write a small Fortran source exposing the API to >>>>> call using ISO_C_BINDINGS, then wrap that code with the more traditional >>>>> C-based "static" tools (SWIG, Cython) or even "dynamically" with ctypes or >>>>> cffi (which use dlopen'ing). 
>>>>> >>>>> >>>>> >>>>> -- >>>>> Lisandro Dalcin >>>>> ============ >>>>> Research Scientist >>>>> Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) >>>>> Extreme Computing Research Center (ECRC) >>>>> King Abdullah University of Science and Technology (KAUST) >>>>> http://ecrc.kaust.edu.sa/ >>>>> >>>>> 4700 King Abdullah University of Science and Technology >>>>> al-Khawarizmi Bldg (Bldg 1), Office # 0109 >>>>> Thuwal 23955-6900, Kingdom of Saudi Arabia >>>>> http://www.kaust.edu.sa >>>>> >>>>> Office Phone: +966 12 808-0459 <+966%2012%20808%200459> >>>>> >>>> >>>> >>> >>> >>> -- >>> *Austin Herrema* >>> PhD Student | Graduate Research Assistant | Iowa State University >>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>> >> >> > > > -- > *Austin Herrema* > PhD Student | Graduate Research Assistant | Iowa State University > Wind Energy Science, Engineering, and Policy | Mechanical Engineering > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Mar 28 18:15:07 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 28 Mar 2017 18:15:07 -0500 Subject: [petsc-users] [3.7.5] strange config error on macOS with XCode 8.3 and Clang 8.1.0 In-Reply-To: <5BCD5BA6-D6A5-49CD-B199-1E90AC557935@gmail.com> References: <668CE3D2-B464-4AE2-82F6-F87F88D2A53B@gmail.com> <5BCD5BA6-D6A5-49CD-B199-1E90AC557935@gmail.com> Message-ID: I can reproduce the problem. Cannot understand it. Executing: gfortran -o /var/folders/c1/ldz_dt8n2r3dtwv_chp5pfr40000gn/T/petsc-YTTzYM/config.setCompilers/conftest -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-no_compact_unwind -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g /var/folders/c1/ldz_dt8n2r3dtwv_chp5pfr40000gn/T/petsc-YTTzYM/config.setCompilers/conftest.o -lto_library -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib -lLTO -ldl -lSystem -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.1.0/lib/darwin -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.1.0/lib/darwin -lclang_rt.osx -ldl Possible ERROR while running linker: exit code 256 stderr: ld: can't map file, errno=22 file '/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib' for architecture x86_64 collect2: error: ld returned 1 exit status If I remove the one -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib then it links ok. If I run with -v to see the actual ld line it is /usr/bin/ld -dynamic -macosx_version_min 10.12.4 -weak_reference_mismatches non-weak -o conftest -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.1.0/lib/darwin -L/usr/local/Cellar/gcc/6.3.0_1/lib/gcc/6/gcc/x86_64-apple-darwin16.3.0/6.3.0 -L/usr/local/Cellar/gcc/6.3.0_1/lib/gcc/6/gcc/x86_64-apple-darwin16.3.0/6.3.0/../../.. 
-multiply_defined suppress -multiply_defined suppress -commons use_dylibs -search_paths_first -no_compact_unwind program.o -lto_library -rpath /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain -lSystem -rpath /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.1.0/lib -lgfortran -no_compact_unwind -lSystem -lgcc_ext.10.5 -lgcc -lquadmath -lm -lgcc_ext.10.5 -lgcc -lSystem -v and does not work with the same error but if I remove the -rpath /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain it works. Now I flip the position of the two paths /usr/bin/ld -dynamic -macosx_version_min 10.12.4 -weak_reference_mismatches non-weak -o conftest -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.1.0/lib/darwin -L/usr/local/Cellar/gcc/6.3.0_1/lib/gcc/6/gcc/x86_64-apple-darwin16.3.0/6.3.0 -L/usr/local/Cellar/gcc/6.3.0_1/lib/gcc/6/gcc/x86_64-apple-darwin16.3.0/6.3.0/../../.. -multiply_defined suppress -multiply_defined suppress -commons use_dylibs -search_paths_first -no_compact_unwind program.o -lto_library -rpath /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.1.0/lib -lSystem -rpath /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain -lgfortran -no_compact_unwind -lSystem -lgcc_ext.10.5 -lgcc -lquadmath -lm -lgcc_ext.10.5 -lgcc -lSystem -v and it now complains about the other directory ! ld: can't map file, errno=22 file '/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/8.1.0/lib' for inferred architecture x86_64 If I list three -rpath it complains about the first (just like with two) It's like ld is wonky > On Mar 28, 2017, at 4:31 PM, Denis Davydov wrote: > > @Satish > > please find the log attached: > > > @Matt > > >> but then you are responsible for putting any compiler libraries in LIBS > >> that we needed to make Fortran and C work together. > > hopefully we can find another solution to this issue. > > > @Bary > > Thanks for trying to reproduce the issue, if you will be trying Spack, be aware that you need to do: > > ncurses: > version: [6.0] > paths: > ncurses at 6.0: /usr > buildable: False > openblas: > version: [develop] > python: > version: [2.7.10] > paths: > python at 2.7.10: /usr > buildable: False > > > Regards, > Denis > > From bsmith at mcs.anl.gov Tue Mar 28 22:23:35 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 28 Mar 2017 22:23:35 -0500 Subject: [petsc-users] [3.7.5] strange config error on macOS with XCode 8.3 and Clang 8.1.0 In-Reply-To: <5BCD5BA6-D6A5-49CD-B199-1E90AC557935@gmail.com> References: <668CE3D2-B464-4AE2-82F6-F87F88D2A53B@gmail.com> <5BCD5BA6-D6A5-49CD-B199-1E90AC557935@gmail.com> Message-ID: <7C9859FD-FAA6-4038-974F-507BE295992C@mcs.anl.gov> I have added the commit https://bitbucket.org/petsc/petsc/commits/4f290403fdd060d09d5cb07345cbfd52670e3cbc to the maint, master and next branch that allows ./configure to to go through in this situation. If the change does not break other tests it will be included in the next patch release. Thanks for reporting the problem, Barry This patch does not directly deal with the problem (which i don't understand but seems to be an Apple Xcode problem) but works around the problem on my machine. 
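For anyone who hits this before a patched release is out, Matt's suggestion earlier in the thread can serve as a stop-gap. A hypothetical configure invocation, sketched under the assumption of a gfortran-based mpif90 whose only extra link requirement is the Fortran runtime, might look like:

   # stop-gap sketch: skip compiler-library autodetection and supply the
   # needed compiler runtime libraries by hand (the -lgfortran value below
   # is an assumption; the actual list is toolchain-dependent)
   ./configure --with-clib-autodetect=0 \
               --with-fortranlib-autodetect=0 \
               --with-cxxlib-autodetect=0 \
               LIBS="-lgfortran"

As Matt notes above, with autodetection disabled you are then responsible for putting in LIBS whatever compiler libraries are needed to make Fortran and C work together.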
> On Mar 28, 2017, at 4:31 PM, Denis Davydov wrote: > > @Satish > > please find the log attached: > > > @Matt > > >> but then you are responsible for putting any compiler libraries in LIBS > >> that we needed to make Fortran and C work together. > > hopefully we can find another solution to this issue. > > > @Bary > > Thanks for trying to reproduce the issue, if you will be trying Spack, be aware that you need to do: > > ncurses: > version: [6.0] > paths: > ncurses at 6.0: /usr > buildable: False > openblas: > version: [develop] > python: > version: [2.7.10] > paths: > python at 2.7.10: /usr > buildable: False > > > Regards, > Denis > > From davydden at gmail.com Wed Mar 29 00:26:34 2017 From: davydden at gmail.com (Denis Davydov) Date: Wed, 29 Mar 2017 08:26:34 +0300 Subject: [petsc-users] [3.7.5] strange config error on macOS with XCode 8.3 and Clang 8.1.0 In-Reply-To: <7C9859FD-FAA6-4038-974F-507BE295992C@mcs.anl.gov> References: <668CE3D2-B464-4AE2-82F6-F87F88D2A53B@gmail.com> <5BCD5BA6-D6A5-49CD-B199-1E90AC557935@gmail.com> <7C9859FD-FAA6-4038-974F-507BE295992C@mcs.anl.gov> Message-ID: <7651BACB-F833-400A-85AF-D990FF4FC024@gmail.com> Thanks Barry, I can confirm that an adaptation of your patch to 3.7.5 allows to compile PETSc. Regads, Denis. > On 29 Mar 2017, at 06:23, Barry Smith wrote: > > > I have added the commit https://bitbucket.org/petsc/petsc/commits/4f290403fdd060d09d5cb07345cbfd52670e3cbc to the maint, master and next branch that allows ./configure to to go through in this situation. If the change does not break other tests it will be included in the next patch release. > > Thanks for reporting the problem, > > Barry > > This patch does not directly deal with the problem (which i don't understand but seems to be an Apple Xcode problem) but works around the problem on my machine. From toon.weyens at gmail.com Wed Mar 29 02:08:12 2017 From: toon.weyens at gmail.com (Toon Weyens) Date: Wed, 29 Mar 2017 07:08:12 +0000 Subject: [petsc-users] Slepc JD and GD converge to wrong eigenpair Message-ID: I started looking for alternatives from the standard Krylov-Schur method to solve the generalized eigenvalue problem Ax = kBx in my code. These matrices have a block-band structure (typically 5, 7 or 9 blocks wide, with block sizes of the order 20) of size typically 1000 blocks. This eigenvalue problem results from the minimization of the energy of a perturbed plasma-vacuum system in order to investigate its stability. So far, I've not taken advantage of the Hermiticity of the problem. For "easier" problems, especially the Generalized Davidson method converges like lightning, sometimes up to 100 times faster than Krylov-Schur. However, for slightly more complicated problems, GD converges to the wrong eigenpair: There is certainly an eigenpair with an eigenvalue lower than 0 (i.e. unstable), but the solver never gets below some small, positive value, to which it wrongly converges. Is it possible to improve this behavior? I tried changing the preconditioner, but it did not work. Might it be possible to use Krylov-Schur until reaching some precision, and then switching to JD to quickly converge? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Wed Mar 29 02:54:33 2017 From: jroman at dsic.upv.es (Jose E. 
Roman) Date: Wed, 29 Mar 2017 09:54:33 +0200 Subject: [petsc-users] Slepc JD and GD converge to wrong eigenpair In-Reply-To: References: Message-ID: <65A0A5E7-399B-4D19-A967-73765A96DB98@dsic.upv.es> > El 29 mar 2017, a las 9:08, Toon Weyens escribi?: > > I started looking for alternatives from the standard Krylov-Schur method to solve the generalized eigenvalue problem Ax = kBx in my code. These matrices have a block-band structure (typically 5, 7 or 9 blocks wide, with block sizes of the order 20) of size typically 1000 blocks. This eigenvalue problem results from the minimization of the energy of a perturbed plasma-vacuum system in order to investigate its stability. So far, I've not taken advantage of the Hermiticity of the problem. > > For "easier" problems, especially the Generalized Davidson method converges like lightning, sometimes up to 100 times faster than Krylov-Schur. > > However, for slightly more complicated problems, GD converges to the wrong eigenpair: There is certainly an eigenpair with an eigenvalue lower than 0 (i.e. unstable), but the solver never gets below some small, positive value, to which it wrongly converges. I would need to know the settings you are using. Are you doing smallest_real? Maybe you can try target_magnitude with harmonic extraction. > > Is it possible to improve this behavior? I tried changing the preconditioner, but it did not work. > > Might it be possible to use Krylov-Schur until reaching some precision, and then switching to JD to quickly converge? Yes, you can do this, using EPSSetInitialSpace() in the second solve. But, depending on the settings, this may not buy you much. Jose > > Thanks! From toon.weyens at gmail.com Wed Mar 29 06:58:16 2017 From: toon.weyens at gmail.com (Toon Weyens) Date: Wed, 29 Mar 2017 11:58:16 +0000 Subject: [petsc-users] Slepc JD and GD converge to wrong eigenpair In-Reply-To: <65A0A5E7-399B-4D19-A967-73765A96DB98@dsic.upv.es> References: <65A0A5E7-399B-4D19-A967-73765A96DB98@dsic.upv.es> Message-ID: Dear Jose, Thanks for the answer. I am looking for the smallest real, indeed. I have, just now, accidentally figured out that I can get correct convergence by increasing NCV to higher values, so that's covered! I thought I had checked this before, but apparently not. It's converging well now, and rather fast (still about 8 times faster than Krylov-Schur). The issue now is that it scales rather badly: If I use 2 or more MPI processes, the time required to solve it goes up drastically. A small test case, on my Ubuntu 16.04 laptop, takes 10 seconds (blazing fast) for 1 MPI process, 25 for 2, 33 for 3, 59 for 4, etc... It is a machine with 8 cores, so i don't really understand why this is. Are there other methods that can actually maintain the time required to solve for multiple MPI process? Or, preferable, decrease it (why else would I use multiple processes if not for memory restrictions)? I will never have to do something bigger than a generalized non-Hermitian ev problem of, let's say, 5000 blocks of 200x200 complex values per block, and a band size of about 11 blocks wide (so a few GB per matrix max). Thanks so much! On Wed, Mar 29, 2017 at 9:54 AM Jose E. Roman wrote: > > > El 29 mar 2017, a las 9:08, Toon Weyens > escribi?: > > > > I started looking for alternatives from the standard Krylov-Schur method > to solve the generalized eigenvalue problem Ax = kBx in my code. These > matrices have a block-band structure (typically 5, 7 or 9 blocks wide, with > block sizes of the order 20) of size typically 1000 blocks. 
This eigenvalue > problem results from the minimization of the energy of a perturbed > plasma-vacuum system in order to investigate its stability. So far, I've > not taken advantage of the Hermiticity of the problem. > > > > For "easier" problems, especially the Generalized Davidson method > converges like lightning, sometimes up to 100 times faster than > Krylov-Schur. > > > > However, for slightly more complicated problems, GD converges to the > wrong eigenpair: There is certainly an eigenpair with an eigenvalue lower > than 0 (i.e. unstable), but the solver never gets below some small, > positive value, to which it wrongly converges. > > I would need to know the settings you are using. Are you doing > smallest_real? Maybe you can try target_magnitude with harmonic extraction. > > > > > Is it possible to improve this behavior? I tried changing the > preconditioner, but it did not work. > > > > Might it be possible to use Krylov-Schur until reaching some precision, > and then switching to JD to quickly converge? > > Yes, you can do this, using EPSSetInitialSpace() in the second solve. But, > depending on the settings, this may not buy you much. > > Jose > > > > > Thanks! > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Mar 29 08:20:09 2017 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 29 Mar 2017 08:20:09 -0500 Subject: [petsc-users] Slepc JD and GD converge to wrong eigenpair In-Reply-To: References: <65A0A5E7-399B-4D19-A967-73765A96DB98@dsic.upv.es> Message-ID: On Wed, Mar 29, 2017 at 6:58 AM, Toon Weyens wrote: > Dear Jose, > > Thanks for the answer. I am looking for the smallest real, indeed. > > I have, just now, accidentally figured out that I can get correct > convergence by increasing NCV to higher values, so that's covered! I > thought I had checked this before, but apparently not. It's converging well > now, and rather fast (still about 8 times faster than Krylov-Schur). > > The issue now is that it scales rather badly: If I use 2 or more MPI > processes, the time required to solve it goes up drastically. A small test > case, on my Ubuntu 16.04 laptop, takes 10 seconds (blazing fast) for 1 MPI > process, 25 for 2, 33 for 3, 59 for 4, etc... It is a machine with 8 cores, > so i don't really understand why this is. > For any scalability question, we need to see the output of -log_view -ksp_view -ksp_monitor_true_residual -ksp_converged_reason and other EPS options which I forget unfortunately. What seems likely here is that you are using a PC which is not scalable, so iteration would be going up. Thanks, Matt > Are there other methods that can actually maintain the time required to > solve for multiple MPI process? Or, preferable, decrease it (why else would > I use multiple processes if not for memory restrictions)? > > I will never have to do something bigger than a generalized non-Hermitian > ev problem of, let's say, 5000 blocks of 200x200 complex values per block, > and a band size of about 11 blocks wide (so a few GB per matrix max). > > Thanks so much! > > On Wed, Mar 29, 2017 at 9:54 AM Jose E. Roman wrote: > >> >> > El 29 mar 2017, a las 9:08, Toon Weyens >> escribi?: >> > >> > I started looking for alternatives from the standard Krylov-Schur >> method to solve the generalized eigenvalue problem Ax = kBx in my code. >> These matrices have a block-band structure (typically 5, 7 or 9 blocks >> wide, with block sizes of the order 20) of size typically 1000 blocks. 
This >> eigenvalue problem results from the minimization of the energy of a >> perturbed plasma-vacuum system in order to investigate its stability. So >> far, I've not taken advantage of the Hermiticity of the problem. >> > >> > For "easier" problems, especially the Generalized Davidson method >> converges like lightning, sometimes up to 100 times faster than >> Krylov-Schur. >> > >> > However, for slightly more complicated problems, GD converges to the >> wrong eigenpair: There is certainly an eigenpair with an eigenvalue lower >> than 0 (i.e. unstable), but the solver never gets below some small, >> positive value, to which it wrongly converges. >> >> I would need to know the settings you are using. Are you doing >> smallest_real? Maybe you can try target_magnitude with harmonic extraction. >> >> > >> > Is it possible to improve this behavior? I tried changing the >> preconditioner, but it did not work. >> > >> > Might it be possible to use Krylov-Schur until reaching some precision, >> and then switching to JD to quickly converge? >> >> Yes, you can do this, using EPSSetInitialSpace() in the second solve. >> But, depending on the settings, this may not buy you much. >> >> Jose >> >> > >> > Thanks! >> >> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From aherrema at iastate.edu Wed Mar 29 11:27:10 2017 From: aherrema at iastate.edu (Austin Herrema) Date: Wed, 29 Mar 2017 11:27:10 -0500 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <93F249E6-656D-429A-96D2-E8831B406334@mcs.anl.gov> Message-ID: Got it--just had to link against other compiled source, as you said. I've attached my makefile for doing everything (including variable definitions, compiling source, and running requisite f2py commands) in case that's helpful for anyone else trying to do something similar. But obviously the meat of it is in what Gaetan provided. I am now able to successfully run simple PETSc-based fortran codes in python. For a larger, more complex code, I am getting some PETSc errors when running in python that I don't normally get. In particular, preallocation is failing--the relevant fortran code block and PETSc error is below. call MatCreate(PETSC_COMM_WORLD, LHS_pc, pc_ier) call MatSetSizes(LHS_pc, PETSC_DECIDE, PETSC_DECIDE, NSD*FUN%NNODE, NSD*FUN%NNODE, pc_ier) call MatSetFromOptions(LHS_pc, pc_ier) call MatSeqAIJSetPreallocation(LHS_pc, 500, PETSC_NULL_INTEGER, pc_ier) [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Argument out of range [0]PETSC ERROR: nnz cannot be greater than row length: local row 2 value 1330400321 rowlength 37065 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017 [0]PETSC ERROR: Unknown Name on a real named austin-ethernet.student.iastate.edu by Austin Wed Mar 29 10:59:33 2017 [0]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 --with-openmp=0 --with-debugging=1 --with-ssl=0 --with-superlu_dist-include=/usr/local/opt/superlu_dist/include --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib -lsuperlu_dist" --with-fftw-dir=/usr/local/opt/fftw --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 --prefix=/usr/local/Cellar/petsc/3.7.5/real --with-scalar-type=real --with-hypre-dir=/usr/local/opt/hypre --with-sundials-dir=/usr/local/opt/sundials --with-hwloc-dir=/usr/local/opt/hwloc [0]PETSC ERROR: #1 MatSeqAIJSetPreallocation_SeqAIJ() line 3598 in /private/tmp/petsc-20170223-508-1xeniyc/petsc-3.7.5/src/mat/impls/aij/seq/aij.c [0]PETSC ERROR: #2 MatSeqAIJSetPreallocation() line 3570 in /private/tmp/petsc-20170223-508-1xeniyc/petsc-3.7.5/src/mat/impls/aij/seq/aij.c Is there anything about the MatSeqAIJSetPreallocation function that would make it not work correctly in Python even though everything else seems to work properly? If anyone has thoughts on this that would be great. But, again, I do realize I'm venturing into potentially unsupported territory. On Tue, Mar 28, 2017 at 4:53 PM, Gaetan Kenway wrote: > Looks like it isn't finding your source from run_analysis.f90. You still > need to compile that yourself and include in the final link. In my example, > all the "original" source code was precompiled into a library from a > different makefile and then this was run after-the-fact. > > Gaetan > > On Tue, Mar 28, 2017 at 2:38 PM, Austin Herrema > wrote: > >> Gotcha. In that case, it seems I should be good without that line. I've >> gotten the compile to succeed, but upon attempting to import the module I >> get the following: >> >> >>> import run_analysis_final >> Traceback (most recent call last): >> File "", line 1, in >> ImportError: dlopen(./run_analysis_final.so, 2): Symbol not found: >> _run_analysis_ >> Referenced from: ./run_analysis_final.so >> Expected in: flat namespace >> in ./run_analysis_final.so >> >> Seems I may have gotten the linking wrong somehow. Will keep searching, >> but the simplified makefile that I used is attached in case anyone thinks >> they might be able to spot the issue in it. That said, I do realize that >> this may be starting to reach beyond the scope of this mailing list so feel >> free to ignore... >> >> On Tue, Mar 28, 2017 at 2:31 PM, Gaetan Kenway wrote: >> >>> You only get that file if you have wrapped a module explicitly in the >>> .pyf file. If you haven't wrapped a module, that doesn't get created. >>> >>> Gaetan >>> >>> On Tue, Mar 28, 2017 at 12:28 PM, Austin Herrema >>> wrote: >>> >>>> Gaetan, >>>> >>>> Thank you for this. With your help, I think I am getting close to >>>> getting this to work for my case. At the moment, I am hung up on the line >>>> of your makefile which reads "$(FF90) $(FF90_ALL_FLAGS) -I$(MAIN_DIR)/mod >>>> -c warpustruct-f2pywrappers2.f90". Am I correct that >>>> warpustruct-f2pywrappers2.f90 should be generated by f2py? 
If so, do you >>>> (or does anyone else) know the command for telling f2py to do so? At the >>>> moment I am using: >>>> >>>> f2py run_analysis.f90 -m run_analysis -h run_analysis.pyf >>>> >>>> to get the requisite .pyf and .c files, but no .f90 file. If I am wrong >>>> about the origin of this file, please do tell me! >>>> >>>> Thank you, >>>> Austin >>>> >>>> On Mon, Mar 27, 2017 at 5:13 PM, Gaetan Kenway >>>> wrote: >>>> >>>>> Austin >>>>> >>>>> Here is the full makefile for a code we use. The variables defined >>>>> externally in a separate config file are: >>>>> $(FF90) >>>>> $(FF90_FLAGS) >>>>> $(LIBDIR) >>>>> $(PETSC_LINKER_FLAGS) >>>>> $(LINKER_FLAGS) >>>>> $(CGNS_LINKER_FLAGS) >>>>> >>>>> $(PYTHON) >>>>> $(PYTHON-CONIFG) >>>>> $(F2PY) >>>>> (These are usually use python, python-config and f2py. You can >>>>> overwrite as necessary) >>>>> >>>>> $(CC) >>>>> $(CC_ALL_FLAGS) >>>>> >>>>> This essentially just mimics what f2py does automatically but we found >>>>> it easier to control exactly what is going on. Essentially you are just >>>>> compiling exactly as you normally an executable, but instead make a .so >>>>> (with the -shared option) and including the additional .o files generated >>>>> by compiling the f2py-generated wrappers. >>>>> >>>>> Hope this helps, >>>>> Gaetan >>>>> >>>>> On Sat, Mar 25, 2017 at 5:38 AM, Lisandro Dalcin >>>>> wrote: >>>>> >>>>>> >>>>>> >>>>>> On 22 March 2017 at 20:29, Barry Smith wrote: >>>>>> >>>>>>> >>>>>>> Lisandro, >>>>>>> >>>>>>> We've had a couple questions similar to this with f2py; is there >>>>>>> a way we could add to the PETSc/SLEPc makefile rules something to allow >>>>>>> people to trivially use f2py without having to make their own (often >>>>>>> incorrect) manual command lines? >>>>>>> >>>>>>> Thanks >>>>>>> >>>>>>> >>>>>> Barry, it is quite hard and hacky to get f2py working in the general >>>>>> case. I think the email from Gaetan in this thread proves my point. >>>>>> >>>>>> IMHO, it is easier to write a small Fortran source exposing the API >>>>>> to call using ISO_C_BINDINGS, then wrap that code with the more traditional >>>>>> C-based "static" tools (SWIG, Cython) or even "dynamically" with ctypes or >>>>>> cffi (which use dlopen'ing). >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Lisandro Dalcin >>>>>> ============ >>>>>> Research Scientist >>>>>> Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) >>>>>> Extreme Computing Research Center (ECRC) >>>>>> King Abdullah University of Science and Technology (KAUST) >>>>>> http://ecrc.kaust.edu.sa/ >>>>>> >>>>>> 4700 King Abdullah University of Science and Technology >>>>>> al-Khawarizmi Bldg (Bldg 1), Office # 0109 >>>>>> Thuwal 23955-6900, Kingdom of Saudi Arabia >>>>>> http://www.kaust.edu.sa >>>>>> >>>>>> Office Phone: +966 12 808-0459 <+966%2012%20808%200459> >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> *Austin Herrema* >>>> PhD Student | Graduate Research Assistant | Iowa State University >>>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>>> >>> >>> >> >> >> -- >> *Austin Herrema* >> PhD Student | Graduate Research Assistant | Iowa State University >> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >> > > -- *Austin Herrema* PhD Student | Graduate Research Assistant | Iowa State University Wind Energy Science, Engineering, and Policy | Mechanical Engineering -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: makefile Type: application/octet-stream Size: 2328 bytes Desc: not available URL: From aherrema at iastate.edu Wed Mar 29 17:12:19 2017 From: aherrema at iastate.edu (Austin Herrema) Date: Wed, 29 Mar 2017 17:12:19 -0500 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <93F249E6-656D-429A-96D2-E8831B406334@mcs.anl.gov> Message-ID: Quick update on this issue in case it brings any other thoughts/ideas to light. For a very simple, small problem, I am successfully able to use MatSeqAIJSetPreallocation in a fortran-based code compiled for python via f2py. I am still unsure why, in a larger code, this particular function call fails when the code is executed in python (on a setup that runs fine under pure Fortran). Does the error " nnz cannot be greater than row length: local row 2 value 1330400321 rowlength 37065" imply that the program thinks I am trying to allocate 1330400321 nonzeros in a row of max length 37065? That is obviously not my intent nor what I think I have coded. I am trying to skip preallocation and use merely MatSetUp but, as we would expect, the dynamic allocation is ridiculously slow... On Wed, Mar 29, 2017 at 11:27 AM, Austin Herrema wrote: > Got it--just had to link against other compiled source, as you said. I've > attached my makefile for doing everything (including variable definitions, > compiling source, and running requisite f2py commands) in case that's > helpful for anyone else trying to do something similar. But obviously the > meat of it is in what Gaetan provided. > > I am now able to successfully run simple PETSc-based fortran codes in > python. For a larger, more complex code, I am getting some PETSc errors > when running in python that I don't normally get. In particular, > preallocation is failing--the relevant fortran code block and PETSc error > is below. > > > call MatCreate(PETSC_COMM_WORLD, LHS_pc, pc_ier) > call MatSetSizes(LHS_pc, PETSC_DECIDE, PETSC_DECIDE, NSD*FUN%NNODE, > NSD*FUN%NNODE, pc_ier) > call MatSetFromOptions(LHS_pc, pc_ier) > call MatSeqAIJSetPreallocation(LHS_pc, 500, PETSC_NULL_INTEGER, > pc_ier) > > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Argument out of range > [0]PETSC ERROR: nnz cannot be greater than row length: local row 2 value > 1330400321 rowlength 37065 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017 > [0]PETSC ERROR: Unknown Name on a real named austin-ethernet.student. 
> iastate.edu by Austin Wed Mar 29 10:59:33 2017 > [0]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc > CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 > FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 > --with-openmp=0 --with-debugging=1 --with-ssl=0 > --with-superlu_dist-include=/usr/local/opt/superlu_dist/include > --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib > -lsuperlu_dist" --with-fftw-dir=/usr/local/opt/fftw > --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse > --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis > --with-parmetis-dir=/usr/local/opt/parmetis --with-scalapack-dir=/usr/local/opt/scalapack > --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 > --prefix=/usr/local/Cellar/petsc/3.7.5/real --with-scalar-type=real > --with-hypre-dir=/usr/local/opt/hypre --with-sundials-dir=/usr/local/opt/sundials > --with-hwloc-dir=/usr/local/opt/hwloc > [0]PETSC ERROR: #1 MatSeqAIJSetPreallocation_SeqAIJ() line 3598 in > /private/tmp/petsc-20170223-508-1xeniyc/petsc-3.7.5/src/ > mat/impls/aij/seq/aij.c > [0]PETSC ERROR: #2 MatSeqAIJSetPreallocation() line 3570 in > /private/tmp/petsc-20170223-508-1xeniyc/petsc-3.7.5/src/ > mat/impls/aij/seq/aij.c > > > Is there anything about the MatSeqAIJSetPreallocation function that would > make it not work correctly in Python even though everything else seems to > work properly? If anyone has thoughts on this that would be great. But, > again, I do realize I'm venturing into potentially unsupported territory. > > > On Tue, Mar 28, 2017 at 4:53 PM, Gaetan Kenway wrote: > >> Looks like it isn't finding your source from run_analysis.f90. You still >> need to compile that yourself and include in the final link. In my example, >> all the "original" source code was precompiled into a library from a >> different makefile and then this was run after-the-fact. >> >> Gaetan >> >> On Tue, Mar 28, 2017 at 2:38 PM, Austin Herrema >> wrote: >> >>> Gotcha. In that case, it seems I should be good without that line. I've >>> gotten the compile to succeed, but upon attempting to import the module I >>> get the following: >>> >>> >>> import run_analysis_final >>> Traceback (most recent call last): >>> File "", line 1, in >>> ImportError: dlopen(./run_analysis_final.so, 2): Symbol not found: >>> _run_analysis_ >>> Referenced from: ./run_analysis_final.so >>> Expected in: flat namespace >>> in ./run_analysis_final.so >>> >>> Seems I may have gotten the linking wrong somehow. Will keep searching, >>> but the simplified makefile that I used is attached in case anyone thinks >>> they might be able to spot the issue in it. That said, I do realize that >>> this may be starting to reach beyond the scope of this mailing list so feel >>> free to ignore... >>> >>> On Tue, Mar 28, 2017 at 2:31 PM, Gaetan Kenway >>> wrote: >>> >>>> You only get that file if you have wrapped a module explicitly in the >>>> .pyf file. If you haven't wrapped a module, that doesn't get created. >>>> >>>> Gaetan >>>> >>>> On Tue, Mar 28, 2017 at 12:28 PM, Austin Herrema >>>> wrote: >>>> >>>>> Gaetan, >>>>> >>>>> Thank you for this. With your help, I think I am getting close to >>>>> getting this to work for my case. At the moment, I am hung up on the line >>>>> of your makefile which reads "$(FF90) $(FF90_ALL_FLAGS) -I$(MAIN_DIR)/mod >>>>> -c warpustruct-f2pywrappers2.f90". Am I correct that >>>>> warpustruct-f2pywrappers2.f90 should be generated by f2py? 
If so, do you >>>>> (or does anyone else) know the command for telling f2py to do so? At the >>>>> moment I am using: >>>>> >>>>> f2py run_analysis.f90 -m run_analysis -h run_analysis.pyf >>>>> >>>>> to get the requisite .pyf and .c files, but no .f90 file. If I am >>>>> wrong about the origin of this file, please do tell me! >>>>> >>>>> Thank you, >>>>> Austin >>>>> >>>>> On Mon, Mar 27, 2017 at 5:13 PM, Gaetan Kenway >>>>> wrote: >>>>> >>>>>> Austin >>>>>> >>>>>> Here is the full makefile for a code we use. The variables defined >>>>>> externally in a separate config file are: >>>>>> $(FF90) >>>>>> $(FF90_FLAGS) >>>>>> $(LIBDIR) >>>>>> $(PETSC_LINKER_FLAGS) >>>>>> $(LINKER_FLAGS) >>>>>> $(CGNS_LINKER_FLAGS) >>>>>> >>>>>> $(PYTHON) >>>>>> $(PYTHON-CONIFG) >>>>>> $(F2PY) >>>>>> (These are usually use python, python-config and f2py. You can >>>>>> overwrite as necessary) >>>>>> >>>>>> $(CC) >>>>>> $(CC_ALL_FLAGS) >>>>>> >>>>>> This essentially just mimics what f2py does automatically but we >>>>>> found it easier to control exactly what is going on. Essentially you are >>>>>> just compiling exactly as you normally an executable, but instead make a >>>>>> .so (with the -shared option) and including the additional .o files >>>>>> generated by compiling the f2py-generated wrappers. >>>>>> >>>>>> Hope this helps, >>>>>> Gaetan >>>>>> >>>>>> On Sat, Mar 25, 2017 at 5:38 AM, Lisandro Dalcin >>>>>> wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On 22 March 2017 at 20:29, Barry Smith wrote: >>>>>>> >>>>>>>> >>>>>>>> Lisandro, >>>>>>>> >>>>>>>> We've had a couple questions similar to this with f2py; is >>>>>>>> there a way we could add to the PETSc/SLEPc makefile rules something to >>>>>>>> allow people to trivially use f2py without having to make their own (often >>>>>>>> incorrect) manual command lines? >>>>>>>> >>>>>>>> Thanks >>>>>>>> >>>>>>>> >>>>>>> Barry, it is quite hard and hacky to get f2py working in the general >>>>>>> case. I think the email from Gaetan in this thread proves my point. >>>>>>> >>>>>>> IMHO, it is easier to write a small Fortran source exposing the API >>>>>>> to call using ISO_C_BINDINGS, then wrap that code with the more traditional >>>>>>> C-based "static" tools (SWIG, Cython) or even "dynamically" with ctypes or >>>>>>> cffi (which use dlopen'ing). 
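For illustration, a minimal sketch of the ISO_C_BINDING route described just above might look like the following (the module and routine names here are hypothetical, and the call into the existing Fortran/PETSc driver is only indicated by a comment):

module run_analysis_capi
  use iso_c_binding, only: c_int, c_double
  implicit none
contains
  ! Expose one plain C-callable entry point; bind(C) gives it a
  ! predictable, unmangled symbol name in the shared library.
  subroutine run_analysis_c(n, x, ierr) bind(C, name="run_analysis_c")
    integer(c_int), value, intent(in) :: n
    real(c_double), intent(inout)     :: x(n)
    integer(c_int), intent(out)       :: ierr
    ! ... call the existing Fortran/PETSc driver routine here ...
    ierr = 0
  end subroutine run_analysis_c
end module run_analysis_capi

Compiled with -shared into a .so (and linked against PETSc as usual), such a routine can then be loaded from Python with ctypes or cffi instead of going through f2py.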
>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Lisandro Dalcin >>>>>>> ============ >>>>>>> Research Scientist >>>>>>> Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) >>>>>>> Extreme Computing Research Center (ECRC) >>>>>>> King Abdullah University of Science and Technology (KAUST) >>>>>>> http://ecrc.kaust.edu.sa/ >>>>>>> >>>>>>> 4700 King Abdullah University of Science and Technology >>>>>>> al-Khawarizmi Bldg (Bldg 1), Office # 0109 >>>>>>> Thuwal 23955-6900, Kingdom of Saudi Arabia >>>>>>> http://www.kaust.edu.sa >>>>>>> >>>>>>> Office Phone: +966 12 808-0459 <+966%2012%20808%200459> >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> *Austin Herrema* >>>>> PhD Student | Graduate Research Assistant | Iowa State University >>>>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>>>> >>>> >>>> >>> >>> >>> -- >>> *Austin Herrema* >>> PhD Student | Graduate Research Assistant | Iowa State University >>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>> >> >> > > > -- > *Austin Herrema* > PhD Student | Graduate Research Assistant | Iowa State University > Wind Energy Science, Engineering, and Policy | Mechanical Engineering > -- *Austin Herrema* PhD Student | Graduate Research Assistant | Iowa State University Wind Energy Science, Engineering, and Policy | Mechanical Engineering -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaetank at gmail.com Wed Mar 29 17:18:23 2017 From: gaetank at gmail.com (Gaetan Kenway) Date: Wed, 29 Mar 2017 15:18:23 -0700 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <93F249E6-656D-429A-96D2-E8831B406334@mcs.anl.gov> Message-ID: Any chance it is a 8 byte/4byte integer issue? Gaetan On Wed, Mar 29, 2017 at 3:12 PM, Austin Herrema wrote: > Quick update on this issue in case it brings any other thoughts/ideas to > light. For a very simple, small problem, I am successfully able to use > MatSeqAIJSetPreallocation in a fortran-based code compiled for python via > f2py. I am still unsure why, in a larger code, this particular function > call fails when the code is executed in python (on a setup that runs fine > under pure Fortran). Does the error " nnz cannot be greater than row > length: local row 2 value 1330400321 rowlength 37065" imply that the > program thinks I am trying to allocate 1330400321 nonzeros in a row of > max length 37065? That is obviously not my intent nor what I think I have > coded. I am trying to skip preallocation and use merely MatSetUp but, as we > would expect, the dynamic allocation is ridiculously slow... > > On Wed, Mar 29, 2017 at 11:27 AM, Austin Herrema > wrote: > >> Got it--just had to link against other compiled source, as you said. I've >> attached my makefile for doing everything (including variable definitions, >> compiling source, and running requisite f2py commands) in case that's >> helpful for anyone else trying to do something similar. But obviously the >> meat of it is in what Gaetan provided. >> >> I am now able to successfully run simple PETSc-based fortran codes in >> python. For a larger, more complex code, I am getting some PETSc errors >> when running in python that I don't normally get. In particular, >> preallocation is failing--the relevant fortran code block and PETSc error >> is below. 
>> >> >> call MatCreate(PETSC_COMM_WORLD, LHS_pc, pc_ier) >> call MatSetSizes(LHS_pc, PETSC_DECIDE, PETSC_DECIDE, NSD*FUN%NNODE, >> NSD*FUN%NNODE, pc_ier) >> call MatSetFromOptions(LHS_pc, pc_ier) >> call MatSeqAIJSetPreallocation(LHS_pc, 500, PETSC_NULL_INTEGER, >> pc_ier) >> >> >> [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Argument out of range >> [0]PETSC ERROR: nnz cannot be greater than row length: local row 2 value >> 1330400321 rowlength 37065 >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017 >> [0]PETSC ERROR: Unknown Name on a real named >> austin-ethernet.student.iastate.edu by Austin Wed Mar 29 10:59:33 2017 >> [0]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc >> CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 >> FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 >> --with-openmp=0 --with-debugging=1 --with-ssl=0 >> --with-superlu_dist-include=/usr/local/opt/superlu_dist/include >> --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib >> -lsuperlu_dist" --with-fftw-dir=/usr/local/opt/fftw >> --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse >> --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis >> --with-parmetis-dir=/usr/local/opt/parmetis >> --with-scalapack-dir=/usr/local/opt/scalapack >> --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 >> --prefix=/usr/local/Cellar/petsc/3.7.5/real --with-scalar-type=real >> --with-hypre-dir=/usr/local/opt/hypre --with-sundials-dir=/usr/local/opt/sundials >> --with-hwloc-dir=/usr/local/opt/hwloc >> [0]PETSC ERROR: #1 MatSeqAIJSetPreallocation_SeqAIJ() line 3598 in >> /private/tmp/petsc-20170223-508-1xeniyc/petsc-3.7.5/src/mat/ >> impls/aij/seq/aij.c >> [0]PETSC ERROR: #2 MatSeqAIJSetPreallocation() line 3570 in >> /private/tmp/petsc-20170223-508-1xeniyc/petsc-3.7.5/src/mat/ >> impls/aij/seq/aij.c >> >> >> Is there anything about the MatSeqAIJSetPreallocation function that would >> make it not work correctly in Python even though everything else seems to >> work properly? If anyone has thoughts on this that would be great. But, >> again, I do realize I'm venturing into potentially unsupported territory. >> >> >> On Tue, Mar 28, 2017 at 4:53 PM, Gaetan Kenway wrote: >> >>> Looks like it isn't finding your source from run_analysis.f90. You still >>> need to compile that yourself and include in the final link. In my example, >>> all the "original" source code was precompiled into a library from a >>> different makefile and then this was run after-the-fact. >>> >>> Gaetan >>> >>> On Tue, Mar 28, 2017 at 2:38 PM, Austin Herrema >>> wrote: >>> >>>> Gotcha. In that case, it seems I should be good without that line. I've >>>> gotten the compile to succeed, but upon attempting to import the module I >>>> get the following: >>>> >>>> >>> import run_analysis_final >>>> Traceback (most recent call last): >>>> File "", line 1, in >>>> ImportError: dlopen(./run_analysis_final.so, 2): Symbol not found: >>>> _run_analysis_ >>>> Referenced from: ./run_analysis_final.so >>>> Expected in: flat namespace >>>> in ./run_analysis_final.so >>>> >>>> Seems I may have gotten the linking wrong somehow. Will keep searching, >>>> but the simplified makefile that I used is attached in case anyone thinks >>>> they might be able to spot the issue in it. 
That said, I do realize that >>>> this may be starting to reach beyond the scope of this mailing list so feel >>>> free to ignore... >>>> >>>> On Tue, Mar 28, 2017 at 2:31 PM, Gaetan Kenway >>>> wrote: >>>> >>>>> You only get that file if you have wrapped a module explicitly in the >>>>> .pyf file. If you haven't wrapped a module, that doesn't get created. >>>>> >>>>> Gaetan >>>>> >>>>> On Tue, Mar 28, 2017 at 12:28 PM, Austin Herrema >>>> > wrote: >>>>> >>>>>> Gaetan, >>>>>> >>>>>> Thank you for this. With your help, I think I am getting close to >>>>>> getting this to work for my case. At the moment, I am hung up on the line >>>>>> of your makefile which reads "$(FF90) $(FF90_ALL_FLAGS) -I$(MAIN_DIR)/mod >>>>>> -c warpustruct-f2pywrappers2.f90". Am I correct that >>>>>> warpustruct-f2pywrappers2.f90 should be generated by f2py? If so, do you >>>>>> (or does anyone else) know the command for telling f2py to do so? At the >>>>>> moment I am using: >>>>>> >>>>>> f2py run_analysis.f90 -m run_analysis -h run_analysis.pyf >>>>>> >>>>>> to get the requisite .pyf and .c files, but no .f90 file. If I am >>>>>> wrong about the origin of this file, please do tell me! >>>>>> >>>>>> Thank you, >>>>>> Austin >>>>>> >>>>>> On Mon, Mar 27, 2017 at 5:13 PM, Gaetan Kenway >>>>>> wrote: >>>>>> >>>>>>> Austin >>>>>>> >>>>>>> Here is the full makefile for a code we use. The variables defined >>>>>>> externally in a separate config file are: >>>>>>> $(FF90) >>>>>>> $(FF90_FLAGS) >>>>>>> $(LIBDIR) >>>>>>> $(PETSC_LINKER_FLAGS) >>>>>>> $(LINKER_FLAGS) >>>>>>> $(CGNS_LINKER_FLAGS) >>>>>>> >>>>>>> $(PYTHON) >>>>>>> $(PYTHON-CONIFG) >>>>>>> $(F2PY) >>>>>>> (These are usually use python, python-config and f2py. You can >>>>>>> overwrite as necessary) >>>>>>> >>>>>>> $(CC) >>>>>>> $(CC_ALL_FLAGS) >>>>>>> >>>>>>> This essentially just mimics what f2py does automatically but we >>>>>>> found it easier to control exactly what is going on. Essentially you are >>>>>>> just compiling exactly as you normally an executable, but instead make a >>>>>>> .so (with the -shared option) and including the additional .o files >>>>>>> generated by compiling the f2py-generated wrappers. >>>>>>> >>>>>>> Hope this helps, >>>>>>> Gaetan >>>>>>> >>>>>>> On Sat, Mar 25, 2017 at 5:38 AM, Lisandro Dalcin >>>>>>> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On 22 March 2017 at 20:29, Barry Smith wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> Lisandro, >>>>>>>>> >>>>>>>>> We've had a couple questions similar to this with f2py; is >>>>>>>>> there a way we could add to the PETSc/SLEPc makefile rules something to >>>>>>>>> allow people to trivially use f2py without having to make their own (often >>>>>>>>> incorrect) manual command lines? >>>>>>>>> >>>>>>>>> Thanks >>>>>>>>> >>>>>>>>> >>>>>>>> Barry, it is quite hard and hacky to get f2py working in the >>>>>>>> general case. I think the email from Gaetan in this thread proves my point. >>>>>>>> >>>>>>>> IMHO, it is easier to write a small Fortran source exposing the API >>>>>>>> to call using ISO_C_BINDINGS, then wrap that code with the more traditional >>>>>>>> C-based "static" tools (SWIG, Cython) or even "dynamically" with ctypes or >>>>>>>> cffi (which use dlopen'ing). 
>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Lisandro Dalcin >>>>>>>> ============ >>>>>>>> Research Scientist >>>>>>>> Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) >>>>>>>> Extreme Computing Research Center (ECRC) >>>>>>>> King Abdullah University of Science and Technology (KAUST) >>>>>>>> http://ecrc.kaust.edu.sa/ >>>>>>>> >>>>>>>> 4700 King Abdullah University of Science and Technology >>>>>>>> al-Khawarizmi Bldg (Bldg 1), Office # 0109 >>>>>>>> Thuwal 23955-6900, Kingdom of Saudi Arabia >>>>>>>> http://www.kaust.edu.sa >>>>>>>> >>>>>>>> Office Phone: +966 12 808-0459 <+966%2012%20808%200459> >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> *Austin Herrema* >>>>>> PhD Student | Graduate Research Assistant | Iowa State University >>>>>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> *Austin Herrema* >>>> PhD Student | Graduate Research Assistant | Iowa State University >>>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>>> >>> >>> >> >> >> -- >> *Austin Herrema* >> PhD Student | Graduate Research Assistant | Iowa State University >> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >> > > > > -- > *Austin Herrema* > PhD Student | Graduate Research Assistant | Iowa State University > Wind Energy Science, Engineering, and Policy | Mechanical Engineering > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Mar 29 18:48:10 2017 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 29 Mar 2017 18:48:10 -0500 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <93F249E6-656D-429A-96D2-E8831B406334@mcs.anl.gov> Message-ID: On Wed, Mar 29, 2017 at 5:12 PM, Austin Herrema wrote: > Quick update on this issue in case it brings any other thoughts/ideas to > light. For a very simple, small problem, I am successfully able to use > MatSeqAIJSetPreallocation in a fortran-based code compiled for python via > f2py. I am still unsure why, in a larger code, this particular function > call fails when the code is executed in python (on a setup that runs fine > under pure Fortran). Does the error " nnz cannot be greater than row > length: local row 2 value 1330400321 rowlength 37065" imply that the > program thinks I am trying to allocate 1330400321 nonzeros in a row of > max length 37065? > Yes. Thanks, Matt > That is obviously not my intent nor what I think I have coded. I am trying > to skip preallocation and use merely MatSetUp but, as we would expect, the > dynamic allocation is ridiculously slow... > > On Wed, Mar 29, 2017 at 11:27 AM, Austin Herrema > wrote: > >> Got it--just had to link against other compiled source, as you said. I've >> attached my makefile for doing everything (including variable definitions, >> compiling source, and running requisite f2py commands) in case that's >> helpful for anyone else trying to do something similar. But obviously the >> meat of it is in what Gaetan provided. >> >> I am now able to successfully run simple PETSc-based fortran codes in >> python. For a larger, more complex code, I am getting some PETSc errors >> when running in python that I don't normally get. In particular, >> preallocation is failing--the relevant fortran code block and PETSc error >> is below. 
>> >> >> call MatCreate(PETSC_COMM_WORLD, LHS_pc, pc_ier) >> call MatSetSizes(LHS_pc, PETSC_DECIDE, PETSC_DECIDE, NSD*FUN%NNODE, >> NSD*FUN%NNODE, pc_ier) >> call MatSetFromOptions(LHS_pc, pc_ier) >> call MatSeqAIJSetPreallocation(LHS_pc, 500, PETSC_NULL_INTEGER, >> pc_ier) >> >> >> [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Argument out of range >> [0]PETSC ERROR: nnz cannot be greater than row length: local row 2 value >> 1330400321 rowlength 37065 >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. >> [0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017 >> [0]PETSC ERROR: Unknown Name on a real named >> austin-ethernet.student.iastate.edu by Austin Wed Mar 29 10:59:33 2017 >> [0]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc >> CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 >> FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 >> --with-openmp=0 --with-debugging=1 --with-ssl=0 >> --with-superlu_dist-include=/usr/local/opt/superlu_dist/include >> --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib >> -lsuperlu_dist" --with-fftw-dir=/usr/local/opt/fftw >> --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse >> --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis >> --with-parmetis-dir=/usr/local/opt/parmetis >> --with-scalapack-dir=/usr/local/opt/scalapack >> --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 >> --prefix=/usr/local/Cellar/petsc/3.7.5/real --with-scalar-type=real >> --with-hypre-dir=/usr/local/opt/hypre --with-sundials-dir=/usr/local/opt/sundials >> --with-hwloc-dir=/usr/local/opt/hwloc >> [0]PETSC ERROR: #1 MatSeqAIJSetPreallocation_SeqAIJ() line 3598 in >> /private/tmp/petsc-20170223-508-1xeniyc/petsc-3.7.5/src/mat/ >> impls/aij/seq/aij.c >> [0]PETSC ERROR: #2 MatSeqAIJSetPreallocation() line 3570 in >> /private/tmp/petsc-20170223-508-1xeniyc/petsc-3.7.5/src/mat/ >> impls/aij/seq/aij.c >> >> >> Is there anything about the MatSeqAIJSetPreallocation function that would >> make it not work correctly in Python even though everything else seems to >> work properly? If anyone has thoughts on this that would be great. But, >> again, I do realize I'm venturing into potentially unsupported territory. >> >> >> On Tue, Mar 28, 2017 at 4:53 PM, Gaetan Kenway wrote: >> >>> Looks like it isn't finding your source from run_analysis.f90. You still >>> need to compile that yourself and include in the final link. In my example, >>> all the "original" source code was precompiled into a library from a >>> different makefile and then this was run after-the-fact. >>> >>> Gaetan >>> >>> On Tue, Mar 28, 2017 at 2:38 PM, Austin Herrema >>> wrote: >>> >>>> Gotcha. In that case, it seems I should be good without that line. I've >>>> gotten the compile to succeed, but upon attempting to import the module I >>>> get the following: >>>> >>>> >>> import run_analysis_final >>>> Traceback (most recent call last): >>>> File "", line 1, in >>>> ImportError: dlopen(./run_analysis_final.so, 2): Symbol not found: >>>> _run_analysis_ >>>> Referenced from: ./run_analysis_final.so >>>> Expected in: flat namespace >>>> in ./run_analysis_final.so >>>> >>>> Seems I may have gotten the linking wrong somehow. Will keep searching, >>>> but the simplified makefile that I used is attached in case anyone thinks >>>> they might be able to spot the issue in it. 
That said, I do realize that >>>> this may be starting to reach beyond the scope of this mailing list so feel >>>> free to ignore... >>>> >>>> On Tue, Mar 28, 2017 at 2:31 PM, Gaetan Kenway >>>> wrote: >>>> >>>>> You only get that file if you have wrapped a module explicitly in the >>>>> .pyf file. If you haven't wrapped a module, that doesn't get created. >>>>> >>>>> Gaetan >>>>> >>>>> On Tue, Mar 28, 2017 at 12:28 PM, Austin Herrema >>>> > wrote: >>>>> >>>>>> Gaetan, >>>>>> >>>>>> Thank you for this. With your help, I think I am getting close to >>>>>> getting this to work for my case. At the moment, I am hung up on the line >>>>>> of your makefile which reads "$(FF90) $(FF90_ALL_FLAGS) -I$(MAIN_DIR)/mod >>>>>> -c warpustruct-f2pywrappers2.f90". Am I correct that >>>>>> warpustruct-f2pywrappers2.f90 should be generated by f2py? If so, do you >>>>>> (or does anyone else) know the command for telling f2py to do so? At the >>>>>> moment I am using: >>>>>> >>>>>> f2py run_analysis.f90 -m run_analysis -h run_analysis.pyf >>>>>> >>>>>> to get the requisite .pyf and .c files, but no .f90 file. If I am >>>>>> wrong about the origin of this file, please do tell me! >>>>>> >>>>>> Thank you, >>>>>> Austin >>>>>> >>>>>> On Mon, Mar 27, 2017 at 5:13 PM, Gaetan Kenway >>>>>> wrote: >>>>>> >>>>>>> Austin >>>>>>> >>>>>>> Here is the full makefile for a code we use. The variables defined >>>>>>> externally in a separate config file are: >>>>>>> $(FF90) >>>>>>> $(FF90_FLAGS) >>>>>>> $(LIBDIR) >>>>>>> $(PETSC_LINKER_FLAGS) >>>>>>> $(LINKER_FLAGS) >>>>>>> $(CGNS_LINKER_FLAGS) >>>>>>> >>>>>>> $(PYTHON) >>>>>>> $(PYTHON-CONIFG) >>>>>>> $(F2PY) >>>>>>> (These are usually use python, python-config and f2py. You can >>>>>>> overwrite as necessary) >>>>>>> >>>>>>> $(CC) >>>>>>> $(CC_ALL_FLAGS) >>>>>>> >>>>>>> This essentially just mimics what f2py does automatically but we >>>>>>> found it easier to control exactly what is going on. Essentially you are >>>>>>> just compiling exactly as you normally an executable, but instead make a >>>>>>> .so (with the -shared option) and including the additional .o files >>>>>>> generated by compiling the f2py-generated wrappers. >>>>>>> >>>>>>> Hope this helps, >>>>>>> Gaetan >>>>>>> >>>>>>> On Sat, Mar 25, 2017 at 5:38 AM, Lisandro Dalcin >>>>>>> wrote: >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On 22 March 2017 at 20:29, Barry Smith wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> Lisandro, >>>>>>>>> >>>>>>>>> We've had a couple questions similar to this with f2py; is >>>>>>>>> there a way we could add to the PETSc/SLEPc makefile rules something to >>>>>>>>> allow people to trivially use f2py without having to make their own (often >>>>>>>>> incorrect) manual command lines? >>>>>>>>> >>>>>>>>> Thanks >>>>>>>>> >>>>>>>>> >>>>>>>> Barry, it is quite hard and hacky to get f2py working in the >>>>>>>> general case. I think the email from Gaetan in this thread proves my point. >>>>>>>> >>>>>>>> IMHO, it is easier to write a small Fortran source exposing the API >>>>>>>> to call using ISO_C_BINDINGS, then wrap that code with the more traditional >>>>>>>> C-based "static" tools (SWIG, Cython) or even "dynamically" with ctypes or >>>>>>>> cffi (which use dlopen'ing). 
>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Lisandro Dalcin >>>>>>>> ============ >>>>>>>> Research Scientist >>>>>>>> Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) >>>>>>>> Extreme Computing Research Center (ECRC) >>>>>>>> King Abdullah University of Science and Technology (KAUST) >>>>>>>> http://ecrc.kaust.edu.sa/ >>>>>>>> >>>>>>>> 4700 King Abdullah University of Science and Technology >>>>>>>> al-Khawarizmi Bldg (Bldg 1), Office # 0109 >>>>>>>> Thuwal 23955-6900, Kingdom of Saudi Arabia >>>>>>>> http://www.kaust.edu.sa >>>>>>>> >>>>>>>> Office Phone: +966 12 808-0459 <+966%2012%20808%200459> >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> *Austin Herrema* >>>>>> PhD Student | Graduate Research Assistant | Iowa State University >>>>>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> *Austin Herrema* >>>> PhD Student | Graduate Research Assistant | Iowa State University >>>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>>> >>> >>> >> >> >> -- >> *Austin Herrema* >> PhD Student | Graduate Research Assistant | Iowa State University >> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >> > > > > -- > *Austin Herrema* > PhD Student | Graduate Research Assistant | Iowa State University > Wind Energy Science, Engineering, and Policy | Mechanical Engineering > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From toon.weyens at gmail.com Thu Mar 30 02:27:18 2017 From: toon.weyens at gmail.com (Toon Weyens) Date: Thu, 30 Mar 2017 07:27:18 +0000 Subject: [petsc-users] Slepc JD and GD converge to wrong eigenpair In-Reply-To: References: <65A0A5E7-399B-4D19-A967-73765A96DB98@dsic.upv.es> Message-ID: Hi, thanks for the answer. I use MUMPS as a PC. The options -ksp_converged_reason, -ksp_monitor_true_residual and -ksp_view are not used. The difference between the log_view outputs of running a simple solution with 1, 2, 3 or 4 MPI procs is attached (debug version). I can see that with 2 procs it takes about 22 seconds, versus 7 seconds for 1 proc. For 3 and 4 the situation is worse: 29 and 37 seconds. Looks like the difference is mainly in the BVmult and especially in the BVorthogonalize routines: BVmult takes 1, 6.5, 10 or even a whopping 17 seconds for the different number of proceses BVorthogonalize takes 1, 4, 6, 10. Calculating the preconditioner does not take more time for different number of proceses, and applying it only slightly increases. So it cannot be mumps' fault... Does this makes sense? Is there any way to improve this? Thanks! On Wed, Mar 29, 2017 at 3:20 PM Matthew Knepley wrote: On Wed, Mar 29, 2017 at 6:58 AM, Toon Weyens wrote: Dear Jose, Thanks for the answer. I am looking for the smallest real, indeed. I have, just now, accidentally figured out that I can get correct convergence by increasing NCV to higher values, so that's covered! I thought I had checked this before, but apparently not. It's converging well now, and rather fast (still about 8 times faster than Krylov-Schur). The issue now is that it scales rather badly: If I use 2 or more MPI processes, the time required to solve it goes up drastically. 
A small test case, on my Ubuntu 16.04 laptop, takes 10 seconds (blazing fast) for 1 MPI process, 25 for 2, 33 for 3, 59 for 4, etc... It is a machine with 8 cores, so i don't really understand why this is. For any scalability question, we need to see the output of -log_view -ksp_view -ksp_monitor_true_residual -ksp_converged_reason and other EPS options which I forget unfortunately. What seems likely here is that you are using a PC which is not scalable, so iteration would be going up. Thanks, Matt Are there other methods that can actually maintain the time required to solve for multiple MPI process? Or, preferable, decrease it (why else would I use multiple processes if not for memory restrictions)? I will never have to do something bigger than a generalized non-Hermitian ev problem of, let's say, 5000 blocks of 200x200 complex values per block, and a band size of about 11 blocks wide (so a few GB per matrix max). Thanks so much! On Wed, Mar 29, 2017 at 9:54 AM Jose E. Roman wrote: > El 29 mar 2017, a las 9:08, Toon Weyens escribi?: > > I started looking for alternatives from the standard Krylov-Schur method to solve the generalized eigenvalue problem Ax = kBx in my code. These matrices have a block-band structure (typically 5, 7 or 9 blocks wide, with block sizes of the order 20) of size typically 1000 blocks. This eigenvalue problem results from the minimization of the energy of a perturbed plasma-vacuum system in order to investigate its stability. So far, I've not taken advantage of the Hermiticity of the problem. > > For "easier" problems, especially the Generalized Davidson method converges like lightning, sometimes up to 100 times faster than Krylov-Schur. > > However, for slightly more complicated problems, GD converges to the wrong eigenpair: There is certainly an eigenpair with an eigenvalue lower than 0 (i.e. unstable), but the solver never gets below some small, positive value, to which it wrongly converges. I would need to know the settings you are using. Are you doing smallest_real? Maybe you can try target_magnitude with harmonic extraction. > > Is it possible to improve this behavior? I tried changing the preconditioner, but it did not work. > > Might it be possible to use Krylov-Schur until reaching some precision, and then switching to JD to quickly converge? Yes, you can do this, using EPSSetInitialSpace() in the second solve. But, depending on the settings, this may not buy you much. Jose > > Thanks! -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 1_procs Type: application/octet-stream Size: 13689 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 2_procs Type: application/octet-stream Size: 13846 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 3_procs Type: application/octet-stream Size: 13855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 4_procs Type: application/octet-stream Size: 13855 bytes Desc: not available URL: From jroman at dsic.upv.es Thu Mar 30 03:05:19 2017 From: jroman at dsic.upv.es (Jose E. 
Roman) Date: Thu, 30 Mar 2017 10:05:19 +0200 Subject: [petsc-users] Slepc JD and GD converge to wrong eigenpair In-Reply-To: References: <65A0A5E7-399B-4D19-A967-73765A96DB98@dsic.upv.es> Message-ID: <2A5BFE40-C401-42CA-944A-9008E57B55EB@dsic.upv.es>
> El 30 mar 2017, a las 9:27, Toon Weyens escribió: > > Hi, thanks for the answer. > > I use MUMPS as a PC. The options -ksp_converged_reason, -ksp_monitor_true_residual and -ksp_view are not used. > > The difference between the log_view outputs of running a simple solution with 1, 2, 3 or 4 MPI procs is attached (debug version). > > I can see that with 2 procs it takes about 22 seconds, versus 7 seconds for 1 proc. For 3 and 4 the situation is worse: 29 and 37 seconds. > > Looks like the difference is mainly in the BVmult and especially in the BVorthogonalize routines: > > BVmult takes 1, 6.5, 10 or even a whopping 17 seconds for the different number of proceses > BVorthogonalize takes 1, 4, 6, 10. > > Calculating the preconditioner does not take more time for different number of proceses, and applying it only slightly increases. So it cannot be mumps' fault... > > Does this makes sense? Is there any way to improve this? > > Thanks!
Cannot trust performance data in a debug build: ########################################################## # # # WARNING!!! # # # # This code was compiled with a debugging option, # # To get timing results run ./configure # # using --with-debugging=no, the performance will # # be generally two or three times faster. # # # ##########################################################
From knepley at gmail.com Thu Mar 30 07:47:58 2017 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 30 Mar 2017 07:47:58 -0500 Subject: [petsc-users] Slepc JD and GD converge to wrong eigenpair In-Reply-To: <2A5BFE40-C401-42CA-944A-9008E57B55EB@dsic.upv.es> References: <65A0A5E7-399B-4D19-A967-73765A96DB98@dsic.upv.es> <2A5BFE40-C401-42CA-944A-9008E57B55EB@dsic.upv.es> Message-ID: On Thu, Mar 30, 2017 at 3:05 AM, Jose E.
Roman wrote: > > > El 30 mar 2017, a las 9:27, Toon Weyens > escribi?: > > > > Hi, thanks for the answer. > > > > I use MUMPS as a PC. The options -ksp_converged_reason, > -ksp_monitor_true_residual and -ksp_view are not used. > > > > The difference between the log_view outputs of running a simple solution > with 1, 2, 3 or 4 MPI procs is attached (debug version). > > > > I can see that with 2 procs it takes about 22 seconds, versus 7 seconds > for 1 proc. For 3 and 4 the situation is worse: 29 and 37 seconds. > > > > Looks like the difference is mainly in the BVmult and especially in the > BVorthogonalize routines: > > > > BVmult takes 1, 6.5, 10 or even a whopping 17 seconds for the different > number of proceses > > BVorthogonalize takes 1, 4, 6, 10. > > > > Calculating the preconditioner does not take more time for different > number of proceses, and applying it only slightly increases. So it cannot > be mumps' fault... > > > > Does this makes sense? Is there any way to improve this? > > > > Thanks! > > Cannot trust performance data in a debug build: > Yes, you should definitely make another build configured using --with-debugging=no. What do you get for STREAMS on this machine make streams NP=4 >From this data, it looks like you have already saturated the bandwidth at 2 procs. Thanks, Matt > > ########################################################## > # # > # WARNING!!! # > # # > # This code was compiled with a debugging option, # > # To get timing results run ./configure # > # using --with-debugging=no, the performance will # > # be generally two or three times faster. # > # # > ########################################################## > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From aherrema at iastate.edu Thu Mar 30 09:12:28 2017 From: aherrema at iastate.edu (Austin Herrema) Date: Thu, 30 Mar 2017 09:12:28 -0500 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <93F249E6-656D-429A-96D2-E8831B406334@mcs.anl.gov> Message-ID: Indeed it did seem to be an issue with the integer of value 500 in that function call (8 byte/4 byte? Don't know...). Upon explicitly using a PetscInt variable, everything works just fine. Thank you, everyone, for your patient help! Best, Austin On Wed, Mar 29, 2017 at 6:48 PM, Matthew Knepley wrote: > On Wed, Mar 29, 2017 at 5:12 PM, Austin Herrema > wrote: > >> Quick update on this issue in case it brings any other thoughts/ideas to >> light. For a very simple, small problem, I am successfully able to use >> MatSeqAIJSetPreallocation in a fortran-based code compiled for python via >> f2py. I am still unsure why, in a larger code, this particular function >> call fails when the code is executed in python (on a setup that runs fine >> under pure Fortran). Does the error " nnz cannot be greater than row >> length: local row 2 value 1330400321 rowlength 37065" imply that the >> program thinks I am trying to allocate 1330400321 nonzeros in a row of >> max length 37065? >> > > Yes. > > Thanks, > > Matt > > >> That is obviously not my intent nor what I think I have coded. I am >> trying to skip preallocation and use merely MatSetUp but, as we would >> expect, the dynamic allocation is ridiculously slow... 
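As a rough sketch of the fix Austin describes above (declaring the count as a PetscInt instead of passing a bare integer literal), using the matrix and error-code variables from the code block quoted in this thread and assuming the usual PETSc Fortran includes/modules are in scope:

  PetscInt :: nz
  PetscErrorCode :: pc_ier

  ! A default Fortran integer literal may not match the width PETSc
  ! expects for a PetscInt argument, which can produce garbage counts
  ! like the "1330400321" seen in the error message above.
  nz = 500
  call MatSeqAIJSetPreallocation(LHS_pc, nz, PETSC_NULL_INTEGER, pc_ier)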
>> >> On Wed, Mar 29, 2017 at 11:27 AM, Austin Herrema >> wrote: >> >>> Got it--just had to link against other compiled source, as you said. >>> I've attached my makefile for doing everything (including variable >>> definitions, compiling source, and running requisite f2py commands) in case >>> that's helpful for anyone else trying to do something similar. But >>> obviously the meat of it is in what Gaetan provided. >>> >>> I am now able to successfully run simple PETSc-based fortran codes in >>> python. For a larger, more complex code, I am getting some PETSc errors >>> when running in python that I don't normally get. In particular, >>> preallocation is failing--the relevant fortran code block and PETSc error >>> is below. >>> >>> >>> call MatCreate(PETSC_COMM_WORLD, LHS_pc, pc_ier) >>> call MatSetSizes(LHS_pc, PETSC_DECIDE, PETSC_DECIDE, NSD*FUN%NNODE, >>> NSD*FUN%NNODE, pc_ier) >>> call MatSetFromOptions(LHS_pc, pc_ier) >>> call MatSeqAIJSetPreallocation(LHS_pc, 500, PETSC_NULL_INTEGER, >>> pc_ier) >>> >>> >>> [0]PETSC ERROR: --------------------- Error Message >>> -------------------------------------------------------------- >>> [0]PETSC ERROR: Argument out of range >>> [0]PETSC ERROR: nnz cannot be greater than row length: local row 2 value >>> 1330400321 rowlength 37065 >>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >>> for trouble shooting. >>> [0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017 >>> [0]PETSC ERROR: Unknown Name on a real named >>> austin-ethernet.student.iastate.edu by Austin Wed Mar 29 10:59:33 2017 >>> [0]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc >>> CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 >>> FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 >>> --with-openmp=0 --with-debugging=1 --with-ssl=0 >>> --with-superlu_dist-include=/usr/local/opt/superlu_dist/include >>> --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib >>> -lsuperlu_dist" --with-fftw-dir=/usr/local/opt/fftw >>> --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse >>> --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis >>> --with-parmetis-dir=/usr/local/opt/parmetis >>> --with-scalapack-dir=/usr/local/opt/scalapack >>> --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 >>> --prefix=/usr/local/Cellar/petsc/3.7.5/real --with-scalar-type=real >>> --with-hypre-dir=/usr/local/opt/hypre --with-sundials-dir=/usr/local/opt/sundials >>> --with-hwloc-dir=/usr/local/opt/hwloc >>> [0]PETSC ERROR: #1 MatSeqAIJSetPreallocation_SeqAIJ() line 3598 in >>> /private/tmp/petsc-20170223-508-1xeniyc/petsc-3.7.5/src/mat/ >>> impls/aij/seq/aij.c >>> [0]PETSC ERROR: #2 MatSeqAIJSetPreallocation() line 3570 in >>> /private/tmp/petsc-20170223-508-1xeniyc/petsc-3.7.5/src/mat/ >>> impls/aij/seq/aij.c >>> >>> >>> Is there anything about the MatSeqAIJSetPreallocation function that >>> would make it not work correctly in Python even though everything else >>> seems to work properly? If anyone has thoughts on this that would be great. >>> But, again, I do realize I'm venturing into potentially unsupported >>> territory. >>> >>> >>> On Tue, Mar 28, 2017 at 4:53 PM, Gaetan Kenway >>> wrote: >>> >>>> Looks like it isn't finding your source from run_analysis.f90. You >>>> still need to compile that yourself and include in the final link. In my >>>> example, all the "original" source code was precompiled into a library from >>>> a different makefile and then this was run after-the-fact. 
>>>> >>>> Gaetan >>>> >>>> On Tue, Mar 28, 2017 at 2:38 PM, Austin Herrema >>>> wrote: >>>> >>>>> Gotcha. In that case, it seems I should be good without that line. >>>>> I've gotten the compile to succeed, but upon attempting to import the >>>>> module I get the following: >>>>> >>>>> >>> import run_analysis_final >>>>> Traceback (most recent call last): >>>>> File "", line 1, in >>>>> ImportError: dlopen(./run_analysis_final.so, 2): Symbol not found: >>>>> _run_analysis_ >>>>> Referenced from: ./run_analysis_final.so >>>>> Expected in: flat namespace >>>>> in ./run_analysis_final.so >>>>> >>>>> Seems I may have gotten the linking wrong somehow. Will keep >>>>> searching, but the simplified makefile that I used is attached in case >>>>> anyone thinks they might be able to spot the issue in it. That said, I do >>>>> realize that this may be starting to reach beyond the scope of this mailing >>>>> list so feel free to ignore... >>>>> >>>>> On Tue, Mar 28, 2017 at 2:31 PM, Gaetan Kenway >>>>> wrote: >>>>> >>>>>> You only get that file if you have wrapped a module explicitly in the >>>>>> .pyf file. If you haven't wrapped a module, that doesn't get created. >>>>>> >>>>>> Gaetan >>>>>> >>>>>> On Tue, Mar 28, 2017 at 12:28 PM, Austin Herrema < >>>>>> aherrema at iastate.edu> wrote: >>>>>> >>>>>>> Gaetan, >>>>>>> >>>>>>> Thank you for this. With your help, I think I am getting close to >>>>>>> getting this to work for my case. At the moment, I am hung up on the line >>>>>>> of your makefile which reads "$(FF90) $(FF90_ALL_FLAGS) -I$(MAIN_DIR)/mod >>>>>>> -c warpustruct-f2pywrappers2.f90". Am I correct that >>>>>>> warpustruct-f2pywrappers2.f90 should be generated by f2py? If so, do you >>>>>>> (or does anyone else) know the command for telling f2py to do so? At the >>>>>>> moment I am using: >>>>>>> >>>>>>> f2py run_analysis.f90 -m run_analysis -h run_analysis.pyf >>>>>>> >>>>>>> to get the requisite .pyf and .c files, but no .f90 file. If I am >>>>>>> wrong about the origin of this file, please do tell me! >>>>>>> >>>>>>> Thank you, >>>>>>> Austin >>>>>>> >>>>>>> On Mon, Mar 27, 2017 at 5:13 PM, Gaetan Kenway >>>>>>> wrote: >>>>>>> >>>>>>>> Austin >>>>>>>> >>>>>>>> Here is the full makefile for a code we use. The variables defined >>>>>>>> externally in a separate config file are: >>>>>>>> $(FF90) >>>>>>>> $(FF90_FLAGS) >>>>>>>> $(LIBDIR) >>>>>>>> $(PETSC_LINKER_FLAGS) >>>>>>>> $(LINKER_FLAGS) >>>>>>>> $(CGNS_LINKER_FLAGS) >>>>>>>> >>>>>>>> $(PYTHON) >>>>>>>> $(PYTHON-CONIFG) >>>>>>>> $(F2PY) >>>>>>>> (These are usually use python, python-config and f2py. You can >>>>>>>> overwrite as necessary) >>>>>>>> >>>>>>>> $(CC) >>>>>>>> $(CC_ALL_FLAGS) >>>>>>>> >>>>>>>> This essentially just mimics what f2py does automatically but we >>>>>>>> found it easier to control exactly what is going on. Essentially you are >>>>>>>> just compiling exactly as you normally an executable, but instead make a >>>>>>>> .so (with the -shared option) and including the additional .o files >>>>>>>> generated by compiling the f2py-generated wrappers. 
>>>>>>>> >>>>>>>> Hope this helps, >>>>>>>> Gaetan >>>>>>>> >>>>>>>> On Sat, Mar 25, 2017 at 5:38 AM, Lisandro Dalcin >>>>>>> > wrote: >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> On 22 March 2017 at 20:29, Barry Smith wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> Lisandro, >>>>>>>>>> >>>>>>>>>> We've had a couple questions similar to this with f2py; is >>>>>>>>>> there a way we could add to the PETSc/SLEPc makefile rules something to >>>>>>>>>> allow people to trivially use f2py without having to make their own (often >>>>>>>>>> incorrect) manual command lines? >>>>>>>>>> >>>>>>>>>> Thanks >>>>>>>>>> >>>>>>>>>> >>>>>>>>> Barry, it is quite hard and hacky to get f2py working in the >>>>>>>>> general case. I think the email from Gaetan in this thread proves my point. >>>>>>>>> >>>>>>>>> IMHO, it is easier to write a small Fortran source exposing the >>>>>>>>> API to call using ISO_C_BINDINGS, then wrap that code with the more >>>>>>>>> traditional C-based "static" tools (SWIG, Cython) or even "dynamically" >>>>>>>>> with ctypes or cffi (which use dlopen'ing). >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Lisandro Dalcin >>>>>>>>> ============ >>>>>>>>> Research Scientist >>>>>>>>> Computer, Electrical and Mathematical Sciences & Engineering >>>>>>>>> (CEMSE) >>>>>>>>> Extreme Computing Research Center (ECRC) >>>>>>>>> King Abdullah University of Science and Technology (KAUST) >>>>>>>>> http://ecrc.kaust.edu.sa/ >>>>>>>>> >>>>>>>>> 4700 King Abdullah University of Science and Technology >>>>>>>>> al-Khawarizmi Bldg (Bldg 1), Office # 0109 >>>>>>>>> Thuwal 23955-6900, Kingdom of Saudi Arabia >>>>>>>>> http://www.kaust.edu.sa >>>>>>>>> >>>>>>>>> Office Phone: +966 12 808-0459 <+966%2012%20808%200459> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> *Austin Herrema* >>>>>>> PhD Student | Graduate Research Assistant | Iowa State University >>>>>>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> *Austin Herrema* >>>>> PhD Student | Graduate Research Assistant | Iowa State University >>>>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>>>> >>>> >>>> >>> >>> >>> -- >>> *Austin Herrema* >>> PhD Student | Graduate Research Assistant | Iowa State University >>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>> >> >> >> >> -- >> *Austin Herrema* >> PhD Student | Graduate Research Assistant | Iowa State University >> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- *Austin Herrema* PhD Student | Graduate Research Assistant | Iowa State University Wind Energy Science, Engineering, and Policy | Mechanical Engineering -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaetank at gmail.com Thu Mar 30 12:01:29 2017 From: gaetank at gmail.com (Gaetan Kenway) Date: Thu, 30 Mar 2017 10:01:29 -0700 Subject: [petsc-users] How to use f2py on a PETSc/SLEPc-based fortran code In-Reply-To: References: <6ED90790-6A81-4540-8BFF-57E6B9F9635D@dsic.upv.es> <93F249E6-656D-429A-96D2-E8831B406334@mcs.anl.gov> Message-ID: Great. Best of luck with your Ph.D. Gaetan On Thu, Mar 30, 2017 at 7:12 AM, Austin Herrema wrote: > Indeed it did seem to be an issue with the integer of value 500 in that > function call (8 byte/4 byte? Don't know...). 
Upon explicitly using a > PetscInt variable, everything works just fine. > > Thank you, everyone, for your patient help! > > Best, > Austin > > > > On Wed, Mar 29, 2017 at 6:48 PM, Matthew Knepley > wrote: > >> On Wed, Mar 29, 2017 at 5:12 PM, Austin Herrema >> wrote: >> >>> Quick update on this issue in case it brings any other thoughts/ideas to >>> light. For a very simple, small problem, I am successfully able to use >>> MatSeqAIJSetPreallocation in a fortran-based code compiled for python via >>> f2py. I am still unsure why, in a larger code, this particular function >>> call fails when the code is executed in python (on a setup that runs fine >>> under pure Fortran). Does the error " nnz cannot be greater than row >>> length: local row 2 value 1330400321 rowlength 37065" imply that the >>> program thinks I am trying to allocate 1330400321 nonzeros in a row of >>> max length 37065? >>> >> >> Yes. >> >> Thanks, >> >> Matt >> >> >>> That is obviously not my intent nor what I think I have coded. I am >>> trying to skip preallocation and use merely MatSetUp but, as we would >>> expect, the dynamic allocation is ridiculously slow... >>> >>> On Wed, Mar 29, 2017 at 11:27 AM, Austin Herrema >>> wrote: >>> >>>> Got it--just had to link against other compiled source, as you said. >>>> I've attached my makefile for doing everything (including variable >>>> definitions, compiling source, and running requisite f2py commands) in case >>>> that's helpful for anyone else trying to do something similar. But >>>> obviously the meat of it is in what Gaetan provided. >>>> >>>> I am now able to successfully run simple PETSc-based fortran codes in >>>> python. For a larger, more complex code, I am getting some PETSc errors >>>> when running in python that I don't normally get. In particular, >>>> preallocation is failing--the relevant fortran code block and PETSc error >>>> is below. >>>> >>>> >>>> call MatCreate(PETSC_COMM_WORLD, LHS_pc, pc_ier) >>>> call MatSetSizes(LHS_pc, PETSC_DECIDE, PETSC_DECIDE, NSD*FUN%NNODE, >>>> NSD*FUN%NNODE, pc_ier) >>>> call MatSetFromOptions(LHS_pc, pc_ier) >>>> call MatSeqAIJSetPreallocation(LHS_pc, 500, PETSC_NULL_INTEGER, >>>> pc_ier) >>>> >>>> >>>> [0]PETSC ERROR: --------------------- Error Message >>>> -------------------------------------------------------------- >>>> [0]PETSC ERROR: Argument out of range >>>> [0]PETSC ERROR: nnz cannot be greater than row length: local row 2 >>>> value 1330400321 rowlength 37065 >>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >>>> for trouble shooting. 
>>>> [0]PETSC ERROR: Petsc Release Version 3.7.5, Jan, 01, 2017 >>>> [0]PETSC ERROR: Unknown Name on a real named >>>> austin-ethernet.student.iastate.edu by Austin Wed Mar 29 10:59:33 2017 >>>> [0]PETSC ERROR: Configure options CC=/usr/local/bin/mpicc >>>> CXX=/usr/local/bin/mpicxx F77=/usr/local/bin/mpif77 >>>> FC=/usr/local/bin/mpif90 --with-shared-libraries=1 --with-pthread=0 >>>> --with-openmp=0 --with-debugging=1 --with-ssl=0 >>>> --with-superlu_dist-include=/usr/local/opt/superlu_dist/include >>>> --with-superlu_dist-lib="-L/usr/local/opt/superlu_dist/lib >>>> -lsuperlu_dist" --with-fftw-dir=/usr/local/opt/fftw >>>> --with-netcdf-dir=/usr/local/opt/netcdf --with-suitesparse-dir=/usr/local/opt/suite-sparse >>>> --with-hdf5-dir=/usr/local/opt/hdf5 --with-metis-dir=/usr/local/opt/metis >>>> --with-parmetis-dir=/usr/local/opt/parmetis >>>> --with-scalapack-dir=/usr/local/opt/scalapack >>>> --with-mumps-dir=/usr/local/opt/mumps/libexec --with-x=0 >>>> --prefix=/usr/local/Cellar/petsc/3.7.5/real --with-scalar-type=real >>>> --with-hypre-dir=/usr/local/opt/hypre --with-sundials-dir=/usr/local/opt/sundials >>>> --with-hwloc-dir=/usr/local/opt/hwloc >>>> [0]PETSC ERROR: #1 MatSeqAIJSetPreallocation_SeqAIJ() line 3598 in >>>> /private/tmp/petsc-20170223-508-1xeniyc/petsc-3.7.5/src/mat/ >>>> impls/aij/seq/aij.c >>>> [0]PETSC ERROR: #2 MatSeqAIJSetPreallocation() line 3570 in >>>> /private/tmp/petsc-20170223-508-1xeniyc/petsc-3.7.5/src/mat/ >>>> impls/aij/seq/aij.c >>>> >>>> >>>> Is there anything about the MatSeqAIJSetPreallocation function that >>>> would make it not work correctly in Python even though everything else >>>> seems to work properly? If anyone has thoughts on this that would be great. >>>> But, again, I do realize I'm venturing into potentially unsupported >>>> territory. >>>> >>>> >>>> On Tue, Mar 28, 2017 at 4:53 PM, Gaetan Kenway >>>> wrote: >>>> >>>>> Looks like it isn't finding your source from run_analysis.f90. You >>>>> still need to compile that yourself and include in the final link. In my >>>>> example, all the "original" source code was precompiled into a library from >>>>> a different makefile and then this was run after-the-fact. >>>>> >>>>> Gaetan >>>>> >>>>> On Tue, Mar 28, 2017 at 2:38 PM, Austin Herrema >>>>> wrote: >>>>> >>>>>> Gotcha. In that case, it seems I should be good without that line. >>>>>> I've gotten the compile to succeed, but upon attempting to import the >>>>>> module I get the following: >>>>>> >>>>>> >>> import run_analysis_final >>>>>> Traceback (most recent call last): >>>>>> File "", line 1, in >>>>>> ImportError: dlopen(./run_analysis_final.so, 2): Symbol not found: >>>>>> _run_analysis_ >>>>>> Referenced from: ./run_analysis_final.so >>>>>> Expected in: flat namespace >>>>>> in ./run_analysis_final.so >>>>>> >>>>>> Seems I may have gotten the linking wrong somehow. Will keep >>>>>> searching, but the simplified makefile that I used is attached in case >>>>>> anyone thinks they might be able to spot the issue in it. That said, I do >>>>>> realize that this may be starting to reach beyond the scope of this mailing >>>>>> list so feel free to ignore... >>>>>> >>>>>> On Tue, Mar 28, 2017 at 2:31 PM, Gaetan Kenway >>>>>> wrote: >>>>>> >>>>>>> You only get that file if you have wrapped a module explicitly in >>>>>>> the .pyf file. If you haven't wrapped a module, that doesn't get created. 
>>>>>>> >>>>>>> Gaetan >>>>>>> >>>>>>> On Tue, Mar 28, 2017 at 12:28 PM, Austin Herrema < >>>>>>> aherrema at iastate.edu> wrote: >>>>>>> >>>>>>>> Gaetan, >>>>>>>> >>>>>>>> Thank you for this. With your help, I think I am getting close to >>>>>>>> getting this to work for my case. At the moment, I am hung up on the line >>>>>>>> of your makefile which reads "$(FF90) $(FF90_ALL_FLAGS) -I$(MAIN_DIR)/mod >>>>>>>> -c warpustruct-f2pywrappers2.f90". Am I correct that >>>>>>>> warpustruct-f2pywrappers2.f90 should be generated by f2py? If so, do you >>>>>>>> (or does anyone else) know the command for telling f2py to do so? At the >>>>>>>> moment I am using: >>>>>>>> >>>>>>>> f2py run_analysis.f90 -m run_analysis -h run_analysis.pyf >>>>>>>> >>>>>>>> to get the requisite .pyf and .c files, but no .f90 file. If I am >>>>>>>> wrong about the origin of this file, please do tell me! >>>>>>>> >>>>>>>> Thank you, >>>>>>>> Austin >>>>>>>> >>>>>>>> On Mon, Mar 27, 2017 at 5:13 PM, Gaetan Kenway >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Austin >>>>>>>>> >>>>>>>>> Here is the full makefile for a code we use. The variables defined >>>>>>>>> externally in a separate config file are: >>>>>>>>> $(FF90) >>>>>>>>> $(FF90_FLAGS) >>>>>>>>> $(LIBDIR) >>>>>>>>> $(PETSC_LINKER_FLAGS) >>>>>>>>> $(LINKER_FLAGS) >>>>>>>>> $(CGNS_LINKER_FLAGS) >>>>>>>>> >>>>>>>>> $(PYTHON) >>>>>>>>> $(PYTHON-CONIFG) >>>>>>>>> $(F2PY) >>>>>>>>> (These are usually use python, python-config and f2py. You can >>>>>>>>> overwrite as necessary) >>>>>>>>> >>>>>>>>> $(CC) >>>>>>>>> $(CC_ALL_FLAGS) >>>>>>>>> >>>>>>>>> This essentially just mimics what f2py does automatically but we >>>>>>>>> found it easier to control exactly what is going on. Essentially you are >>>>>>>>> just compiling exactly as you normally an executable, but instead make a >>>>>>>>> .so (with the -shared option) and including the additional .o files >>>>>>>>> generated by compiling the f2py-generated wrappers. >>>>>>>>> >>>>>>>>> Hope this helps, >>>>>>>>> Gaetan >>>>>>>>> >>>>>>>>> On Sat, Mar 25, 2017 at 5:38 AM, Lisandro Dalcin < >>>>>>>>> dalcinl at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On 22 March 2017 at 20:29, Barry Smith >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Lisandro, >>>>>>>>>>> >>>>>>>>>>> We've had a couple questions similar to this with f2py; is >>>>>>>>>>> there a way we could add to the PETSc/SLEPc makefile rules something to >>>>>>>>>>> allow people to trivially use f2py without having to make their own (often >>>>>>>>>>> incorrect) manual command lines? >>>>>>>>>>> >>>>>>>>>>> Thanks >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> Barry, it is quite hard and hacky to get f2py working in the >>>>>>>>>> general case. I think the email from Gaetan in this thread proves my point. >>>>>>>>>> >>>>>>>>>> IMHO, it is easier to write a small Fortran source exposing the >>>>>>>>>> API to call using ISO_C_BINDINGS, then wrap that code with the more >>>>>>>>>> traditional C-based "static" tools (SWIG, Cython) or even "dynamically" >>>>>>>>>> with ctypes or cffi (which use dlopen'ing). 
>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Lisandro Dalcin >>>>>>>>>> ============ >>>>>>>>>> Research Scientist >>>>>>>>>> Computer, Electrical and Mathematical Sciences & Engineering >>>>>>>>>> (CEMSE) >>>>>>>>>> Extreme Computing Research Center (ECRC) >>>>>>>>>> King Abdullah University of Science and Technology (KAUST) >>>>>>>>>> http://ecrc.kaust.edu.sa/ >>>>>>>>>> >>>>>>>>>> 4700 King Abdullah University of Science and Technology >>>>>>>>>> al-Khawarizmi Bldg (Bldg 1), Office # 0109 >>>>>>>>>> Thuwal 23955-6900, Kingdom of Saudi Arabia >>>>>>>>>> http://www.kaust.edu.sa >>>>>>>>>> >>>>>>>>>> Office Phone: +966 12 808-0459 <+966%2012%20808%200459> >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> *Austin Herrema* >>>>>>>> PhD Student | Graduate Research Assistant | Iowa State University >>>>>>>> Wind Energy Science, Engineering, and Policy | Mechanical >>>>>>>> Engineering >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> *Austin Herrema* >>>>>> PhD Student | Graduate Research Assistant | Iowa State University >>>>>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> *Austin Herrema* >>>> PhD Student | Graduate Research Assistant | Iowa State University >>>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>>> >>> >>> >>> >>> -- >>> *Austin Herrema* >>> PhD Student | Graduate Research Assistant | Iowa State University >>> Wind Energy Science, Engineering, and Policy | Mechanical Engineering >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > *Austin Herrema* > PhD Student | Graduate Research Assistant | Iowa State University > Wind Energy Science, Engineering, and Policy | Mechanical Engineering > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jychang48 at gmail.com Thu Mar 30 14:17:58 2017 From: jychang48 at gmail.com (Justin Chang) Date: Thu, 30 Mar 2017 14:17:58 -0500 Subject: [petsc-users] Correlation between da_refine and pg_mg_levels Message-ID: Hi all, Just a general conceptual question: say I am tinkering around with SNES ex48.c and am running the program with these options: mpirun -n $NPROCS -pc_type mg -M $XSEED -N $YSEED -P $ZSEED -thi_mat_type baij -da_refine $DAREFINE -pc_mg_levels $MGLEVELS I am not too familiar with mg, but it seems to me there is a very strong correlation between $MGLEVELS and $DAREFINE as well as perhaps even the initial coarse grid size (provided by $X/YZSEED). Is there a rule of thumb on how these parameters should be? I am guessing it probably is also hardware/architectural dependent? Thanks, Justin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Thu Mar 30 14:23:04 2017 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 30 Mar 2017 14:23:04 -0500 Subject: [petsc-users] Correlation between da_refine and pg_mg_levels In-Reply-To: References: Message-ID: On Thu, Mar 30, 2017 at 2:17 PM, Justin Chang wrote: > Hi all, > > Just a general conceptual question: say I am tinkering around with SNES > ex48.c and am running the program with these options: > > mpirun -n $NPROCS -pc_type mg -M $XSEED -N $YSEED -P $ZSEED -thi_mat_type > baij -da_refine $DAREFINE -pc_mg_levels $MGLEVELS > > I am not too familiar with mg, but it seems to me there is a very strong > correlation between $MGLEVELS and $DAREFINE as well as perhaps even the > initial coarse grid size (provided by $X/YZSEED). > > Is there a rule of thumb on how these parameters should be? I am guessing > it probably is also hardware/architectural dependent? > You cannot refine further than 1 grid point in any direction. Moreover, DMDA has some problems with MG on grids without an odd number of vertices I think. Matt > Thanks, > Justin > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Mar 30 14:25:27 2017 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 30 Mar 2017 14:25:27 -0500 Subject: [petsc-users] Configure nested PCFIELDSPLIT with general index sets In-Reply-To: References: <6496846F-19F8-4494-87E1-DDC390513370@imperial.ac.uk> Message-ID: On Wed, Mar 22, 2017 at 1:45 PM, Natacha BEREUX wrote: > Hello Matt, > Thanks a lot for your answers. > Since I am working on a large FEM Fortran code, I have to stick to > Fortran. > Do you know if someone plans to add this Fortran interface? Or may be I > could do it myself ? Is this particular interface very hard to add ? > Perhaps could I mimic some other interface ? > What would you advise ? > I have added the interface in branch knepley/feature-fortran-compose. I also put this in the 'next' branch. It should make it to master soon. There is a test in sys/examples/tests/ex13f Thanks, Matt > Best regards, > Natacha > > On Wed, Mar 22, 2017 at 12:33 PM, Matthew Knepley > wrote: > >> On Wed, Mar 22, 2017 at 10:03 AM, Natacha BEREUX < >> natacha.bereux at gmail.com> wrote: >> >>> Hello, >>> if my understanding is correct, the approach proposed by Matt and >>> Lawrence is the following : >>> - create a DMShell (DMShellCreate) >>> - define my own CreateFieldDecomposition to return the index sets I need >>> (for displacement, pressure and temperature degrees of freedom) : >>> myCreateFieldDecomposition(... ) >>> - set it in the DMShell ( DMShellSetCreateFieldDecomposition) >>> - then sets the DM in KSP context (KSPSetDM) >>> >>> I have some more questions >>> - I did not succeed in setting my own CreateFieldDecomposition in the >>> DMShell : link fails with " unknown reference to ? >>> dmshellsetcreatefielddecomposition_ ?. Could it be a Fortran problem (I >>> am using Fortran)? Is this routine available in PETSc Fortran interface ? >>> \ >>> >> >> Yes, exactly. The Fortran interface for passing function pointers is >> complex, and no one has added this function yet. >> >> >>> - CreateFieldDecomposition is supposed to return an array of dms (to >>> define the fields). I am not able to return such datas. Do I return a >>> PETSC_NULL_OBJECT instead ? >>> >> >> Yes. 
>> >> >>> - do I have to provide something else to define the DMShell ? >>> >> >> I think you will have to return local and global vectors, but this just >> means creating a vector of the correct size and distribution. >> >> Thanks, >> >> Matt >> >> >>> Thanks a lot for your help >>> Natacha >>> >>> On Tue, Mar 21, 2017 at 2:44 PM, Natacha BEREUX < >>> natacha.bereux at gmail.com> wrote: >>> >>>> Thanks for your quick answers. To be honest, I am not familiar at all >>>> with DMShells and DMPlexes. But since it is what I need, I am going to try >>>> it. >>>> Thanks again for your advices, >>>> Natacha >>>> >>>> On Tue, Mar 21, 2017 at 2:27 PM, Lawrence Mitchell < >>>> lawrence.mitchell at imperial.ac.uk> wrote: >>>> >>>>> >>>>> > On 21 Mar 2017, at 13:24, Matthew Knepley wrote: >>>>> > >>>>> > I think the remedy is as easy as specifying a DMShell that has a >>>>> PetscSection (DMSetDefaultSection) with your ordering, and >>>>> > I think this is how Firedrake (http://www.firedrakeproject.org/) >>>>> does it. >>>>> >>>>> We actually don't use a section, but we do provide >>>>> DMCreateFieldDecomposition_Shell. >>>>> >>>>> If you have a section that describes all the fields, then I think if >>>>> the DMShell knows about it, you effectively get the same behaviour as >>>>> DMPlex (which does the decomposition in the same manner?). >>>>> >>>>> > However, I usually use a DMPlex which knows about my >>>>> > mesh, so I am not sure if this strategy has any holes. >>>>> >>>>> I haven't noticed anything yet. >>>>> >>>>> Lawrence >>>> >>>> >>>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Mar 30 14:35:22 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 30 Mar 2017 14:35:22 -0500 Subject: [petsc-users] Correlation between da_refine and pg_mg_levels In-Reply-To: References: Message-ID: -da_refine $DAREFINE determines how large the final problem will be. By default if you don't supply pc_mg_levels then it uses $DAREFINE + 1 as the number of levels of MG to use; for example -da_refine 1 would result in 2 levels of multigrid. > On Mar 30, 2017, at 2:17 PM, Justin Chang wrote: > > Hi all, > > Just a general conceptual question: say I am tinkering around with SNES ex48.c and am running the program with these options: > > mpirun -n $NPROCS -pc_type mg -M $XSEED -N $YSEED -P $ZSEED -thi_mat_type baij -da_refine $DAREFINE -pc_mg_levels $MGLEVELS > > I am not too familiar with mg, but it seems to me there is a very strong correlation between $MGLEVELS and $DAREFINE as well as perhaps even the initial coarse grid size (provided by $X/YZSEED). > > Is there a rule of thumb on how these parameters should be? I am guessing it probably is also hardware/architectural dependent? > > Thanks, > Justin From gideon.simpson at gmail.com Thu Mar 30 15:02:09 2017 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Thu, 30 Mar 2017 16:02:09 -0400 Subject: [petsc-users] understanding snes_view output Message-ID: When running something with -snes_monitor and -snes_view, I see two sets of numbers that I'm trying to understand (see below). 
The first is the sequence X SNES Function norm, with X going from 0 to 3. I had interpreted this as saying that it takes 3 steps of Newton, though perhaps this is not the case. The next is "total number of linear solves=4" and "total number of function evaluations=31". How do these numbers relegate to the SNES Function norm statements? Also, I was surprised by the number of function evaluations given that I specify a SNESSetJacobian in the problem. 0 SNES Function norm 7.630295941712e-03 1 SNES Function norm 3.340185037212e-06 2 SNES Function norm 1.310176068229e-13 3 SNES Function norm 1.464821375527e-14 SNES Object: 4 MPI processes type: newtonls maximum iterations=50, maximum function evaluations=10000 tolerances: relative=1e-15, absolute=1e-50, solution=1e-15 total number of linear solver iterations=4 total number of function evaluations=31 norm schedule ALWAYS SNESLineSearch Object: 4 MPI processes type: bt interpolation: cubic alpha=1.000000e-04 maxstep=1.000000e+08, minlambda=1.000000e-12 tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 maximum iterations=40 KSP Object: 4 MPI processes type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using PRECONDITIONED norm type for convergence test PC Object: 4 MPI processes type: bjacobi block Jacobi: number of blocks = 4 Local solve is same for all blocks, in the following KSP and PC objects: KSP Object: (sub_) 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (sub_) 1 MPI processes type: ilu ILU: out-of-place factorization 0 levels of fill tolerance for zero pivot 2.22045e-14 matrix ordering: natural factor fill ratio given 1, needed 1 Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=2, cols=2 package used to perform factorization: petsc total: nonzeros=4, allocated nonzeros=4 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=2, cols=2 total: nonzeros=4, allocated nonzeros=10 total number of mallocs used during MatSetValues calls =0 using I-node routines: found 1 nodes, limit used is 5 linear system matrix = precond matrix: Mat Object: 4 MPI processes type: mpiaij rows=2, cols=2 total: nonzeros=4, allocated nonzeros=20 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 1 nodes, limit used is 5 -gideon From jychang48 at gmail.com Thu Mar 30 15:04:04 2017 From: jychang48 at gmail.com (Justin Chang) Date: Thu, 30 Mar 2017 15:04:04 -0500 Subject: [petsc-users] Correlation between da_refine and pg_mg_levels In-Reply-To: References: Message-ID: Yeah based on my experiments it seems setting pc_mg_levels to $DAREFINE + 1 has decent performance. 1) is there ever a case where you'd want $MGLEVELS <= $DAREFINE? 
In some of the PETSc tutorial slides (e.g., http://www.mcs.anl.gov/ petsc/documentation/tutorials/TutorialCEMRACS2016.pdf on slide 203/227) they say to use $MGLEVELS = 4 and $DAREFINE = 5, but when I ran this, it was almost twice as slow as if $MGLEVELS >= $DAREFINE 2) So I understand that one cannot refine further than one grid point in each direction, but is there any benefit to having $MGLEVELS > $DAREFINE by a lot? Thanks, Justin On Thu, Mar 30, 2017 at 2:35 PM, Barry Smith wrote: > > -da_refine $DAREFINE determines how large the final problem will be. > > By default if you don't supply pc_mg_levels then it uses $DAREFINE + 1 > as the number of levels of MG to use; for example -da_refine 1 would result > in 2 levels of multigrid. > > > > On Mar 30, 2017, at 2:17 PM, Justin Chang wrote: > > > > Hi all, > > > > Just a general conceptual question: say I am tinkering around with SNES > ex48.c and am running the program with these options: > > > > mpirun -n $NPROCS -pc_type mg -M $XSEED -N $YSEED -P $ZSEED > -thi_mat_type baij -da_refine $DAREFINE -pc_mg_levels $MGLEVELS > > > > I am not too familiar with mg, but it seems to me there is a very strong > correlation between $MGLEVELS and $DAREFINE as well as perhaps even the > initial coarse grid size (provided by $X/YZSEED). > > > > Is there a rule of thumb on how these parameters should be? I am > guessing it probably is also hardware/architectural dependent? > > > > Thanks, > > Justin > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Mar 30 15:37:04 2017 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 30 Mar 2017 15:37:04 -0500 Subject: [petsc-users] Correlation between da_refine and pg_mg_levels In-Reply-To: References: Message-ID: On Thu, Mar 30, 2017 at 3:04 PM, Justin Chang wrote: > Yeah based on my experiments it seems setting pc_mg_levels to $DAREFINE + > 1 has decent performance. > > 1) is there ever a case where you'd want $MGLEVELS <= $DAREFINE? In some > of the PETSc tutorial slides (e.g., http://www.mcs.anl.gov/ > petsc/documentation/tutorials/TutorialCEMRACS2016.pdf on slide 203/227) > they say to use $MGLEVELS = 4 and $DAREFINE = 5, but when I ran this, it > was almost twice as slow as if $MGLEVELS >= $DAREFINE > Depending on how big the initial grid is, you may want this. There is a balance between coarse grid and fine grid work. > 2) So I understand that one cannot refine further than one grid point in > each direction, but is there any benefit to having $MGLEVELS > $DAREFINE by > a lot? > Again, it depends on the size of the initial grid. On really large problems, you want to use GAMG as the coarse solver, which will move the problem onto a smaller number of nodes so that you can coarsen further. Matt > Thanks, > Justin > > On Thu, Mar 30, 2017 at 2:35 PM, Barry Smith wrote: > >> >> -da_refine $DAREFINE determines how large the final problem will be. >> >> By default if you don't supply pc_mg_levels then it uses $DAREFINE + 1 >> as the number of levels of MG to use; for example -da_refine 1 would result >> in 2 levels of multigrid. 
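To put concrete numbers on the rule quoted just above (a back-of-the-envelope illustration only; the exact point counts depend on the DMDA's boundary types and refinement factor, assumed here to be the default factor of 2):

    -M 5 -N 5 -P 3      coarse grid of roughly 5 x 5 x 3 points
    -da_refine 4        four refinements, each roughly doubling every direction,
                        so the finest grid has very roughly 80 x 80 x 33 points

That refinement produces a hierarchy of five grids in total, which is why the default number of multigrid levels is $DAREFINE + 1: the finest grid is where the problem is solved and the original -M/-N/-P grid serves as the coarsest level. Requesting -pc_mg_levels larger than the number of grids that coarsening can actually build fails as soon as one direction runs out of points, while requesting fewer levels simply stops the hierarchy early and leaves a larger, more expensive coarse problem.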
>> >> >> > On Mar 30, 2017, at 2:17 PM, Justin Chang wrote: >> > >> > Hi all, >> > >> > Just a general conceptual question: say I am tinkering around with SNES >> ex48.c and am running the program with these options: >> > >> > mpirun -n $NPROCS -pc_type mg -M $XSEED -N $YSEED -P $ZSEED >> -thi_mat_type baij -da_refine $DAREFINE -pc_mg_levels $MGLEVELS >> > >> > I am not too familiar with mg, but it seems to me there is a very >> strong correlation between $MGLEVELS and $DAREFINE as well as perhaps even >> the initial coarse grid size (provided by $X/YZSEED). >> > >> > Is there a rule of thumb on how these parameters should be? I am >> guessing it probably is also hardware/architectural dependent? >> > >> > Thanks, >> > Justin >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jychang48 at gmail.com Thu Mar 30 15:38:29 2017 From: jychang48 at gmail.com (Justin Chang) Date: Thu, 30 Mar 2017 15:38:29 -0500 Subject: [petsc-users] Correlation between da_refine and pg_mg_levels In-Reply-To: References: Message-ID: Okay, got it. What are the options for setting GAMG as the coarse solver? On Thu, Mar 30, 2017 at 3:37 PM, Matthew Knepley wrote: > On Thu, Mar 30, 2017 at 3:04 PM, Justin Chang wrote: > >> Yeah based on my experiments it seems setting pc_mg_levels to $DAREFINE + >> 1 has decent performance. >> >> 1) is there ever a case where you'd want $MGLEVELS <= $DAREFINE? In some >> of the PETSc tutorial slides (e.g., http://www.mcs.anl.gov/ >> petsc/documentation/tutorials/TutorialCEMRACS2016.pdf on slide 203/227) >> they say to use $MGLEVELS = 4 and $DAREFINE = 5, but when I ran this, it >> was almost twice as slow as if $MGLEVELS >= $DAREFINE >> > > Depending on how big the initial grid is, you may want this. There is a > balance between coarse grid and fine grid work. > > >> 2) So I understand that one cannot refine further than one grid point in >> each direction, but is there any benefit to having $MGLEVELS > $DAREFINE by >> a lot? >> > > Again, it depends on the size of the initial grid. > > On really large problems, you want to use GAMG as the coarse solver, which > will move the problem onto a smaller number of nodes > so that you can coarsen further. > > Matt > > >> Thanks, >> Justin >> >> On Thu, Mar 30, 2017 at 2:35 PM, Barry Smith wrote: >> >>> >>> -da_refine $DAREFINE determines how large the final problem will be. >>> >>> By default if you don't supply pc_mg_levels then it uses $DAREFINE + 1 >>> as the number of levels of MG to use; for example -da_refine 1 would result >>> in 2 levels of multigrid. >>> >>> >>> > On Mar 30, 2017, at 2:17 PM, Justin Chang wrote: >>> > >>> > Hi all, >>> > >>> > Just a general conceptual question: say I am tinkering around with >>> SNES ex48.c and am running the program with these options: >>> > >>> > mpirun -n $NPROCS -pc_type mg -M $XSEED -N $YSEED -P $ZSEED >>> -thi_mat_type baij -da_refine $DAREFINE -pc_mg_levels $MGLEVELS >>> > >>> > I am not too familiar with mg, but it seems to me there is a very >>> strong correlation between $MGLEVELS and $DAREFINE as well as perhaps even >>> the initial coarse grid size (provided by $X/YZSEED). >>> > >>> > Is there a rule of thumb on how these parameters should be? I am >>> guessing it probably is also hardware/architectural dependent? 
>>> > >>> > Thanks, >>> > Justin >>> >>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Mar 30 15:39:15 2017 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 30 Mar 2017 15:39:15 -0500 Subject: [petsc-users] Correlation between da_refine and pg_mg_levels In-Reply-To: References: Message-ID: On Thu, Mar 30, 2017 at 3:38 PM, Justin Chang wrote: > Okay, got it. What are the options for setting GAMG as the coarse solver? > -mg_coarse_pc_type gamg I think > On Thu, Mar 30, 2017 at 3:37 PM, Matthew Knepley > wrote: > >> On Thu, Mar 30, 2017 at 3:04 PM, Justin Chang >> wrote: >> >>> Yeah based on my experiments it seems setting pc_mg_levels to $DAREFINE >>> + 1 has decent performance. >>> >>> 1) is there ever a case where you'd want $MGLEVELS <= $DAREFINE? In some >>> of the PETSc tutorial slides (e.g., http://www.mcs.anl.gov/ >>> petsc/documentation/tutorials/TutorialCEMRACS2016.pdf on slide 203/227) >>> they say to use $MGLEVELS = 4 and $DAREFINE = 5, but when I ran this, it >>> was almost twice as slow as if $MGLEVELS >= $DAREFINE >>> >> >> Depending on how big the initial grid is, you may want this. There is a >> balance between coarse grid and fine grid work. >> >> >>> 2) So I understand that one cannot refine further than one grid point in >>> each direction, but is there any benefit to having $MGLEVELS > $DAREFINE by >>> a lot? >>> >> >> Again, it depends on the size of the initial grid. >> >> On really large problems, you want to use GAMG as the coarse solver, >> which will move the problem onto a smaller number of nodes >> so that you can coarsen further. >> >> Matt >> >> >>> Thanks, >>> Justin >>> >>> On Thu, Mar 30, 2017 at 2:35 PM, Barry Smith wrote: >>> >>>> >>>> -da_refine $DAREFINE determines how large the final problem will be. >>>> >>>> By default if you don't supply pc_mg_levels then it uses $DAREFINE + >>>> 1 as the number of levels of MG to use; for example -da_refine 1 would >>>> result in 2 levels of multigrid. >>>> >>>> >>>> > On Mar 30, 2017, at 2:17 PM, Justin Chang >>>> wrote: >>>> > >>>> > Hi all, >>>> > >>>> > Just a general conceptual question: say I am tinkering around with >>>> SNES ex48.c and am running the program with these options: >>>> > >>>> > mpirun -n $NPROCS -pc_type mg -M $XSEED -N $YSEED -P $ZSEED >>>> -thi_mat_type baij -da_refine $DAREFINE -pc_mg_levels $MGLEVELS >>>> > >>>> > I am not too familiar with mg, but it seems to me there is a very >>>> strong correlation between $MGLEVELS and $DAREFINE as well as perhaps even >>>> the initial coarse grid size (provided by $X/YZSEED). >>>> > >>>> > Is there a rule of thumb on how these parameters should be? I am >>>> guessing it probably is also hardware/architectural dependent? >>>> > >>>> > Thanks, >>>> > Justin >>>> >>>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
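Putting the pieces of this exchange together, the setup being described looks roughly like the following. This is only a sketch: the executable, grid options, and matrix type are the ones used earlier in the thread, while the particular refine/levels numbers are illustrative, and whether the GAMG coarse solve pays off depends on how large the coarse problem still is.

    mpirun -n $NPROCS ./ex48 -M 5 -N 5 -P 3 -thi_mat_type baij \
        -da_refine 7 -pc_type mg -pc_mg_levels 4 \
        -mg_coarse_pc_type gamg

Here -pc_mg_levels is deliberately smaller than -da_refine + 1, so the geometric hierarchy stops while the coarse grid is still fairly large, and that remaining coarse problem is handed to GAMG (configured under the mg_coarse_ option prefix) instead of the default coarse solve.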
URL: From jychang48 at gmail.com Thu Mar 30 16:15:07 2017 From: jychang48 at gmail.com (Justin Chang) Date: Thu, 30 Mar 2017 16:15:07 -0500 Subject: [petsc-users] Correlation between da_refine and pg_mg_levels In-Reply-To: References: Message-ID: Okay I'll give it a shot. Somewhat unrelated, but I tried running this on Cori's Haswell node (loaded the module 'petsc/3.7.4-64'). But I get these errors: [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Argument out of range [0]PETSC ERROR: Partition in y direction is too fine! 0 1 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.7.4, Oct, 02, 2016 [0]PETSC ERROR: /global/u1/j/jychang/Icesheet/./ex48 on a arch-cori-opt64-INTEL-3.7.4-64 named nid00020 by jychang Thu Mar 30 14:04:35 2017 [0]PETSC ERROR: Configure options --known-sizeof-void-p=8 --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-size_t=8 --known-mpi-int64_t=1 --known-has-attribute-aligned=1 --prefix=/global/common/cori/software/petsc/3.7.4-64/hsw/intel PETSC_ARCH=arch-cori-opt64-INTEL-3.7.4-64 --COPTFLAGS="-mkl -O2 -no-ipo -g -axMIC-AVX512,CORE-AVX2,AVX" --CXXOPTFLAGS="-mkl -O2 -no-ipo -g -axMIC-AVX512,CORE-AVX2,AVX" --FOPTFLAGS="-mkl -O2 -no-ipo -g -axMIC-AVX512,CORE-AVX2,AVX" --with-hdf5-dir=/opt/cray/pe/hdf5-parallel/1.8.16/INTEL/15.0 --with-hwloc-dir=/global/common/cori/software/hwloc/1.11.4/hsw --with-scalapack-include=/opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/include --with-scalapack-lib= --LIBS="-mkl -L/global/common/cori/software/petsc/3.7.4-64/hsw/intel/lib -I/global/common/cori/software/petsc/3.7.4-64/hsw/intel/include -L/global/common/cori/software/xz/5.2.2/hsw/lib -I/global/common/cori/software/xz/5.2.2/hsw/include -L/global/common/cori/software/zlib/1.2.8/hsw/intel/lib -I/global/common/cori/software/zlib/1.2.8/hsw/intel/include -L/global/common/cori/software/libxml2/2.9.4/hsw/lib -I/global/common/cori/software/libxml2/2.9.4/hsw/include -L/global/common/cori/software/numactl/2.0.11/hsw/lib -I/global/common/cori/software/numactl/2.0.11/hsw/include -L/global/common/cori/software/hwloc/1.11.4/hsw/lib -I/global/common/cori/software/hwloc/1.11.4/hsw/include -L/global/common/cori/software/openssl/1.1.0a/hsw/lib -I/global/common/cori/software/openssl/1.1.0a/hsw/include -L/global/common/cori/software/subversion/1.9.4/hsw/lib -I/global/common/cori/software/subversion/1.9.4/hsw/include -lhwloc -lpciaccess -lxml2 -lz -llzma -Wl,--start-group /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_scalapack_lp64.a /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_core.a /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_intel_thread.a /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group -lstdc++" --download-parmetis --download-metis --with-ssl=0 --with-batch --known-mpi-shared-libraries=0 --with-clib-autodetect=0 --with-cxxlib-autodetect=0 --with-debugging=0 --with-fortranlib-autodetect=0 --with-mpiexec=srun --with-shared-libraries=0 --with-x=0 --known-mpi-int64-t=0 --known-bits-per-byte=8 --known-sdot-returns-double=0 --known-snrm2-returns-double=0 --known-level1-dcache-assoc=0 --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 --known-memcmp-ok=1 --known-mpi-c-double-complex=1 
--known-mpi-long-double=1 --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 --known-sizeof-char=1 --known-sizeof-double=8 CC=cc MPICC=cc CXX=CC MPICXX=CC FC=ftn F77=ftn F90=ftn MPIF90=ftn MPIF77=ftn CFLAGS=-axMIC-AVX512,CORE-AVX2,AVX CXXFLAGS=-axMIC-AVX512,CORE-AVX2,AVX FFLAGS=-axMIC-AVX512,CORE-AVX2,AVX CC=cc MPICC=cc CXX=CC MPICXX=CC FC=ftn F77=ftn F90=ftn MPIF90=ftn MPIF77=ftn CFLAGS=-fPIC FFLAGS=-fPIC LDFLAGS=-fPIE --download-hypre --with-64-bit-indices [0]PETSC ERROR: #1 DMSetUp_DA_3D() line 298 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/impls/da/da3.c [0]PETSC ERROR: #2 DMSetUp_DA() line 27 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/impls/da/dareg.c [0]PETSC ERROR: #3 DMSetUp() line 744 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/interface/dm.c [0]PETSC ERROR: #4 DMCoarsen_DA() line 1196 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/impls/da/da.c [0]PETSC ERROR: #5 DMCoarsen() line 2371 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/interface/dm.c [0]PETSC ERROR: #6 PCSetUp_MG() line 616 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/ksp/pc/impls/mg/mg.c [0]PETSC ERROR: #7 PCSetUp() line 968 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #8 KSPSetUp() line 390 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #9 KSPSolve() line 599 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #10 SNESSolve_NEWTONLS() line 230 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/snes/impls/ls/ls.c [0]PETSC ERROR: #11 SNESSolve() line 4005 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/snes/interface/snes.c [0]PETSC ERROR: #12 main() line 1548 in /global/homes/j/jychang/Icesheet/ex48.c [0]PETSC ERROR: PETSc Option Table entries: [0]PETSC ERROR: -da_refine 4 [0]PETSC ERROR: -ksp_rtol 1e-7 [0]PETSC ERROR: -M 5 [0]PETSC ERROR: -N 5 [0]PETSC ERROR: -P 3 [0]PETSC ERROR: -pc_mg_levels 5 [0]PETSC ERROR: -pc_type mg [0]PETSC ERROR: -thi_mat_type baij [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- Rank 0 [Thu Mar 30 14:04:35 2017] [c0-0c0s5n0] application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 srun: error: nid00020: task 0: Aborted srun: Terminating job step 4363145.1z it seems to me the PETSc from this module is not registering the '-da_refine' entry. This is strange because I have no issue with this on the latest petsc-dev version, anyone know about this error and/or why it happens? On Thu, Mar 30, 2017 at 3:39 PM, Matthew Knepley wrote: > On Thu, Mar 30, 2017 at 3:38 PM, Justin Chang wrote: > >> Okay, got it. What are the options for setting GAMG as the coarse solver? >> > > -mg_coarse_pc_type gamg I think > > >> On Thu, Mar 30, 2017 at 3:37 PM, Matthew Knepley >> wrote: >> >>> On Thu, Mar 30, 2017 at 3:04 PM, Justin Chang >>> wrote: >>> >>>> Yeah based on my experiments it seems setting pc_mg_levels to $DAREFINE >>>> + 1 has decent performance. >>>> >>>> 1) is there ever a case where you'd want $MGLEVELS <= $DAREFINE? In >>>> some of the PETSc tutorial slides (e.g., http://www.mcs.anl.gov/ >>>> petsc/documentation/tutorials/TutorialCEMRACS2016.pdf on slide >>>> 203/227) they say to use $MGLEVELS = 4 and $DAREFINE = 5, but when I ran >>>> this, it was almost twice as slow as if $MGLEVELS >= $DAREFINE >>>> >>> >>> Depending on how big the initial grid is, you may want this. 
There is a >>> balance between coarse grid and fine grid work. >>> >>> >>>> 2) So I understand that one cannot refine further than one grid point >>>> in each direction, but is there any benefit to having $MGLEVELS > $DAREFINE >>>> by a lot? >>>> >>> >>> Again, it depends on the size of the initial grid. >>> >>> On really large problems, you want to use GAMG as the coarse solver, >>> which will move the problem onto a smaller number of nodes >>> so that you can coarsen further. >>> >>> Matt >>> >>> >>>> Thanks, >>>> Justin >>>> >>>> On Thu, Mar 30, 2017 at 2:35 PM, Barry Smith >>>> wrote: >>>> >>>>> >>>>> -da_refine $DAREFINE determines how large the final problem will be. >>>>> >>>>> By default if you don't supply pc_mg_levels then it uses $DAREFINE + >>>>> 1 as the number of levels of MG to use; for example -da_refine 1 would >>>>> result in 2 levels of multigrid. >>>>> >>>>> >>>>> > On Mar 30, 2017, at 2:17 PM, Justin Chang >>>>> wrote: >>>>> > >>>>> > Hi all, >>>>> > >>>>> > Just a general conceptual question: say I am tinkering around with >>>>> SNES ex48.c and am running the program with these options: >>>>> > >>>>> > mpirun -n $NPROCS -pc_type mg -M $XSEED -N $YSEED -P $ZSEED >>>>> -thi_mat_type baij -da_refine $DAREFINE -pc_mg_levels $MGLEVELS >>>>> > >>>>> > I am not too familiar with mg, but it seems to me there is a very >>>>> strong correlation between $MGLEVELS and $DAREFINE as well as perhaps even >>>>> the initial coarse grid size (provided by $X/YZSEED). >>>>> > >>>>> > Is there a rule of thumb on how these parameters should be? I am >>>>> guessing it probably is also hardware/architectural dependent? >>>>> > >>>>> > Thanks, >>>>> > Justin >>>>> >>>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jychang48 at gmail.com Thu Mar 30 16:17:56 2017 From: jychang48 at gmail.com (Justin Chang) Date: Thu, 30 Mar 2017 16:17:56 -0500 Subject: [petsc-users] Correlation between da_refine and pg_mg_levels In-Reply-To: References: Message-ID: Also, this was the output before the error message: Level 0 domain size (m) 1e+04 x 1e+04 x 1e+03, num elements 5 x 5 x 3 (75), size (m) 2000. x 2000. x 500. Level -1 domain size (m) 1e+04 x 1e+04 x 1e+03, num elements 2 x 2 x 2 (8), size (m) 5000. x 5000. x 1000. Level -2 domain size (m) 1e+04 x 1e+04 x 1e+03, num elements 1 x 1 x 1 (1), size (m) 10000. x 10000. x inf. Which tells me '-da_refine 4' is not registering On Thu, Mar 30, 2017 at 4:15 PM, Justin Chang wrote: > Okay I'll give it a shot. > > Somewhat unrelated, but I tried running this on Cori's Haswell node > (loaded the module 'petsc/3.7.4-64'). But I get these errors: > > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Argument out of range > [0]PETSC ERROR: Partition in y direction is too fine! 0 1 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.7.4, Oct, 02, 2016 > [0]PETSC ERROR: /global/u1/j/jychang/Icesheet/./ex48 on a > arch-cori-opt64-INTEL-3.7.4-64 named nid00020 by jychang Thu Mar 30 > 14:04:35 2017 > [0]PETSC ERROR: Configure options --known-sizeof-void-p=8 > --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 > --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-size_t=8 > --known-mpi-int64_t=1 --known-has-attribute-aligned=1 > --prefix=/global/common/cori/software/petsc/3.7.4-64/hsw/intel > PETSC_ARCH=arch-cori-opt64-INTEL-3.7.4-64 --COPTFLAGS="-mkl -O2 -no-ipo > -g -axMIC-AVX512,CORE-AVX2,AVX" --CXXOPTFLAGS="-mkl -O2 -no-ipo -g > -axMIC-AVX512,CORE-AVX2,AVX" --FOPTFLAGS="-mkl -O2 -no-ipo -g > -axMIC-AVX512,CORE-AVX2,AVX" --with-hdf5-dir=/opt/cray/pe/ > hdf5-parallel/1.8.16/INTEL/15.0 --with-hwloc-dir=/global/ > common/cori/software/hwloc/1.11.4/hsw --with-scalapack-include=/opt/ > intel/compilers_and_libraries_2017.1.132/linux/mkl/include > --with-scalapack-lib= --LIBS="-mkl -L/global/common/cori/ > software/petsc/3.7.4-64/hsw/intel/lib -I/global/common/cori/ > software/petsc/3.7.4-64/hsw/intel/include -L/global/common/cori/software/xz/5.2.2/hsw/lib > -I/global/common/cori/software/xz/5.2.2/hsw/include -L/global/common/cori/ > software/zlib/1.2.8/hsw/intel/lib -I/global/common/cori/ > software/zlib/1.2.8/hsw/intel/include -L/global/common/cori/software/libxml2/2.9.4/hsw/lib > -I/global/common/cori/software/libxml2/2.9.4/hsw/include > -L/global/common/cori/software/numactl/2.0.11/hsw/lib > -I/global/common/cori/software/numactl/2.0.11/hsw/include > -L/global/common/cori/software/hwloc/1.11.4/hsw/lib -I/global/common/cori/ > software/hwloc/1.11.4/hsw/include -L/global/common/cori/ > software/openssl/1.1.0a/hsw/lib -I/global/common/cori/ > software/openssl/1.1.0a/hsw/include -L/global/common/cori/ > software/subversion/1.9.4/hsw/lib -I/global/common/cori/ > software/subversion/1.9.4/hsw/include -lhwloc -lpciaccess -lxml2 -lz > -llzma -Wl,--start-group /opt/intel/compilers_and_ > libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_scalapack_lp64.a > /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_core.a > /opt/intel/compilers_and_libraries_2017.1.132/linux/ > mkl/lib/intel64/libmkl_intel_thread.a /opt/intel/compilers_and_ > libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.a > -Wl,--end-group -lstdc++" --download-parmetis --download-metis --with-ssl=0 > --with-batch --known-mpi-shared-libraries=0 --with-clib-autodetect=0 > --with-cxxlib-autodetect=0 --with-debugging=0 > --with-fortranlib-autodetect=0 --with-mpiexec=srun > --with-shared-libraries=0 --with-x=0 --known-mpi-int64-t=0 > --known-bits-per-byte=8 --known-sdot-returns-double=0 > --known-snrm2-returns-double=0 --known-level1-dcache-assoc=0 > --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 > --known-memcmp-ok=1 --known-mpi-c-double-complex=1 > --known-mpi-long-double=1 --known-sizeof-MPI_Comm=4 > --known-sizeof-MPI_Fint=4 --known-sizeof-char=1 --known-sizeof-double=8 > CC=cc MPICC=cc CXX=CC MPICXX=CC FC=ftn F77=ftn F90=ftn MPIF90=ftn > MPIF77=ftn CFLAGS=-axMIC-AVX512,CORE-AVX2,AVX CXXFLAGS=-axMIC-AVX512,CORE-AVX2,AVX > FFLAGS=-axMIC-AVX512,CORE-AVX2,AVX CC=cc MPICC=cc CXX=CC MPICXX=CC FC=ftn > F77=ftn F90=ftn MPIF90=ftn MPIF77=ftn CFLAGS=-fPIC FFLAGS=-fPIC > LDFLAGS=-fPIE --download-hypre --with-64-bit-indices > [0]PETSC ERROR: #1 DMSetUp_DA_3D() line 298 in > /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/impls/da/da3.c > 
[0]PETSC ERROR: #2 DMSetUp_DA() line 27 in /global/cscratch1/sd/swowner/ > sleak/petsc-3.7.4/src/dm/impls/da/dareg.c > [0]PETSC ERROR: #3 DMSetUp() line 744 in /global/cscratch1/sd/swowner/ > sleak/petsc-3.7.4/src/dm/interface/dm.c > [0]PETSC ERROR: #4 DMCoarsen_DA() line 1196 in > /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/impls/da/da.c > [0]PETSC ERROR: #5 DMCoarsen() line 2371 in /global/cscratch1/sd/swowner/ > sleak/petsc-3.7.4/src/dm/interface/dm.c > [0]PETSC ERROR: #6 PCSetUp_MG() line 616 in /global/cscratch1/sd/swowner/ > sleak/petsc-3.7.4/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: #7 PCSetUp() line 968 in /global/cscratch1/sd/swowner/ > sleak/petsc-3.7.4/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #8 KSPSetUp() line 390 in /global/cscratch1/sd/swowner/ > sleak/petsc-3.7.4/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #9 KSPSolve() line 599 in /global/cscratch1/sd/swowner/ > sleak/petsc-3.7.4/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #10 SNESSolve_NEWTONLS() line 230 in > /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/snes/impls/ls/ls.c > [0]PETSC ERROR: #11 SNESSolve() line 4005 in /global/cscratch1/sd/swowner/ > sleak/petsc-3.7.4/src/snes/interface/snes.c > [0]PETSC ERROR: #12 main() line 1548 in /global/homes/j/jychang/ > Icesheet/ex48.c > [0]PETSC ERROR: PETSc Option Table entries: > [0]PETSC ERROR: -da_refine 4 > [0]PETSC ERROR: -ksp_rtol 1e-7 > [0]PETSC ERROR: -M 5 > [0]PETSC ERROR: -N 5 > [0]PETSC ERROR: -P 3 > [0]PETSC ERROR: -pc_mg_levels 5 > [0]PETSC ERROR: -pc_type mg > [0]PETSC ERROR: -thi_mat_type baij > [0]PETSC ERROR: ----------------End of Error Message -------send entire > error message to petsc-maint at mcs.anl.gov---------- > Rank 0 [Thu Mar 30 14:04:35 2017] [c0-0c0s5n0] application called > MPI_Abort(MPI_COMM_WORLD, 63) - process 0 > srun: error: nid00020: task 0: Aborted > srun: Terminating job step 4363145.1z > > it seems to me the PETSc from this module is not registering the > '-da_refine' entry. This is strange because I have no issue with this on > the latest petsc-dev version, anyone know about this error and/or why it > happens? > > On Thu, Mar 30, 2017 at 3:39 PM, Matthew Knepley > wrote: > >> On Thu, Mar 30, 2017 at 3:38 PM, Justin Chang >> wrote: >> >>> Okay, got it. What are the options for setting GAMG as the coarse solver? >>> >> >> -mg_coarse_pc_type gamg I think >> >> >>> On Thu, Mar 30, 2017 at 3:37 PM, Matthew Knepley >>> wrote: >>> >>>> On Thu, Mar 30, 2017 at 3:04 PM, Justin Chang >>>> wrote: >>>> >>>>> Yeah based on my experiments it seems setting pc_mg_levels to >>>>> $DAREFINE + 1 has decent performance. >>>>> >>>>> 1) is there ever a case where you'd want $MGLEVELS <= $DAREFINE? In >>>>> some of the PETSc tutorial slides (e.g., http://www.mcs.anl.gov/ >>>>> petsc/documentation/tutorials/TutorialCEMRACS2016.pdf on slide >>>>> 203/227) they say to use $MGLEVELS = 4 and $DAREFINE = 5, but when I ran >>>>> this, it was almost twice as slow as if $MGLEVELS >= $DAREFINE >>>>> >>>> >>>> Depending on how big the initial grid is, you may want this. There is a >>>> balance between coarse grid and fine grid work. >>>> >>>> >>>>> 2) So I understand that one cannot refine further than one grid point >>>>> in each direction, but is there any benefit to having $MGLEVELS > $DAREFINE >>>>> by a lot? >>>>> >>>> >>>> Again, it depends on the size of the initial grid. 
>>>> >>>> On really large problems, you want to use GAMG as the coarse solver, >>>> which will move the problem onto a smaller number of nodes >>>> so that you can coarsen further. >>>> >>>> Matt >>>> >>>> >>>>> Thanks, >>>>> Justin >>>>> >>>>> On Thu, Mar 30, 2017 at 2:35 PM, Barry Smith >>>>> wrote: >>>>> >>>>>> >>>>>> -da_refine $DAREFINE determines how large the final problem will >>>>>> be. >>>>>> >>>>>> By default if you don't supply pc_mg_levels then it uses $DAREFINE >>>>>> + 1 as the number of levels of MG to use; for example -da_refine 1 would >>>>>> result in 2 levels of multigrid. >>>>>> >>>>>> >>>>>> > On Mar 30, 2017, at 2:17 PM, Justin Chang >>>>>> wrote: >>>>>> > >>>>>> > Hi all, >>>>>> > >>>>>> > Just a general conceptual question: say I am tinkering around with >>>>>> SNES ex48.c and am running the program with these options: >>>>>> > >>>>>> > mpirun -n $NPROCS -pc_type mg -M $XSEED -N $YSEED -P $ZSEED >>>>>> -thi_mat_type baij -da_refine $DAREFINE -pc_mg_levels $MGLEVELS >>>>>> > >>>>>> > I am not too familiar with mg, but it seems to me there is a very >>>>>> strong correlation between $MGLEVELS and $DAREFINE as well as perhaps even >>>>>> the initial coarse grid size (provided by $X/YZSEED). >>>>>> > >>>>>> > Is there a rule of thumb on how these parameters should be? I am >>>>>> guessing it probably is also hardware/architectural dependent? >>>>>> > >>>>>> > Thanks, >>>>>> > Justin >>>>>> >>>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Mar 30 16:21:20 2017 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 30 Mar 2017 16:21:20 -0500 Subject: [petsc-users] Correlation between da_refine and pg_mg_levels In-Reply-To: References: Message-ID: I think its now -dm_refine Matt On Thu, Mar 30, 2017 at 4:17 PM, Justin Chang wrote: > Also, this was the output before the error message: > > Level 0 domain size (m) 1e+04 x 1e+04 x 1e+03, num elements 5 x 5 > x 3 (75), size (m) 2000. x 2000. x 500. > Level -1 domain size (m) 1e+04 x 1e+04 x 1e+03, num elements 2 x > 2 x 2 (8), size (m) 5000. x 5000. x 1000. > Level -2 domain size (m) 1e+04 x 1e+04 x 1e+03, num elements 1 x > 1 x 1 (1), size (m) 10000. x 10000. x inf. > > Which tells me '-da_refine 4' is not registering > > On Thu, Mar 30, 2017 at 4:15 PM, Justin Chang wrote: > >> Okay I'll give it a shot. >> >> Somewhat unrelated, but I tried running this on Cori's Haswell node >> (loaded the module 'petsc/3.7.4-64'). But I get these errors: >> >> [0]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> [0]PETSC ERROR: Argument out of range >> [0]PETSC ERROR: Partition in y direction is too fine! 0 1 >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >> for trouble shooting. 
>> [0]PETSC ERROR: Petsc Release Version 3.7.4, Oct, 02, 2016 >> [0]PETSC ERROR: /global/u1/j/jychang/Icesheet/./ex48 on a >> arch-cori-opt64-INTEL-3.7.4-64 named nid00020 by jychang Thu Mar 30 >> 14:04:35 2017 >> [0]PETSC ERROR: Configure options --known-sizeof-void-p=8 >> --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 >> --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-size_t=8 >> --known-mpi-int64_t=1 --known-has-attribute-aligned=1 >> --prefix=/global/common/cori/software/petsc/3.7.4-64/hsw/intel >> PETSC_ARCH=arch-cori-opt64-INTEL-3.7.4-64 --COPTFLAGS="-mkl -O2 -no-ipo >> -g -axMIC-AVX512,CORE-AVX2,AVX" --CXXOPTFLAGS="-mkl -O2 -no-ipo -g >> -axMIC-AVX512,CORE-AVX2,AVX" --FOPTFLAGS="-mkl -O2 -no-ipo -g >> -axMIC-AVX512,CORE-AVX2,AVX" --with-hdf5-dir=/opt/cray/pe/hdf5-parallel/1.8.16/INTEL/15.0 >> --with-hwloc-dir=/global/common/cori/software/hwloc/1.11.4/hsw >> --with-scalapack-include=/opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/include >> --with-scalapack-lib= --LIBS="-mkl -L/global/common/cori/software/petsc/3.7.4-64/hsw/intel/lib >> -I/global/common/cori/software/petsc/3.7.4-64/hsw/intel/include >> -L/global/common/cori/software/xz/5.2.2/hsw/lib >> -I/global/common/cori/software/xz/5.2.2/hsw/include >> -L/global/common/cori/software/zlib/1.2.8/hsw/intel/lib >> -I/global/common/cori/software/zlib/1.2.8/hsw/intel/include >> -L/global/common/cori/software/libxml2/2.9.4/hsw/lib >> -I/global/common/cori/software/libxml2/2.9.4/hsw/include >> -L/global/common/cori/software/numactl/2.0.11/hsw/lib >> -I/global/common/cori/software/numactl/2.0.11/hsw/include >> -L/global/common/cori/software/hwloc/1.11.4/hsw/lib >> -I/global/common/cori/software/hwloc/1.11.4/hsw/include >> -L/global/common/cori/software/openssl/1.1.0a/hsw/lib >> -I/global/common/cori/software/openssl/1.1.0a/hsw/include >> -L/global/common/cori/software/subversion/1.9.4/hsw/lib >> -I/global/common/cori/software/subversion/1.9.4/hsw/include -lhwloc >> -lpciaccess -lxml2 -lz -llzma -Wl,--start-group >> /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/ >> intel64/libmkl_scalapack_lp64.a /opt/intel/compilers_and_libra >> ries_2017.1.132/linux/mkl/lib/intel64/libmkl_core.a >> /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_intel_thread.a >> /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/ >> intel64/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group -lstdc++" >> --download-parmetis --download-metis --with-ssl=0 --with-batch >> --known-mpi-shared-libraries=0 --with-clib-autodetect=0 >> --with-cxxlib-autodetect=0 --with-debugging=0 >> --with-fortranlib-autodetect=0 --with-mpiexec=srun >> --with-shared-libraries=0 --with-x=0 --known-mpi-int64-t=0 >> --known-bits-per-byte=8 --known-sdot-returns-double=0 >> --known-snrm2-returns-double=0 --known-level1-dcache-assoc=0 >> --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 >> --known-memcmp-ok=1 --known-mpi-c-double-complex=1 >> --known-mpi-long-double=1 --known-sizeof-MPI_Comm=4 >> --known-sizeof-MPI_Fint=4 --known-sizeof-char=1 --known-sizeof-double=8 >> CC=cc MPICC=cc CXX=CC MPICXX=CC FC=ftn F77=ftn F90=ftn MPIF90=ftn >> MPIF77=ftn CFLAGS=-axMIC-AVX512,CORE-AVX2,AVX >> CXXFLAGS=-axMIC-AVX512,CORE-AVX2,AVX FFLAGS=-axMIC-AVX512,CORE-AVX2,AVX >> CC=cc MPICC=cc CXX=CC MPICXX=CC FC=ftn F77=ftn F90=ftn MPIF90=ftn >> MPIF77=ftn CFLAGS=-fPIC FFLAGS=-fPIC LDFLAGS=-fPIE --download-hypre >> --with-64-bit-indices >> [0]PETSC ERROR: #1 DMSetUp_DA_3D() line 298 in >> 
/global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/impls/da/da3.c >> [0]PETSC ERROR: #2 DMSetUp_DA() line 27 in /global/cscratch1/sd/swowner/s >> leak/petsc-3.7.4/src/dm/impls/da/dareg.c >> [0]PETSC ERROR: #3 DMSetUp() line 744 in /global/cscratch1/sd/swowner/s >> leak/petsc-3.7.4/src/dm/interface/dm.c >> [0]PETSC ERROR: #4 DMCoarsen_DA() line 1196 in >> /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/impls/da/da.c >> [0]PETSC ERROR: #5 DMCoarsen() line 2371 in /global/cscratch1/sd/swowner/s >> leak/petsc-3.7.4/src/dm/interface/dm.c >> [0]PETSC ERROR: #6 PCSetUp_MG() line 616 in /global/cscratch1/sd/swowner/s >> leak/petsc-3.7.4/src/ksp/pc/impls/mg/mg.c >> [0]PETSC ERROR: #7 PCSetUp() line 968 in /global/cscratch1/sd/swowner/s >> leak/petsc-3.7.4/src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: #8 KSPSetUp() line 390 in /global/cscratch1/sd/swowner/s >> leak/petsc-3.7.4/src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: #9 KSPSolve() line 599 in /global/cscratch1/sd/swowner/s >> leak/petsc-3.7.4/src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: #10 SNESSolve_NEWTONLS() line 230 in >> /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/snes/impls/ls/ls.c >> [0]PETSC ERROR: #11 SNESSolve() line 4005 in >> /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/snes/interface/snes.c >> [0]PETSC ERROR: #12 main() line 1548 in /global/homes/j/jychang/Iceshe >> et/ex48.c >> [0]PETSC ERROR: PETSc Option Table entries: >> [0]PETSC ERROR: -da_refine 4 >> [0]PETSC ERROR: -ksp_rtol 1e-7 >> [0]PETSC ERROR: -M 5 >> [0]PETSC ERROR: -N 5 >> [0]PETSC ERROR: -P 3 >> [0]PETSC ERROR: -pc_mg_levels 5 >> [0]PETSC ERROR: -pc_type mg >> [0]PETSC ERROR: -thi_mat_type baij >> [0]PETSC ERROR: ----------------End of Error Message -------send entire >> error message to petsc-maint at mcs.anl.gov---------- >> Rank 0 [Thu Mar 30 14:04:35 2017] [c0-0c0s5n0] application called >> MPI_Abort(MPI_COMM_WORLD, 63) - process 0 >> srun: error: nid00020: task 0: Aborted >> srun: Terminating job step 4363145.1z >> >> it seems to me the PETSc from this module is not registering the >> '-da_refine' entry. This is strange because I have no issue with this on >> the latest petsc-dev version, anyone know about this error and/or why it >> happens? >> >> On Thu, Mar 30, 2017 at 3:39 PM, Matthew Knepley >> wrote: >> >>> On Thu, Mar 30, 2017 at 3:38 PM, Justin Chang >>> wrote: >>> >>>> Okay, got it. What are the options for setting GAMG as the coarse >>>> solver? >>>> >>> >>> -mg_coarse_pc_type gamg I think >>> >>> >>>> On Thu, Mar 30, 2017 at 3:37 PM, Matthew Knepley >>>> wrote: >>>> >>>>> On Thu, Mar 30, 2017 at 3:04 PM, Justin Chang >>>>> wrote: >>>>> >>>>>> Yeah based on my experiments it seems setting pc_mg_levels to >>>>>> $DAREFINE + 1 has decent performance. >>>>>> >>>>>> 1) is there ever a case where you'd want $MGLEVELS <= $DAREFINE? In >>>>>> some of the PETSc tutorial slides (e.g., http://www.mcs.anl.gov/ >>>>>> petsc/documentation/tutorials/TutorialCEMRACS2016.pdf on slide >>>>>> 203/227) they say to use $MGLEVELS = 4 and $DAREFINE = 5, but when I ran >>>>>> this, it was almost twice as slow as if $MGLEVELS >= $DAREFINE >>>>>> >>>>> >>>>> Depending on how big the initial grid is, you may want this. There is >>>>> a balance between coarse grid and fine grid work. >>>>> >>>>> >>>>>> 2) So I understand that one cannot refine further than one grid point >>>>>> in each direction, but is there any benefit to having $MGLEVELS > $DAREFINE >>>>>> by a lot? >>>>>> >>>>> >>>>> Again, it depends on the size of the initial grid. 
>>>>> >>>>> On really large problems, you want to use GAMG as the coarse solver, >>>>> which will move the problem onto a smaller number of nodes >>>>> so that you can coarsen further. >>>>> >>>>> Matt >>>>> >>>>> >>>>>> Thanks, >>>>>> Justin >>>>>> >>>>>> On Thu, Mar 30, 2017 at 2:35 PM, Barry Smith >>>>>> wrote: >>>>>> >>>>>>> >>>>>>> -da_refine $DAREFINE determines how large the final problem will >>>>>>> be. >>>>>>> >>>>>>> By default if you don't supply pc_mg_levels then it uses $DAREFINE >>>>>>> + 1 as the number of levels of MG to use; for example -da_refine 1 would >>>>>>> result in 2 levels of multigrid. >>>>>>> >>>>>>> >>>>>>> > On Mar 30, 2017, at 2:17 PM, Justin Chang >>>>>>> wrote: >>>>>>> > >>>>>>> > Hi all, >>>>>>> > >>>>>>> > Just a general conceptual question: say I am tinkering around with >>>>>>> SNES ex48.c and am running the program with these options: >>>>>>> > >>>>>>> > mpirun -n $NPROCS -pc_type mg -M $XSEED -N $YSEED -P $ZSEED >>>>>>> -thi_mat_type baij -da_refine $DAREFINE -pc_mg_levels $MGLEVELS >>>>>>> > >>>>>>> > I am not too familiar with mg, but it seems to me there is a very >>>>>>> strong correlation between $MGLEVELS and $DAREFINE as well as perhaps even >>>>>>> the initial coarse grid size (provided by $X/YZSEED). >>>>>>> > >>>>>>> > Is there a rule of thumb on how these parameters should be? I am >>>>>>> guessing it probably is also hardware/architectural dependent? >>>>>>> > >>>>>>> > Thanks, >>>>>>> > Justin >>>>>>> >>>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> What most experimenters take for granted before they begin their >>>>> experiments is infinitely more interesting than any results to which their >>>>> experiments lead. >>>>> -- Norbert Wiener >>>>> >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jychang48 at gmail.com Thu Mar 30 16:27:34 2017 From: jychang48 at gmail.com (Justin Chang) Date: Thu, 30 Mar 2017 16:27:34 -0500 Subject: [petsc-users] Correlation between da_refine and pg_mg_levels In-Reply-To: References: Message-ID: -dm_refine didn't work either On Thu, Mar 30, 2017 at 4:21 PM, Matthew Knepley wrote: > I think its now -dm_refine > > Matt > > On Thu, Mar 30, 2017 at 4:17 PM, Justin Chang wrote: > >> Also, this was the output before the error message: >> >> Level 0 domain size (m) 1e+04 x 1e+04 x 1e+03, num elements 5 x >> 5 x 3 (75), size (m) 2000. x 2000. x 500. >> Level -1 domain size (m) 1e+04 x 1e+04 x 1e+03, num elements 2 x >> 2 x 2 (8), size (m) 5000. x 5000. x 1000. >> Level -2 domain size (m) 1e+04 x 1e+04 x 1e+03, num elements 1 x >> 1 x 1 (1), size (m) 10000. x 10000. x inf. >> >> Which tells me '-da_refine 4' is not registering >> >> On Thu, Mar 30, 2017 at 4:15 PM, Justin Chang >> wrote: >> >>> Okay I'll give it a shot. >>> >>> Somewhat unrelated, but I tried running this on Cori's Haswell node >>> (loaded the module 'petsc/3.7.4-64'). But I get these errors: >>> >>> [0]PETSC ERROR: --------------------- Error Message >>> -------------------------------------------------------------- >>> [0]PETSC ERROR: Argument out of range >>> [0]PETSC ERROR: Partition in y direction is too fine! 
0 1 >>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html >>> for trouble shooting. >>> [0]PETSC ERROR: Petsc Release Version 3.7.4, Oct, 02, 2016 >>> [0]PETSC ERROR: /global/u1/j/jychang/Icesheet/./ex48 on a >>> arch-cori-opt64-INTEL-3.7.4-64 named nid00020 by jychang Thu Mar 30 >>> 14:04:35 2017 >>> [0]PETSC ERROR: Configure options --known-sizeof-void-p=8 >>> --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 >>> --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-size_t=8 >>> --known-mpi-int64_t=1 --known-has-attribute-aligned=1 >>> --prefix=/global/common/cori/software/petsc/3.7.4-64/hsw/intel >>> PETSC_ARCH=arch-cori-opt64-INTEL-3.7.4-64 --COPTFLAGS="-mkl -O2 -no-ipo >>> -g -axMIC-AVX512,CORE-AVX2,AVX" --CXXOPTFLAGS="-mkl -O2 -no-ipo -g >>> -axMIC-AVX512,CORE-AVX2,AVX" --FOPTFLAGS="-mkl -O2 -no-ipo -g >>> -axMIC-AVX512,CORE-AVX2,AVX" --with-hdf5-dir=/opt/cray/pe/hdf5-parallel/1.8.16/INTEL/15.0 >>> --with-hwloc-dir=/global/common/cori/software/hwloc/1.11.4/hsw >>> --with-scalapack-include=/opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/include >>> --with-scalapack-lib= --LIBS="-mkl -L/global/common/cori/software/petsc/3.7.4-64/hsw/intel/lib >>> -I/global/common/cori/software/petsc/3.7.4-64/hsw/intel/include >>> -L/global/common/cori/software/xz/5.2.2/hsw/lib >>> -I/global/common/cori/software/xz/5.2.2/hsw/include >>> -L/global/common/cori/software/zlib/1.2.8/hsw/intel/lib >>> -I/global/common/cori/software/zlib/1.2.8/hsw/intel/include >>> -L/global/common/cori/software/libxml2/2.9.4/hsw/lib >>> -I/global/common/cori/software/libxml2/2.9.4/hsw/include >>> -L/global/common/cori/software/numactl/2.0.11/hsw/lib >>> -I/global/common/cori/software/numactl/2.0.11/hsw/include >>> -L/global/common/cori/software/hwloc/1.11.4/hsw/lib >>> -I/global/common/cori/software/hwloc/1.11.4/hsw/include >>> -L/global/common/cori/software/openssl/1.1.0a/hsw/lib >>> -I/global/common/cori/software/openssl/1.1.0a/hsw/include >>> -L/global/common/cori/software/subversion/1.9.4/hsw/lib >>> -I/global/common/cori/software/subversion/1.9.4/hsw/include -lhwloc >>> -lpciaccess -lxml2 -lz -llzma -Wl,--start-group >>> /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/ >>> intel64/libmkl_scalapack_lp64.a /opt/intel/compilers_and_libra >>> ries_2017.1.132/linux/mkl/lib/intel64/libmkl_core.a >>> /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_intel_thread.a >>> /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/ >>> intel64/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group -lstdc++" >>> --download-parmetis --download-metis --with-ssl=0 --with-batch >>> --known-mpi-shared-libraries=0 --with-clib-autodetect=0 >>> --with-cxxlib-autodetect=0 --with-debugging=0 >>> --with-fortranlib-autodetect=0 --with-mpiexec=srun >>> --with-shared-libraries=0 --with-x=0 --known-mpi-int64-t=0 >>> --known-bits-per-byte=8 --known-sdot-returns-double=0 >>> --known-snrm2-returns-double=0 --known-level1-dcache-assoc=0 >>> --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 >>> --known-memcmp-ok=1 --known-mpi-c-double-complex=1 >>> --known-mpi-long-double=1 --known-sizeof-MPI_Comm=4 >>> --known-sizeof-MPI_Fint=4 --known-sizeof-char=1 --known-sizeof-double=8 >>> CC=cc MPICC=cc CXX=CC MPICXX=CC FC=ftn F77=ftn F90=ftn MPIF90=ftn >>> MPIF77=ftn CFLAGS=-axMIC-AVX512,CORE-AVX2,AVX >>> CXXFLAGS=-axMIC-AVX512,CORE-AVX2,AVX FFLAGS=-axMIC-AVX512,CORE-AVX2,AVX >>> CC=cc MPICC=cc CXX=CC MPICXX=CC FC=ftn F77=ftn F90=ftn MPIF90=ftn >>> MPIF77=ftn 
CFLAGS=-fPIC FFLAGS=-fPIC LDFLAGS=-fPIE --download-hypre >>> --with-64-bit-indices >>> [0]PETSC ERROR: #1 DMSetUp_DA_3D() line 298 in >>> /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/impls/da/da3.c >>> [0]PETSC ERROR: #2 DMSetUp_DA() line 27 in /global/cscratch1/sd/swowner/s >>> leak/petsc-3.7.4/src/dm/impls/da/dareg.c >>> [0]PETSC ERROR: #3 DMSetUp() line 744 in /global/cscratch1/sd/swowner/s >>> leak/petsc-3.7.4/src/dm/interface/dm.c >>> [0]PETSC ERROR: #4 DMCoarsen_DA() line 1196 in >>> /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/impls/da/da.c >>> [0]PETSC ERROR: #5 DMCoarsen() line 2371 in >>> /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/interface/dm.c >>> [0]PETSC ERROR: #6 PCSetUp_MG() line 616 in >>> /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/ksp/pc/impls/mg/mg.c >>> [0]PETSC ERROR: #7 PCSetUp() line 968 in /global/cscratch1/sd/swowner/s >>> leak/petsc-3.7.4/src/ksp/pc/interface/precon.c >>> [0]PETSC ERROR: #8 KSPSetUp() line 390 in /global/cscratch1/sd/swowner/s >>> leak/petsc-3.7.4/src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: #9 KSPSolve() line 599 in /global/cscratch1/sd/swowner/s >>> leak/petsc-3.7.4/src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: #10 SNESSolve_NEWTONLS() line 230 in >>> /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/snes/impls/ls/ls.c >>> [0]PETSC ERROR: #11 SNESSolve() line 4005 in >>> /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/snes/interface/snes.c >>> [0]PETSC ERROR: #12 main() line 1548 in /global/homes/j/jychang/Iceshe >>> et/ex48.c >>> [0]PETSC ERROR: PETSc Option Table entries: >>> [0]PETSC ERROR: -da_refine 4 >>> [0]PETSC ERROR: -ksp_rtol 1e-7 >>> [0]PETSC ERROR: -M 5 >>> [0]PETSC ERROR: -N 5 >>> [0]PETSC ERROR: -P 3 >>> [0]PETSC ERROR: -pc_mg_levels 5 >>> [0]PETSC ERROR: -pc_type mg >>> [0]PETSC ERROR: -thi_mat_type baij >>> [0]PETSC ERROR: ----------------End of Error Message -------send entire >>> error message to petsc-maint at mcs.anl.gov---------- >>> Rank 0 [Thu Mar 30 14:04:35 2017] [c0-0c0s5n0] application called >>> MPI_Abort(MPI_COMM_WORLD, 63) - process 0 >>> srun: error: nid00020: task 0: Aborted >>> srun: Terminating job step 4363145.1z >>> >>> it seems to me the PETSc from this module is not registering the >>> '-da_refine' entry. This is strange because I have no issue with this on >>> the latest petsc-dev version, anyone know about this error and/or why it >>> happens? >>> >>> On Thu, Mar 30, 2017 at 3:39 PM, Matthew Knepley >>> wrote: >>> >>>> On Thu, Mar 30, 2017 at 3:38 PM, Justin Chang >>>> wrote: >>>> >>>>> Okay, got it. What are the options for setting GAMG as the coarse >>>>> solver? >>>>> >>>> >>>> -mg_coarse_pc_type gamg I think >>>> >>>> >>>>> On Thu, Mar 30, 2017 at 3:37 PM, Matthew Knepley >>>>> wrote: >>>>> >>>>>> On Thu, Mar 30, 2017 at 3:04 PM, Justin Chang >>>>>> wrote: >>>>>> >>>>>>> Yeah based on my experiments it seems setting pc_mg_levels to >>>>>>> $DAREFINE + 1 has decent performance. >>>>>>> >>>>>>> 1) is there ever a case where you'd want $MGLEVELS <= $DAREFINE? In >>>>>>> some of the PETSc tutorial slides (e.g., http://www.mcs.anl.gov/ >>>>>>> petsc/documentation/tutorials/TutorialCEMRACS2016.pdf on slide >>>>>>> 203/227) they say to use $MGLEVELS = 4 and $DAREFINE = 5, but when I ran >>>>>>> this, it was almost twice as slow as if $MGLEVELS >= $DAREFINE >>>>>>> >>>>>> >>>>>> Depending on how big the initial grid is, you may want this. There is >>>>>> a balance between coarse grid and fine grid work. 
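To make the correlation discussed above concrete, the options from this thread combine roughly as follows for SNES ex48. This is a hedged sketch: the process count and seed sizes are illustrative, and -mg_coarse_pc_type gamg is the spelling Matt suggests above rather than something verified here.

   # coarse DMDA of 5 x 5 x 3, refined 4 times; with no -pc_mg_levels given,
   # PETSc defaults to da_refine + 1 = 5 levels of multigrid
   mpirun -n 8 ./ex48 -M 5 -N 5 -P 3 -thi_mat_type baij -pc_type mg -da_refine 4

   # same refinement, but keep only 3 geometric levels and hand the coarser
   # (hence larger) coarse problem to GAMG, as suggested for very large runs
   mpirun -n 8 ./ex48 -M 5 -N 5 -P 3 -thi_mat_type baij -pc_type mg -da_refine 4 \
          -pc_mg_levels 3 -mg_coarse_pc_type gamg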
>>>>>> >>>>>> >>>>>>> 2) So I understand that one cannot refine further than one grid >>>>>>> point in each direction, but is there any benefit to having $MGLEVELS > >>>>>>> $DAREFINE by a lot? >>>>>>> >>>>>> >>>>>> Again, it depends on the size of the initial grid. >>>>>> >>>>>> On really large problems, you want to use GAMG as the coarse solver, >>>>>> which will move the problem onto a smaller number of nodes >>>>>> so that you can coarsen further. >>>>>> >>>>>> Matt >>>>>> >>>>>> >>>>>>> Thanks, >>>>>>> Justin >>>>>>> >>>>>>> On Thu, Mar 30, 2017 at 2:35 PM, Barry Smith >>>>>>> wrote: >>>>>>> >>>>>>>> >>>>>>>> -da_refine $DAREFINE determines how large the final problem will >>>>>>>> be. >>>>>>>> >>>>>>>> By default if you don't supply pc_mg_levels then it uses >>>>>>>> $DAREFINE + 1 as the number of levels of MG to use; for example -da_refine >>>>>>>> 1 would result in 2 levels of multigrid. >>>>>>>> >>>>>>>> >>>>>>>> > On Mar 30, 2017, at 2:17 PM, Justin Chang >>>>>>>> wrote: >>>>>>>> > >>>>>>>> > Hi all, >>>>>>>> > >>>>>>>> > Just a general conceptual question: say I am tinkering around >>>>>>>> with SNES ex48.c and am running the program with these options: >>>>>>>> > >>>>>>>> > mpirun -n $NPROCS -pc_type mg -M $XSEED -N $YSEED -P $ZSEED >>>>>>>> -thi_mat_type baij -da_refine $DAREFINE -pc_mg_levels $MGLEVELS >>>>>>>> > >>>>>>>> > I am not too familiar with mg, but it seems to me there is a very >>>>>>>> strong correlation between $MGLEVELS and $DAREFINE as well as perhaps even >>>>>>>> the initial coarse grid size (provided by $X/YZSEED). >>>>>>>> > >>>>>>>> > Is there a rule of thumb on how these parameters should be? I am >>>>>>>> guessing it probably is also hardware/architectural dependent? >>>>>>>> > >>>>>>>> > Thanks, >>>>>>>> > Justin >>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> What most experimenters take for granted before they begin their >>>>>> experiments is infinitely more interesting than any results to which their >>>>>> experiments lead. >>>>>> -- Norbert Wiener >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> What most experimenters take for granted before they begin their >>>> experiments is infinitely more interesting than any results to which their >>>> experiments lead. >>>> -- Norbert Wiener >>>> >>> >>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Mar 30 16:38:23 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 30 Mar 2017 16:38:23 -0500 Subject: [petsc-users] understanding snes_view output In-Reply-To: References: Message-ID: > On Mar 30, 2017, at 3:02 PM, Gideon Simpson wrote: > > When running something with -snes_monitor and -snes_view, I see two sets of numbers that I'm trying to understand (see below). > > The first is the sequence X SNES Function norm, with X going from 0 to 3. I had interpreted this as saying that it takes 3 steps of Newton, though perhaps this is not the case. This is correct. > > The next is "total number of linear solves=4" and "total number of function evaluations=31". How do these numbers relegate to the SNES Function norm statements? Also, I was surprised by the number of function evaluations given that I specify a SNESSetJacobian in the problem. 
Each Newton step takes exactly one linear solve but one or more __linear solver iterations__ Each Newton step requires at a minimum 1 function evaluation. The line search may take any number of additional function evaluations (different types of line search will take more or less function evaluations). You can run with -ksp_monitor -snes_linesearch_monitor to get more details about linear solver iterations and the line search steps The number of function evaluations is "high" for thee Newton steps. > > 0 SNES Function norm 7.630295941712e-03 > 1 SNES Function norm 3.340185037212e-06 > 2 SNES Function norm 1.310176068229e-13 > 3 SNES Function norm 1.464821375527e-14 > SNES Object: 4 MPI processes > type: newtonls > maximum iterations=50, maximum function evaluations=10000 > tolerances: relative=1e-15, absolute=1e-50, solution=1e-15 > total number of linear solver iterations=4 > total number of function evaluations=31 > norm schedule ALWAYS > SNESLineSearch Object: 4 MPI processes > type: bt > interpolation: cubic > alpha=1.000000e-04 > maxstep=1.000000e+08, minlambda=1.000000e-12 > tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 > maximum iterations=40 > KSP Object: 4 MPI processes > type: gmres > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement > GMRES: happy breakdown tolerance 1e-30 > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using PRECONDITIONED norm type for convergence test > PC Object: 4 MPI processes > type: bjacobi > block Jacobi: number of blocks = 4 > Local solve is same for all blocks, in the following KSP and PC objects: > KSP Object: (sub_) 1 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > left preconditioning > using NONE norm type for convergence test > PC Object: (sub_) 1 MPI processes > type: ilu > ILU: out-of-place factorization > 0 levels of fill > tolerance for zero pivot 2.22045e-14 > matrix ordering: natural > factor fill ratio given 1, needed 1 > Factored matrix follows: > Mat Object: 1 MPI processes > type: seqaij > rows=2, cols=2 > package used to perform factorization: petsc > total: nonzeros=4, allocated nonzeros=4 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 1 nodes, limit used is 5 > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij > rows=2, cols=2 > total: nonzeros=4, allocated nonzeros=10 > total number of mallocs used during MatSetValues calls =0 > using I-node routines: found 1 nodes, limit used is 5 > linear system matrix = precond matrix: > Mat Object: 4 MPI processes > type: mpiaij > rows=2, cols=2 > total: nonzeros=4, allocated nonzeros=20 > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 1 nodes, limit used is 5 > > -gideon From bsmith at mcs.anl.gov Thu Mar 30 17:11:29 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 30 Mar 2017 17:11:29 -0500 Subject: [petsc-users] Correlation between da_refine and pg_mg_levels In-Reply-To: References: Message-ID: You should always work with master git branch. Tar balls and releases are for people who learned Unix on Digital Vaxen. And never try to jump back and forth between master and releases ("just because the release was already installed on some machine"), that will drive you nuts. 
There was some changes in the handling of options for DM (and DMDA) since the 3.7 release. Previously if you passed negative numbers for the grid spacing variables and DMDA would check the options database and allow the values to be changed. But if you passed positive values the options would be ignored. This was recently fixed to use the standard PETSc paradigm of DMSetFromOptions() DMSetUp() after the call to DMDACreate...(). Barry > On Mar 30, 2017, at 4:27 PM, Justin Chang wrote: > > -dm_refine didn't work either > > On Thu, Mar 30, 2017 at 4:21 PM, Matthew Knepley wrote: > I think its now -dm_refine > > Matt > > On Thu, Mar 30, 2017 at 4:17 PM, Justin Chang wrote: > Also, this was the output before the error message: > > Level 0 domain size (m) 1e+04 x 1e+04 x 1e+03, num elements 5 x 5 x 3 (75), size (m) 2000. x 2000. x 500. > Level -1 domain size (m) 1e+04 x 1e+04 x 1e+03, num elements 2 x 2 x 2 (8), size (m) 5000. x 5000. x 1000. > Level -2 domain size (m) 1e+04 x 1e+04 x 1e+03, num elements 1 x 1 x 1 (1), size (m) 10000. x 10000. x inf. > > Which tells me '-da_refine 4' is not registering > > On Thu, Mar 30, 2017 at 4:15 PM, Justin Chang wrote: > Okay I'll give it a shot. > > Somewhat unrelated, but I tried running this on Cori's Haswell node (loaded the module 'petsc/3.7.4-64'). But I get these errors: > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Argument out of range > [0]PETSC ERROR: Partition in y direction is too fine! 0 1 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.7.4, Oct, 02, 2016 > [0]PETSC ERROR: /global/u1/j/jychang/Icesheet/./ex48 on a arch-cori-opt64-INTEL-3.7.4-64 named nid00020 by jychang Thu Mar 30 14:04:35 2017 > [0]PETSC ERROR: Configure options --known-sizeof-void-p=8 --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-size_t=8 --known-mpi-int64_t=1 --known-has-attribute-aligned=1 --prefix=/global/common/cori/software/petsc/3.7.4-64/hsw/intel PETSC_ARCH=arch-cori-opt64-INTEL-3.7.4-64 --COPTFLAGS="-mkl -O2 -no-ipo -g -axMIC-AVX512,CORE-AVX2,AVX" --CXXOPTFLAGS="-mkl -O2 -no-ipo -g -axMIC-AVX512,CORE-AVX2,AVX" --FOPTFLAGS="-mkl -O2 -no-ipo -g -axMIC-AVX512,CORE-AVX2,AVX" --with-hdf5-dir=/opt/cray/pe/hdf5-parallel/1.8.16/INTEL/15.0 --with-hwloc-dir=/global/common/cori/software/hwloc/1.11.4/hsw --with-scalapack-include=/opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/include --with-scalapack-lib= --LIBS="-mkl -L/global/common/cori/software/petsc/3.7.4-64/hsw/intel/lib -I/global/common/cori/software/petsc/3.7.4-64/hsw/intel/include -L/global/common/cori/software/xz/5.2.2/hsw/lib -I/global/common/cori/software/xz/5.2.2/hsw/include -L/global/common/cori/software/zlib/1.2.8/hsw/intel/lib -I/global/common/cori/software/zlib/1.2.8/hsw/intel/include -L/global/common/cori/software/libxml2/2.9.4/hsw/lib -I/global/common/cori/software/libxml2/2.9.4/hsw/include -L/global/common/cori/software/numactl/2.0.11/hsw/lib -I/global/common/cori/software/numactl/2.0.11/hsw/include -L/global/common/cori/software/hwloc/1.11.4/hsw/lib -I/global/common/cori/software/hwloc/1.11.4/hsw/include -L/global/common/cori/software/openssl/1.1.0a/hsw/lib -I/global/common/cori/software/openssl/1.1.0a/hsw/include -L/global/common/cori/software/subversion/1.9.4/hsw/lib -I/global/common/cori/software/subversion/1.9.4/hsw/include -lhwloc 
-lpciaccess -lxml2 -lz -llzma -Wl,--start-group /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_scalapack_lp64.a /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_core.a /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_intel_thread.a /opt/intel/compilers_and_libraries_2017.1.132/linux/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group -lstdc++" --download-parmetis --download-metis --with-ssl=0 --with-batch --known-mpi-shared-libraries=0 --with-clib-autodetect=0 --with-cxxlib-autodetect=0 --with-debugging=0 --with-fortranlib-autodetect=0 --with-mpiexec=srun --with-shared-libraries=0 --with-x=0 --known-mpi-int64-t=0 --known-bits-per-byte=8 --known-sdot-returns-double=0 --known-snrm2-returns-double=0 --known-level1-dcache-assoc=0 --known-level1-dcache-linesize=32 --known-level1-dcache-size=32768 --known-memcmp-ok=1 --known-mpi-c-double-complex=1 --known-mpi-long-double=1 --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 --known-sizeof-char=1 --known-sizeof-double=8 CC=cc MPICC=cc CXX=CC MPICXX=CC FC=ftn F77=ftn F90=ftn MPIF90=ftn MPIF77=ftn CFLAGS=-axMIC-AVX512,CORE-AVX2,AVX CXXFLAGS=-axMIC-AVX512,CORE-AVX2,AVX FFLAGS=-axMIC-AVX512,CORE-AVX2,AVX CC=cc MPICC=cc CXX=CC MPICXX=CC FC=ftn F77=ftn F90=ftn MPIF90=ftn MPIF77=ftn CFLAGS=-fPIC FFLAGS=-fPIC LDFLAGS=-fPIE --download-hypre --with-64-bit-indices > [0]PETSC ERROR: #1 DMSetUp_DA_3D() line 298 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/impls/da/da3.c > [0]PETSC ERROR: #2 DMSetUp_DA() line 27 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/impls/da/dareg.c > [0]PETSC ERROR: #3 DMSetUp() line 744 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/interface/dm.c > [0]PETSC ERROR: #4 DMCoarsen_DA() line 1196 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/impls/da/da.c > [0]PETSC ERROR: #5 DMCoarsen() line 2371 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/dm/interface/dm.c > [0]PETSC ERROR: #6 PCSetUp_MG() line 616 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/ksp/pc/impls/mg/mg.c > [0]PETSC ERROR: #7 PCSetUp() line 968 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #8 KSPSetUp() line 390 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #9 KSPSolve() line 599 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #10 SNESSolve_NEWTONLS() line 230 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/snes/impls/ls/ls.c > [0]PETSC ERROR: #11 SNESSolve() line 4005 in /global/cscratch1/sd/swowner/sleak/petsc-3.7.4/src/snes/interface/snes.c > [0]PETSC ERROR: #12 main() line 1548 in /global/homes/j/jychang/Icesheet/ex48.c > [0]PETSC ERROR: PETSc Option Table entries: > [0]PETSC ERROR: -da_refine 4 > [0]PETSC ERROR: -ksp_rtol 1e-7 > [0]PETSC ERROR: -M 5 > [0]PETSC ERROR: -N 5 > [0]PETSC ERROR: -P 3 > [0]PETSC ERROR: -pc_mg_levels 5 > [0]PETSC ERROR: -pc_type mg > [0]PETSC ERROR: -thi_mat_type baij > [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov---------- > Rank 0 [Thu Mar 30 14:04:35 2017] [c0-0c0s5n0] application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0 > srun: error: nid00020: task 0: Aborted > srun: Terminating job step 4363145.1z > > it seems to me the PETSc from this module is not registering the '-da_refine' entry. 
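For reference, the master-branch setup pattern Barry describes above, in which the DM only reads its options in DMSetFromOptions() and is built in DMSetUp(), looks roughly like this. It is a minimal sketch, not code taken from ex48, and error checking is abbreviated:

   #include <petscdmda.h>

   DM             da;
   PetscErrorCode ierr;

   ierr = DMDACreate3d(PETSC_COMM_WORLD,
                       DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                       DMDA_STENCIL_BOX,
                       5, 5, 3,                                 /* seed sizes (-M -N -P)   */
                       PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                       1, 1, NULL, NULL, NULL, &da);CHKERRQ(ierr);
   ierr = DMSetFromOptions(da);CHKERRQ(ierr);   /* DM options such as -da_refine apply here */
   ierr = DMSetUp(da);CHKERRQ(ierr);            /* only now is the DMDA actually created    */

In the 3.7 release the options were instead consumed inside DMDACreate3d() itself and, as Barry explains above, only honoured when negative grid sizes were passed, which would explain why the 3.7.4 module on Cori ignores -da_refine here.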
This is strange because I have no issue with this on the latest petsc-dev version, anyone know about this error and/or why it happens? > > On Thu, Mar 30, 2017 at 3:39 PM, Matthew Knepley wrote: > On Thu, Mar 30, 2017 at 3:38 PM, Justin Chang wrote: > Okay, got it. What are the options for setting GAMG as the coarse solver? > > -mg_coarse_pc_type gamg I think > > On Thu, Mar 30, 2017 at 3:37 PM, Matthew Knepley wrote: > On Thu, Mar 30, 2017 at 3:04 PM, Justin Chang wrote: > Yeah based on my experiments it seems setting pc_mg_levels to $DAREFINE + 1 has decent performance. > > 1) is there ever a case where you'd want $MGLEVELS <= $DAREFINE? In some of the PETSc tutorial slides (e.g., http://www.mcs.anl.gov/petsc/documentation/tutorials/TutorialCEMRACS2016.pdf on slide 203/227) they say to use $MGLEVELS = 4 and $DAREFINE = 5, but when I ran this, it was almost twice as slow as if $MGLEVELS >= $DAREFINE > > Depending on how big the initial grid is, you may want this. There is a balance between coarse grid and fine grid work. > > 2) So I understand that one cannot refine further than one grid point in each direction, but is there any benefit to having $MGLEVELS > $DAREFINE by a lot? > > Again, it depends on the size of the initial grid. > > On really large problems, you want to use GAMG as the coarse solver, which will move the problem onto a smaller number of nodes > so that you can coarsen further. > > Matt > > Thanks, > Justin > > On Thu, Mar 30, 2017 at 2:35 PM, Barry Smith wrote: > > -da_refine $DAREFINE determines how large the final problem will be. > > By default if you don't supply pc_mg_levels then it uses $DAREFINE + 1 as the number of levels of MG to use; for example -da_refine 1 would result in 2 levels of multigrid. > > > > On Mar 30, 2017, at 2:17 PM, Justin Chang wrote: > > > > Hi all, > > > > Just a general conceptual question: say I am tinkering around with SNES ex48.c and am running the program with these options: > > > > mpirun -n $NPROCS -pc_type mg -M $XSEED -N $YSEED -P $ZSEED -thi_mat_type baij -da_refine $DAREFINE -pc_mg_levels $MGLEVELS > > > > I am not too familiar with mg, but it seems to me there is a very strong correlation between $MGLEVELS and $DAREFINE as well as perhaps even the initial coarse grid size (provided by $X/YZSEED). > > > > Is there a rule of thumb on how these parameters should be? I am guessing it probably is also hardware/architectural dependent? > > > > Thanks, > > Justin > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
> -- Norbert Wiener > From gideon.simpson at gmail.com Thu Mar 30 19:09:40 2017 From: gideon.simpson at gmail.com (Gideon Simpson) Date: Thu, 30 Mar 2017 20:09:40 -0400 Subject: [petsc-users] snes parameter suggestions Message-ID: <8D266108-EB18-4C68-B384-A5761A689F07@gmail.com> About a month ago, I mentioned that I was trying to set up a projected integration scheme within petsc, where I use a classical integrator (i.e., RK4), at each time step, and then correct my prediction dependent variable, yp, by solving a nonlinear equation g(y + lambda * f(yp)) =0 for a scalar parameter lambda. Out of stubbornness, I did this entirely within the confines of petsc, using a SNES. Following up on a comment of Barry?s, about the solver taking an excessive number of function evaluations, I realized that, in fact, the SNES was failing to converge (algorithmically), even though it was giving reasonable answers. In particular, I see output like what is displayed below. I am using the default snes/ksp solvers with default tolerances. It would seem to me that I should have been quite happy after 1 SNES iteration, given that this is a scalar problem. This can obviously be done by setting the atol to something like 1e-12, but I was curious if people had other thoughts on this. 0 SNES Function norm 5.142950291311e-10 0 KSP Residual norm 6.057087103783e-11 1 KSP Residual norm 1.681179391195e-26 Line search: Using full step: fnorm 5.142950291311e-10 gnorm 5.783398860650e-14 1 SNES Function norm 5.783398860650e-14 0 KSP Residual norm 5.520053977167e-15 1 KSP Residual norm 1.370372252609e-30 Line search: gnorm after quadratic fit 5.728578676879e-14 Line search: Quadratically determined step, lambda=3.9611360239162957e-01 2 SNES Function norm 5.728578676879e-14 0 KSP Residual norm 5.024285935857e-15 1 KSP Residual norm 2.789038964144e-31 Line search: gnorm after quadratic fit 4.278033777465e-14 Line search: Quadratically determined step, lambda=2.4691358024691357e-01 3 SNES Function norm 4.278033777465e-14 0 KSP Residual norm 3.520343148370e-15 1 KSP Residual norm 5.527264229234e-31 Line search: gnorm after quadratic fit 2.842170943040e-14 Line search: Quadratically determined step, lambda=2.5438596491228038e-01 4 SNES Function norm 2.842170943040e-14 0 KSP Residual norm 2.016428211944e-15 1 KSP Residual norm 2.238685028403e-31 Line search: gnorm after quadratic fit 5.695433295430e-14 Line search: Cubic step no good, shrinking lambda, current gnorm 4.278033777465e-14 lambda=1.0000000000000002e-02 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=1.0000000000000002e-03 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=5.0000000000000012e-04 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=2.1132486540518717e-04 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=9.2196144189362134e-05 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=4.0004514620095227e-05 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=1.7374756353482527e-05 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=7.5449506476837614e-06 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=3.2764733594125655e-06 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=1.4228354923470249e-06 Line search: 
Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=6.1787855254724169e-07 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=2.6831903567985152e-07 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=1.1651983473611860e-07 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=5.0599733967314922e-08 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=2.1973366898757625e-08 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=9.5421223580158174e-09 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=4.1437481801087470e-09 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=1.7994580593128418e-09 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=7.8143004026450871e-10 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=3.3934267301617141e-10 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=1.4736245574944127e-10 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=6.3993405755577026e-11 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=2.7789683331288042e-11 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=1.2067907474762995e-11 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=5.2405919521750200e-12 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=2.2757718408626572e-12 Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=9.8827337043745462e-13 Line search: unable to find good step length! After 27 tries Line search: fnorm=2.8421709430404007e-14, gnorm=2.8421709430404007e-14, ynorm=2.0164282119435693e-15, minlambda=9.9999999999999998e-13, lambda=9.8827337043745462e-13, initial slope=-8.0779356694631465e-28 -gideon -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Mar 30 19:46:29 2017 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 30 Mar 2017 19:46:29 -0500 Subject: [petsc-users] snes parameter suggestions In-Reply-To: <8D266108-EB18-4C68-B384-A5761A689F07@gmail.com> References: <8D266108-EB18-4C68-B384-A5761A689F07@gmail.com> Message-ID: On Thu, Mar 30, 2017 at 7:09 PM, Gideon Simpson wrote: > About a month ago, I mentioned that I was trying to set up a projected > integration scheme within petsc, where I use a classical integrator (i.e., > RK4), at each time step, and then correct my prediction dependent variable, > yp, by solving a nonlinear equation g(y + lambda * f(yp)) =0 for a scalar > parameter lambda. Out of stubbornness, I did this entirely within the > confines of petsc, using a SNES. Following up on a comment of Barry?s, > about the solver taking an excessive number of function evaluations, I > realized that, in fact, the SNES was failing to converge (algorithmically), > even though it was giving reasonable answers. In particular, I see output > like what is displayed below. > > I am using the default snes/ksp solvers with default tolerances. It would > seem to me that I should have been quite happy after 1 SNES iteration, > given that this is a scalar problem. 
This can obviously be done by setting > the atol to something like 1e-12, but I was curious if people had other > thoughts on this. > I think its possible that your Jacobian is not accurate enough to converge below 1e-13, or that your residual evaluation is no longer accurate below that. I would set the atol as you suggest. Matt > 0 SNES Function norm 5.142950291311e-10 > 0 KSP Residual norm 6.057087103783e-11 > 1 KSP Residual norm 1.681179391195e-26 > Line search: Using full step: fnorm 5.142950291311e-10 gnorm > 5.783398860650e-14 > 1 SNES Function norm 5.783398860650e-14 > 0 KSP Residual norm 5.520053977167e-15 > 1 KSP Residual norm 1.370372252609e-30 > Line search: gnorm after quadratic fit 5.728578676879e-14 > Line search: Quadratically determined step, > lambda=3.9611360239162957e-01 > 2 SNES Function norm 5.728578676879e-14 > 0 KSP Residual norm 5.024285935857e-15 > 1 KSP Residual norm 2.789038964144e-31 > Line search: gnorm after quadratic fit 4.278033777465e-14 > Line search: Quadratically determined step, > lambda=2.4691358024691357e-01 > 3 SNES Function norm 4.278033777465e-14 > 0 KSP Residual norm 3.520343148370e-15 > 1 KSP Residual norm 5.527264229234e-31 > Line search: gnorm after quadratic fit 2.842170943040e-14 > Line search: Quadratically determined step, > lambda=2.5438596491228038e-01 > 4 SNES Function norm 2.842170943040e-14 > 0 KSP Residual norm 2.016428211944e-15 > 1 KSP Residual norm 2.238685028403e-31 > Line search: gnorm after quadratic fit 5.695433295430e-14 > Line search: Cubic step no good, shrinking lambda, current gnorm > 4.278033777465e-14 lambda=1.0000000000000002e-02 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=1.0000000000000002e-03 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=5.0000000000000012e-04 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=2.1132486540518717e-04 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=9.2196144189362134e-05 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=4.0004514620095227e-05 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=1.7374756353482527e-05 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=7.5449506476837614e-06 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=3.2764733594125655e-06 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=1.4228354923470249e-06 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=6.1787855254724169e-07 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=2.6831903567985152e-07 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=1.1651983473611860e-07 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=5.0599733967314922e-08 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=2.1973366898757625e-08 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=9.5421223580158174e-09 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=4.1437481801087470e-09 > Line search: Cubic step no good, 
shrinking lambda, current gnorm > 2.842170943040e-14 lambda=1.7994580593128418e-09 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=7.8143004026450871e-10 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=3.3934267301617141e-10 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=1.4736245574944127e-10 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=6.3993405755577026e-11 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=2.7789683331288042e-11 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=1.2067907474762995e-11 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=5.2405919521750200e-12 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=2.2757718408626572e-12 > Line search: Cubic step no good, shrinking lambda, current gnorm > 2.842170943040e-14 lambda=9.8827337043745462e-13 > Line search: unable to find good step length! After 27 tries > Line search: fnorm=2.8421709430404007e-14, > gnorm=2.8421709430404007e-14, ynorm=2.0164282119435693e-15, > minlambda=9.9999999999999998e-13, lambda=9.8827337043745462e-13, initial > slope=-8.0779356694631465e-28 > > > > -gideon > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Mar 30 20:21:43 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 30 Mar 2017 20:21:43 -0500 Subject: [petsc-users] snes parameter suggestions In-Reply-To: <8D266108-EB18-4C68-B384-A5761A689F07@gmail.com> References: <8D266108-EB18-4C68-B384-A5761A689F07@gmail.com> Message-ID: <1EF2433B-0DC9-4408-9EE7-B29F9A5FAF8A@mcs.anl.gov> Developing solver algorithms and software is easy, developing robust convergence test algorithms is difficult. > On Mar 30, 2017, at 7:09 PM, Gideon Simpson wrote: > > About a month ago, I mentioned that I was trying to set up a projected integration scheme within petsc, where I use a classical integrator (i.e., RK4), at each time step, and then correct my prediction dependent variable, yp, by solving a nonlinear equation g(y + lambda * f(yp)) =0 for a scalar parameter lambda. Out of stubbornness, I did this entirely within the confines of petsc, using a SNES. Following up on a comment of Barry?s, about the solver taking an excessive number of function evaluations, I realized that, in fact, the SNES was failing to converge (algorithmically), even though it was giving reasonable answers. In particular, I see output like what is displayed below. > > I am using the default snes/ksp solvers with default tolerances. It would seem to me that I should have been quite happy after 1 SNES iteration, given that this is a scalar problem. This can obviously be done by setting the atol to something like 1e-12, but I was curious if people had other thoughts on this. 
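Concretely, the absolute tolerance mentioned by Gideon and Matt can be imposed either with -snes_atol 1e-12 on the command line or in code. A sketch, assuming snes is the solver used for the scalar projection step and keeping the other tolerances at their defaults; 1e-12 is simply the value floated in this thread, not a recommendation:

   ierr = SNESSetTolerances(snes,
                            1e-12,          /* atol: declare convergence once ||F|| < 1e-12 */
                            PETSC_DEFAULT,  /* rtol                                         */
                            PETSC_DEFAULT,  /* stol (step-length tolerance)                 */
                            PETSC_DEFAULT,  /* maximum Newton iterations                    */
                            PETSC_DEFAULT); /* maximum function evaluations                 */
   CHKERRQ(ierr);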
> > > > 0 SNES Function norm 5.142950291311e-10 > 0 KSP Residual norm 6.057087103783e-11 > 1 KSP Residual norm 1.681179391195e-26 > Line search: Using full step: fnorm 5.142950291311e-10 gnorm 5.783398860650e-14 > 1 SNES Function norm 5.783398860650e-14 > 0 KSP Residual norm 5.520053977167e-15 > 1 KSP Residual norm 1.370372252609e-30 > Line search: gnorm after quadratic fit 5.728578676879e-14 > Line search: Quadratically determined step, lambda=3.9611360239162957e-01 > 2 SNES Function norm 5.728578676879e-14 > 0 KSP Residual norm 5.024285935857e-15 > 1 KSP Residual norm 2.789038964144e-31 > Line search: gnorm after quadratic fit 4.278033777465e-14 > Line search: Quadratically determined step, lambda=2.4691358024691357e-01 > 3 SNES Function norm 4.278033777465e-14 > 0 KSP Residual norm 3.520343148370e-15 > 1 KSP Residual norm 5.527264229234e-31 > Line search: gnorm after quadratic fit 2.842170943040e-14 > Line search: Quadratically determined step, lambda=2.5438596491228038e-01 > 4 SNES Function norm 2.842170943040e-14 > 0 KSP Residual norm 2.016428211944e-15 > 1 KSP Residual norm 2.238685028403e-31 > Line search: gnorm after quadratic fit 5.695433295430e-14 > Line search: Cubic step no good, shrinking lambda, current gnorm 4.278033777465e-14 lambda=1.0000000000000002e-02 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=1.0000000000000002e-03 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=5.0000000000000012e-04 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=2.1132486540518717e-04 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=9.2196144189362134e-05 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=4.0004514620095227e-05 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=1.7374756353482527e-05 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=7.5449506476837614e-06 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=3.2764733594125655e-06 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=1.4228354923470249e-06 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=6.1787855254724169e-07 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=2.6831903567985152e-07 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=1.1651983473611860e-07 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=5.0599733967314922e-08 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=2.1973366898757625e-08 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=9.5421223580158174e-09 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=4.1437481801087470e-09 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=1.7994580593128418e-09 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=7.8143004026450871e-10 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=3.3934267301617141e-10 > Line search: Cubic step no good, shrinking lambda, 
current gnorm 2.842170943040e-14 lambda=1.4736245574944127e-10 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=6.3993405755577026e-11 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=2.7789683331288042e-11 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=1.2067907474762995e-11 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=5.2405919521750200e-12 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=2.2757718408626572e-12 > Line search: Cubic step no good, shrinking lambda, current gnorm 2.842170943040e-14 lambda=9.8827337043745462e-13 > Line search: unable to find good step length! After 27 tries > Line search: fnorm=2.8421709430404007e-14, gnorm=2.8421709430404007e-14, ynorm=2.0164282119435693e-15, minlambda=9.9999999999999998e-13, lambda=9.8827337043745462e-13, initial slope=-8.0779356694631465e-28 > > > > -gideon > From toon.weyens at gmail.com Fri Mar 31 09:45:00 2017 From: toon.weyens at gmail.com (Toon Weyens) Date: Fri, 31 Mar 2017 14:45:00 +0000 Subject: [petsc-users] Slepc JD and GD converge to wrong eigenpair In-Reply-To: References: <65A0A5E7-399B-4D19-A967-73765A96DB98@dsic.upv.es> <2A5BFE40-C401-42CA-944A-9008E57B55EB@dsic.upv.es> Message-ID: Dear both, I have recompiled slepc and petsc without debugging, as well as with the recommended --with-fortran-kernels=1. In the attachment I show the scaling for a typical "large" simulation with about 120 000 unkowns, using Krylov-Schur. There are two sets of datapoints there, as I do two EPS solves in one simulations. The second solve is faster as it results from a grid refinement of the first solve, and takes the solution of the first solve as a first, good guess. Note that there are two pages in the PDF and in the second page I show the time ? n_procs. As you can see, the scaling is better than before, especially up to 8 processes (which means about 15,000 unknowns per process, which is, as I recall, cited as a good minimum on the website. I am currently trying to run make streams NPMAX=8, but the cluster is extraordinarily crowded today and it does not like my interactive jobs. I will try to run them asap. The main issue now, however, is again the first issue: the Generalizeid Davidson method does not converge to the physically correct negative eigenvalue (it should be about -0.05 as Krylov-Schur gives me). In stead it stays stuck at some small positive eigenvalue of about +0.0002. It looks as if the solver really does not like passing the eigenvalue = 0 barrier, a behavior I also see in smaller simulations, where the convergence is greatly slowed down when crossing this. However, this time, for this big simulation, just increasing NCV does *not* do the trick, at least not until NCV=2048. Also, I tried to use target magnitude without success either. I started implementing the capability to start with Krylov-Schur and then switch to GD with EPSSetInitialSpace when a certain precision has been reached, but then realized it might be a bit of overkill as the SLEPC solution phase in my code is generally not more than 15% of the time. There are probably other places where I can gain more than a few percents. However, if there is another trick that can make GD to work, it would certainly be appreciated, as in my experience it is really about 5 times faster than Krylov-Schur! Thanks! 
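For reference, the settings mentioned in this message map to the following SLEPc calls, shown here with the C bindings. This is only a sketch of where the knobs live: eps is assumed to be the already-created EPS object, and the numbers are the ones quoted in the thread rather than suggestions.

   ierr = EPSSetType(eps, EPSGD);CHKERRQ(ierr);                        /* or EPSKRYLOVSCHUR */
   ierr = EPSSetDimensions(eps, 1, 2048, PETSC_DEFAULT);CHKERRQ(ierr); /* nev, ncv, mpd     */
   ierr = EPSSetTarget(eps, -0.05);CHKERRQ(ierr);                      /* expected value    */
   ierr = EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE);CHKERRQ(ierr);

The command-line equivalents are -eps_type gd -eps_nev 1 -eps_ncv 2048 -eps_target -0.05 -eps_target_magnitude.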
Toon On Thu, Mar 30, 2017 at 2:47 PM Matthew Knepley wrote: > On Thu, Mar 30, 2017 at 3:05 AM, Jose E. Roman wrote: > > > > El 30 mar 2017, a las 9:27, Toon Weyens > escribi?: > > > > Hi, thanks for the answer. > > > > I use MUMPS as a PC. The options -ksp_converged_reason, > -ksp_monitor_true_residual and -ksp_view are not used. > > > > The difference between the log_view outputs of running a simple solution > with 1, 2, 3 or 4 MPI procs is attached (debug version). > > > > I can see that with 2 procs it takes about 22 seconds, versus 7 seconds > for 1 proc. For 3 and 4 the situation is worse: 29 and 37 seconds. > > > > Looks like the difference is mainly in the BVmult and especially in the > BVorthogonalize routines: > > > > BVmult takes 1, 6.5, 10 or even a whopping 17 seconds for the different > number of proceses > > BVorthogonalize takes 1, 4, 6, 10. > > > > Calculating the preconditioner does not take more time for different > number of proceses, and applying it only slightly increases. So it cannot > be mumps' fault... > > > > Does this makes sense? Is there any way to improve this? > > > > Thanks! > > Cannot trust performance data in a debug build: > > > Yes, you should definitely make another build configured using > --with-debugging=no. > > What do you get for STREAMS on this machine > > make streams NP=4 > > From this data, it looks like you have already saturated the bandwidth at > 2 procs. > > Thanks, > > Matt > > > > ########################################################## > # # > # WARNING!!! # > # # > # This code was compiled with a debugging option, # > # To get timing results run ./configure # > # using --with-debugging=no, the performance will # > # be generally two or three times faster. # > # # > ########################################################## > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: time_SLEPC.pdf Type: application/pdf Size: 11716 bytes Desc: not available URL: From HBuesing at eonerc.rwth-aachen.de Fri Mar 31 10:09:25 2017 From: HBuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Fri, 31 Mar 2017 15:09:25 +0000 Subject: [petsc-users] Adaptive mesh refinement for transient problem Message-ID: Dear all, I was wondering if it is possible to integrate adaptive mesh refinement into my code (transient, cell centered on a regular grid). I was looking at DMForest, but I'm unsure how much effort is needed to include this. What would I need to do? Write some indicator functions to mark cells for refinement/coarsening. Prolongation and restriction operators should be just a matter of averaging. But what about the hanging nodes? The two-point flux approximation I'm using would give wrong results here... Thank you! Henrik -- Dipl.-Math. Henrik B?sing Institute for Applied Geophysics and Geothermal Energy E.ON Energy Research Center RWTH Aachen University ------------------------------------------------------ Mathieustr. 
10 | Tel +49 (0)241 80 49907 52074 Aachen, Germany | Fax +49 (0)241 80 49889 ------------------------------------------------------ http://www.eonerc.rwth-aachen.de/GGE hbuesing at eonerc.rwth-aachen.de ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Fri Mar 31 10:00:25 2017 From: jed at jedbrown.org (Jed Brown) Date: Fri, 31 Mar 2017 09:00:25 -0600 Subject: [petsc-users] Correlation between da_refine and pg_mg_levels In-Reply-To: References: Message-ID: <87y3vlmqye.fsf@jedbrown.org> Justin Chang writes: > Yeah based on my experiments it seems setting pc_mg_levels to $DAREFINE + 1 > has decent performance. > > 1) is there ever a case where you'd want $MGLEVELS <= $DAREFINE? In some of > the PETSc tutorial slides (e.g., http://www.mcs.anl.gov/ > petsc/documentation/tutorials/TutorialCEMRACS2016.pdf on slide 203/227) > they say to use $MGLEVELS = 4 and $DAREFINE = 5, but when I ran this, it > was almost twice as slow as if $MGLEVELS >= $DAREFINE Smaller coarse grids are generally more scalable -- when the problem data is distributed, multigrid is a good solution algorithm. But if multigrid stops being effective because it is not preserving sufficient coarse grid accuracy (e.g., for transport-dominated problems in complicated domains) then you might want to stop early and use a more robust method (like direct solves). -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From jroman at dsic.upv.es Fri Mar 31 10:12:07 2017 From: jroman at dsic.upv.es (Jose E. Roman) Date: Fri, 31 Mar 2017 17:12:07 +0200 Subject: [petsc-users] Slepc JD and GD converge to wrong eigenpair In-Reply-To: References: <65A0A5E7-399B-4D19-A967-73765A96DB98@dsic.upv.es> <2A5BFE40-C401-42CA-944A-9008E57B55EB@dsic.upv.es> Message-ID: In order to answer about GD I would need to know all the settings you are using. Also if you could send me the matrix I could do some tests. GD and JD are preconditioned eigensolvers, which need a reasonably good preconditioner. But MUMPS is a direct solver, not a preconditioner, and that is often counterproductive in this kind of methods. Jose > El 31 mar 2017, a las 16:45, Toon Weyens escribi?: > > Dear both, > > I have recompiled slepc and petsc without debugging, as well as with the recommended --with-fortran-kernels=1. In the attachment I show the scaling for a typical "large" simulation with about 120 000 unkowns, using Krylov-Schur. > > There are two sets of datapoints there, as I do two EPS solves in one simulations. The second solve is faster as it results from a grid refinement of the first solve, and takes the solution of the first solve as a first, good guess. Note that there are two pages in the PDF and in the second page I show the time ? n_procs. > > As you can see, the scaling is better than before, especially up to 8 processes (which means about 15,000 unknowns per process, which is, as I recall, cited as a good minimum on the website. > > I am currently trying to run make streams NPMAX=8, but the cluster is extraordinarily crowded today and it does not like my interactive jobs. I will try to run them asap. > > The main issue now, however, is again the first issue: the Generalizeid Davidson method does not converge to the physically correct negative eigenvalue (it should be about -0.05 as Krylov-Schur gives me). 
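As a concrete illustration of the point above about preconditioned eigensolvers (purely a sketch of the option syntax, not a verified fix for this matrix), a Davidson run would typically combine the STPRECOND spectral transformation, which is the default for GD/JD, with an inexpensive preconditioner rather than a full MUMPS factorization, e.g.

   -eps_type gd -eps_nev 1 -eps_target -0.05 -eps_target_magnitude \
   -st_type precond -st_pc_type bjacobi

(block Jacobi with its default ILU sub-solver is just one possible choice), whereas a full LU with MUMPS is the natural companion of Krylov-Schur with shift-and-invert (-st_type sinvert).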
In stead it stays stuck at some small positive eigenvalue of about +0.0002. It looks as if the solver really does not like passing the eigenvalue = 0 barrier, a behavior I also see in smaller simulations, where the convergence is greatly slowed down when crossing this. > > However, this time, for this big simulation, just increasing NCV does not do the trick, at least not until NCV=2048. > > Also, I tried to use target magnitude without success either. > > I started implementing the capability to start with Krylov-Schur and then switch to GD with EPSSetInitialSpace when a certain precision has been reached, but then realized it might be a bit of overkill as the SLEPC solution phase in my code is generally not more than 15% of the time. There are probably other places where I can gain more than a few percents. > > However, if there is another trick that can make GD to work, it would certainly be appreciated, as in my experience it is really about 5 times faster than Krylov-Schur! > > Thanks! > > Toon From knepley at gmail.com Fri Mar 31 10:15:19 2017 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 31 Mar 2017 10:15:19 -0500 Subject: [petsc-users] Adaptive mesh refinement for transient problem In-Reply-To: References: Message-ID: On Fri, Mar 31, 2017 at 10:09 AM, Buesing, Henrik < HBuesing at eonerc.rwth-aachen.de> wrote: > Dear all, > > > > I was wondering if it is possible to integrate adaptive mesh refinement > into my code (transient, cell centered on a regular grid). I was looking at > DMForest, but I?m unsure how much effort is needed to include this. > The honest answer is, a lot of effort right now. Just using a grid you know you want is not that hard. However, incorporating a real refinement loop is not worked out yet. You can see how much code we had to use in TS ex11. Toby and I hope to have a much more user-friendly interface soon (meaning by the end of summer). You can either wait for that, or help us get it up and going :) > What would I need to do? Write some indicator functions to mark cells for > refinement/coarsening. Prolongation and restriction operators should be > just a matter of averaging. > > But what about the hanging nodes? The two-point flux approximation I?m > using would give wrong results here? > 1) If you have an idea what the indicator function should be, great! We can do things just like TS ex11. 2) Toby has already coded up a consistent interpolation for FV on non-conforming grids. Yes 2-point flux is a problem. In TS ex11 we use a pointwise Riemann solver (I think that is the right term). Jed and Toby understand these things better than I do. Thanks, Matt Thank you! > Henrik > > > > -- > > Dipl.-Math. Henrik B?sing > > Institute for Applied Geophysics and Geothermal Energy > > E.ON Energy Research Center > > RWTH Aachen University > > ------------------------------------------------------ > > Mathieustr. 10 | Tel +49 (0)241 80 49907 > <+49%20241%208049907> > > 52074 Aachen, Germany | Fax +49 (0)241 80 49889 > <+49%20241%208049889> > > ------------------------------------------------------ > > http://www.eonerc.rwth-aachen.de/GGE > > hbuesing at eonerc.rwth-aachen.de > > ------------------------------------------------------ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ling.zou at inl.gov Fri Mar 31 10:28:48 2017 From: ling.zou at inl.gov (Zou, Ling) Date: Fri, 31 Mar 2017 09:28:48 -0600 Subject: [petsc-users] Proper way to abort/restart SNESSolve due to exceptions Message-ID: Hi All, I have done some researching in the PETSc email archive, but did not find a decent way to do it. Assume that, during a SNESSolve, something unphysical happening and a C++ exception is send, and now I want to stop the SNESSolve and let it try a smaller time step. Here are some pseudo code I have: while(no_converged) { try { SNESSolve(...); if (SNESSolve converged) no_converged = false; } catch(int err) { /* do some clean work here? */ dt = 0.5 * dt; } } It seems to me that it is not a good way to do, as if an exception is thrown during SNESSolve, the code go to the error catching, changing time step, and *immediately* do SNESSolve again. I would expect that there should be some clean work needed before another SNESSolve call, as I commented in the code. I found two related email threads: http://lists.mcs.anl.gov/pipermail/petsc-users/2014-August/022597.html http://lists.mcs.anl.gov/pipermail/petsc-users/2015-February/024367.html But I don't see a clear answer there. Any comments on this issue? Best, Ling -------------- next part -------------- An HTML attachment was scrubbed... URL: From HBuesing at eonerc.rwth-aachen.de Fri Mar 31 11:34:19 2017 From: HBuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Fri, 31 Mar 2017 16:34:19 +0000 Subject: [petsc-users] Adaptive mesh refinement for transient problem In-Reply-To: References: Message-ID: Von: Matthew Knepley [mailto:knepley at gmail.com] Gesendet: Freitag, 31. M?rz 2017 17:15 An: Buesing, Henrik Cc: petsc-users Betreff: Re: [petsc-users] Adaptive mesh refinement for transient problem On Fri, Mar 31, 2017 at 10:09 AM, Buesing, Henrik > wrote: Dear all, I was wondering if it is possible to integrate adaptive mesh refinement into my code (transient, cell centered on a regular grid). I was looking at DMForest, but I?m unsure how much effort is needed to include this. The honest answer is, a lot of effort right now. Just using a grid you know you want is not that hard. However, incorporating a real refinement loop is not worked out yet. You can see how much code we had to use in TS ex11. [Buesing, Henrik] I see. Had a look at TS ex11 and even excluding the Riemann solver (assuming I can interpolate the unknowns at the hanging nodes) it seems not totally trivial. Toby and I hope to have a much more user-friendly interface soon (meaning by the end of summer). You can either wait for that, or help us get it up and going :) [Buesing, Henrik] Let me finish my Ph.D. and I will come back to you for testing at the end of summer. :) ? Thank you! Henrik Thanks, Matt Thank you! Henrik -- Dipl.-Math. Henrik B?sing Institute for Applied Geophysics and Geothermal Energy E.ON Energy Research Center RWTH Aachen University ------------------------------------------------------ Mathieustr. 10 | Tel +49 (0)241 80 49907 52074 Aachen, Germany | Fax +49 (0)241 80 49889 ------------------------------------------------------ http://www.eonerc.rwth-aachen.de/GGE hbuesing at eonerc.rwth-aachen.de ------------------------------------------------------ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Fri Mar 31 12:47:24 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 31 Mar 2017 12:47:24 -0500 Subject: [petsc-users] Correlation between da_refine and pg_mg_levels In-Reply-To: <87y3vlmqye.fsf@jedbrown.org> References: <87y3vlmqye.fsf@jedbrown.org> Message-ID: > On Mar 31, 2017, at 10:00 AM, Jed Brown wrote: > > Justin Chang writes: > >> Yeah based on my experiments it seems setting pc_mg_levels to $DAREFINE + 1 >> has decent performance. >> >> 1) is there ever a case where you'd want $MGLEVELS <= $DAREFINE? In some of >> the PETSc tutorial slides (e.g., http://www.mcs.anl.gov/ >> petsc/documentation/tutorials/TutorialCEMRACS2016.pdf on slide 203/227) >> they say to use $MGLEVELS = 4 and $DAREFINE = 5, but when I ran this, it >> was almost twice as slow as if $MGLEVELS >= $DAREFINE > > Smaller coarse grids are generally more scalable -- when the problem > data is distributed, multigrid is a good solution algorithm. But if > multigrid stops being effective because it is not preserving sufficient > coarse grid accuracy (e.g., for transport-dominated problems in > complicated domains) then you might want to stop early and use a more > robust method (like direct solves). Basically for symmetric positive definite operators you can make the coarse problem as small as you like (even 1 point) in theory. For indefinite and non-symmetric problems the theory says the "coarse grid must be sufficiently fine" (loosely speaking the coarse grid has to resolve the eigenmodes for the eigenvalues to the left of the x = 0). https://www.jstor.org/stable/2158375?seq=1#page_scan_tab_contents From bsmith at mcs.anl.gov Fri Mar 31 16:45:36 2017 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 31 Mar 2017 16:45:36 -0500 Subject: [petsc-users] Proper way to abort/restart SNESSolve due to exceptions In-Reply-To: References: Message-ID: PETSc doesn't use C++ exceptions. If a catastrophic unrecoverable error occurs each PETSc routine returns a nonzero error code. All the application can do in that case is end. If a solver does not converge then PETSc does not use a nonzero error code, instead you obtain information from calls to SNESGetConvergedReason() (or KSPGetConvergedReason()) to determine if there was a convergence failure or not. If there was a lack of convergence PETSc solvers still remain in a valid state and there is no "cleanup" on the solver objects needed. We have improved the PETSc handling of failed function evaluations, failed linear solvers etc in the past year so you MUST use the master branch of the PETSc repository and not the release version. So your code can look like SNESSolve() SNESGetConvergedReason(snes,&reason); if (reason < 0) dt = .5*dt; else no_converged = false; Note that the PETSc TS time-stepping object already manages stuff like this as well as providing local error estimate controls so it is better to use the PETSc TS rather than using SNES and having to manage all the time-stepping yourself. Barry > On Mar 31, 2017, at 10:28 AM, Zou, Ling wrote: > > Hi All, > > I have done some researching in the PETSc email archive, but did not find a decent way to do it. Assume that, during a SNESSolve, something unphysical happening and a C++ exception is send, and now I want to stop the SNESSolve and let it try a smaller time step. > Here are some pseudo code I have: > > while(no_converged) > { > try > { > SNESSolve(...); > if (SNESSolve converged) > no_converged = false; > } > catch(int err) > { > /* do some clean work here? 
*/ > dt = 0.5 * dt; > } > } > > It seems to me that it is not a good way to do, as if an exception is thrown during SNESSolve, the code go to the error catching, changing time step, and immediately do SNESSolve again. I would expect that there should be some clean work needed before another SNESSolve call, as I commented in the code. > > I found two related email threads: > http://lists.mcs.anl.gov/pipermail/petsc-users/2014-August/022597.html > http://lists.mcs.anl.gov/pipermail/petsc-users/2015-February/024367.html > > But I don't see a clear answer there. > Any comments on this issue? > > Best, > > Ling From toon.weyens at gmail.com Fri Mar 31 17:01:48 2017 From: toon.weyens at gmail.com (Toon Weyens) Date: Fri, 31 Mar 2017 22:01:48 +0000 Subject: [petsc-users] Slepc JD and GD converge to wrong eigenpair In-Reply-To: References: <65A0A5E7-399B-4D19-A967-73765A96DB98@dsic.upv.es> <2A5BFE40-C401-42CA-944A-9008E57B55EB@dsic.upv.es> Message-ID: Dear Jose, I have saved the matrices in Matlab format and am sending them to you using pCloud. If you want another format, please tell me. Please also note that they are about 1.4GB each. I also attach a typical output of eps_view and log_view in output.txt, for 8 processes. Thanks so much for helping me out! I think PETSc and SLEPc are amazing inventions that have really saved me many months of work! Regards On Fri, Mar 31, 2017 at 5:12 PM Jose E. Roman wrote: In order to answer about GD I would need to know all the settings you are using. Also if you could send me the matrix I could do some tests. GD and JD are preconditioned eigensolvers, which need a reasonably good preconditioner. But MUMPS is a direct solver, not a preconditioner, and that is often counterproductive in this kind of method. Jose > El 31 mar 2017, a las 16:45, Toon Weyens escribió: > > Dear both, > > I have recompiled slepc and petsc without debugging, as well as with the recommended --with-fortran-kernels=1. In the attachment I show the scaling for a typical "large" simulation with about 120 000 unknowns, using Krylov-Schur. > > There are two sets of datapoints there, as I do two EPS solves in one simulation. The second solve is faster as it results from a grid refinement of the first solve, and takes the solution of the first solve as a first, good guess. Note that there are two pages in the PDF and in the second page I show the time × n_procs. > > As you can see, the scaling is better than before, especially up to 8 processes (which means about 15,000 unknowns per process, which is, as I recall, cited as a good minimum on the website). > > I am currently trying to run make streams NPMAX=8, but the cluster is extraordinarily crowded today and it does not like my interactive jobs. I will try to run them asap. > > The main issue now, however, is again the first issue: the Generalized Davidson method does not converge to the physically correct negative eigenvalue (it should be about -0.05 as Krylov-Schur gives me). Instead it stays stuck at some small positive eigenvalue of about +0.0002. It looks as if the solver really does not like passing the eigenvalue = 0 barrier, a behavior I also see in smaller simulations, where the convergence is greatly slowed down when crossing this. > > However, this time, for this big simulation, just increasing NCV does not do the trick, at least not until NCV=2048. > > Also, I tried to use target magnitude without success either.
> > I started implementing the capability to start with Krylov-Schur and then switch to GD with EPSSetInitialSpace when a certain precision has been reached, but then realized it might be a bit of overkill as the SLEPC solution phase in my code is generally not more than 15% of the time. There are probably other places where I can gain more than a few percents. > > However, if there is another trick that can make GD to work, it would certainly be appreciated, as in my experience it is really about 5 times faster than Krylov-Schur! > > Thanks! > > Toon -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- EPS Object: 8 MPI processes type: krylovschur Krylov-Schur: 50% of basis vectors kept after restart Krylov-Schur: using the locking variant problem type: generalized non-hermitian eigenvalue problem selected portion of the spectrum: smallest real parts number of eigenvalues (nev): 1 number of column vectors (ncv): 32 maximum dimension of projected problem (mpd): 32 maximum number of iterations: 5000 tolerance: 1e-05 convergence test: relative to the eigenvalue BV Object: 8 MPI processes type: svec 33 columns of global length 60000 vector orthogonalization method: classical Gram-Schmidt orthogonalization refinement: if needed (eta: 0.7071) block orthogonalization method: Gram-Schmidt doing matmult as a single matrix-matrix product DS Object: 8 MPI processes type: nhep ST Object: 8 MPI processes type: shift shift: 0 number of matrices: 2 all matrices have different nonzero pattern KSP Object: (st_) 8 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-08, absolute=1e-50, divergence=10000 left preconditioning using NONE norm type for convergence test PC Object: (st_) 8 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: natural factor fill ratio given 0, needed 0 Factored matrix follows: Mat Object: 8 MPI processes type: mpibaij rows=60000, cols=60000, bs=30 package used to perform factorization: mumps total: nonzeros=0, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 MUMPS run parameters: SYM (matrix type): 0 PAR (host participation): 1 ICNTL(1) (output for error): 6 ICNTL(2) (output of diagnostic msg): 0 ICNTL(3) (output for global info): 0 ICNTL(4) (level of printing): 0 ICNTL(5) (input mat struct): 0 ICNTL(6) (matrix prescaling): 7 ICNTL(7) (sequentia matrix ordering):7 ICNTL(8) (scalling strategy): 77 ICNTL(10) (max num of refinements): 0 ICNTL(11) (error analysis): 0 ICNTL(12) (efficiency control): 1 ICNTL(13) (efficiency control): 0 ICNTL(14) (percentage of estimated workspace increase): 30 ICNTL(18) (input mat struct): 3 ICNTL(19) (Shur complement info): 0 ICNTL(20) (rhs sparse pattern): 0 ICNTL(21) (solution struct): 1 ICNTL(22) (in-core/out-of-core facility): 0 ICNTL(23) (max size of memory can be allocated locally):0 ICNTL(24) (detection of null pivot rows): 0 ICNTL(25) (computation of a null space basis): 0 ICNTL(26) (Schur options for rhs or solution): 0 ICNTL(27) (experimental parameter): -24 ICNTL(28) (use parallel or sequential ordering): 1 ICNTL(29) (parallel ordering): 0 ICNTL(30) (user-specified set of entries in inv(A)): 0 ICNTL(31) (factors is discarded in the solve phase): 0 ICNTL(33) (compute determinant): 0 CNTL(1) (relative pivoting threshold): 0.01 CNTL(2) (stopping criterion of refinement): 1.49012e-08 CNTL(3) (absolute pivoting threshold): 0 CNTL(4) (value of static 
pivoting): -1 CNTL(5) (fixation for null pivots): 0 RINFO(1) (local estimated flops for the elimination after analysis): [0] 3.77789e+09 [1] 4.28195e+09 [2] 4.42748e+09 [3] 4.448e+09 [4] 4.37421e+09 [5] 4.37421e+09 [6] 4.37421e+09 [7] 3.80882e+09 RINFO(2) (local estimated flops for the assembly after factorization): [0] 3.3372e+06 [1] 2.3328e+06 [2] 2.8512e+06 [3] 2.8512e+06 [4] 2.592e+06 [5] 2.592e+06 [6] 2.592e+06 [7] 3.564e+06 RINFO(3) (local estimated flops for the elimination after factorization): [0] 3.77789e+09 [1] 4.28195e+09 [2] 4.42748e+09 [3] 4.448e+09 [4] 4.37421e+09 [5] 4.37421e+09 [6] 4.37421e+09 [7] 3.80882e+09 INFO(15) (estimated size of (in MB) MUMPS internal data for running numerical factorization): [0] 296 [1] 292 [2] 303 [3] 304 [4] 297 [5] 297 [6] 297 [7] 297 INFO(16) (size of (in MB) MUMPS internal data used during numerical factorization): [0] 296 [1] 292 [2] 303 [3] 304 [4] 297 [5] 297 [6] 297 [7] 297 INFO(23) (num of pivots eliminated on this processor after factorization): [0] 8460 [1] 6960 [2] 7320 [3] 7350 [4] 7170 [5] 7170 [6] 7170 [7] 8400 RINFOG(1) (global estimated flops for the elimination after analysis): 3.38668e+10 RINFOG(2) (global estimated flops for the assembly after factorization): 2.27124e+07 RINFOG(3) (global estimated flops for the elimination after factorization): 3.38668e+10 (RINFOG(12) RINFOG(13))*2^INFOG(34) (determinant): (0,0)*(2^0) INFOG(3) (estimated real workspace for factors on all processors after analysis): 61590600 INFOG(4) (estimated integer workspace for factors on all processors after analysis): 257430 INFOG(5) (estimated maximum front size in the complete tree): 750 INFOG(6) (number of nodes in the complete tree): 210 INFOG(7) (ordering option effectively use after analysis): 5 INFOG(8) (structural symmetry in percent of the permuted matrix after analysis): 100 INFOG(9) (total real/complex workspace to store the matrix factors after factorization): 61590600 INFOG(10) (total integer space store the matrix factors after factorization): 257430 INFOG(11) (order of largest frontal matrix after factorization): 750 INFOG(12) (number of off-diagonal pivots): 0 INFOG(13) (number of delayed pivots after factorization): 0 INFOG(14) (number of memory compress after factorization): 0 INFOG(15) (number of steps of iterative refinement after solution): 0 INFOG(16) (estimated size (in MB) of all MUMPS internal data for factorization after analysis: value on the most memory consuming processor): 304 INFOG(17) (estimated size of all MUMPS internal data for factorization after analysis: sum over all processors): 2383 INFOG(18) (size of all MUMPS internal data allocated during factorization: value on the most memory consuming processor): 304 INFOG(19) (size of all MUMPS internal data allocated during factorization: sum over all processors): 2383 INFOG(20) (estimated number of entries in the factors): 61590600 INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 255 INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 2018 INFOG(23) (after analysis: value of ICNTL(6) effectively used): 0 INFOG(24) (after analysis: value of ICNTL(12) effectively used): 1 INFOG(25) (after factorization: number of pivots modified by static pivoting): 0 INFOG(28) (after factorization: number of null pivots encountered): 0 INFOG(29) (after factorization: effective number of entries in the factors (sum over all processors)): 61590600 INFOG(30, 31) (after 
solution: size in Mbytes of memory used during solution phase): 194, 1499 INFOG(32) (after analysis: type of analysis done): 1 INFOG(33) (value used for ICNTL(8)): 7 INFOG(34) (exponent of the determinant if determinant is requested): 0 linear system matrix = precond matrix: Mat Object: 8 MPI processes type: mpibaij rows=60000, cols=60000, bs=30 total: nonzeros=2.33622e+07, allocated nonzeros=2.33622e+07 total number of mallocs used during MatSetValues calls =0 block size is 30 11:34:22: Summarize solution 11:34:22: krylovschur solver with tolerance 1.00E-05 and maximum 5000 iterations 11:34:22: number of iterations: 147 11:34:22: number of converged solutions: 1 11:34:22: number of requested solutions: 1 11:34:22: maximum dimension of the subspace to be used by solver: 32 11:34:22: maximum dimension allowed for projected problem : 32 11:34:22: Store results for 1 least stable Eigenvalues 11:34:22: Checking whether A x - omega^2 B x = 0 for EV 1: -4.81E-03 - 5.29E-10 i 11:34:22: X*(A-omega^2B)X = 2.48E-02 + 1.19E-02 i 11:34:22: error: 1.0127729749385715E+03, given: 1.3104261036323470E+02, estimate: 9.1902202228962019E-06 11:34:23: step_size = 1.6903712754623413E-05 11:34:23: E_pot = X*AX*step_size = -1.8249444573959570E+00 - 6.9666241470887725E-21 11:34:23: E_kin = X*BX*step_size = 3.7949522108799880E+02 - 1.3572640832222037E-19 11:34:23: X*AX/X*BX = -4.8088733559382077E-03 - 2.0077499879767631E-23 11:34:23: omega^2 = -4.8088744602323725E-03 - 5.2905885580072376E-10 11:34:23: 1 Eigenvalues were written in the file "PB3D_out_EV_R_1.txt" 11:34:23: basic statistics: 11:34:23: min: -4.81E-03 - 5.29E-10 i 11:34:23: max: -4.81E-03 - 5.29E-10 i 11:34:23: Finalize SLEPC ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- ./PB3D on a arch-linux2-c-opt named c04b03 with 8 processors, by weyenst Thu Mar 30 11:34:23 2017 Using Petsc Release Version 3.6.4, Apr, 12, 2016 Max Max/Min Avg Total Time (sec): 1.650e+02 1.00006 1.650e+02 Objects: 6.300e+01 1.00000 6.300e+01 Flops: 7.411e+10 1.00487 7.402e+10 5.921e+11 Flops/sec: 4.491e+08 1.00493 4.485e+08 3.588e+09 MPI Messages: 9.577e+03 1.14929 8.931e+03 7.145e+04 MPI Message Lengths: 4.495e+08 1.05554 4.870e+04 3.479e+09 MPI Reductions: 7.146e+03 1.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 1.6502e+02 100.0% 5.9214e+11 100.0% 7.145e+04 100.0% 4.870e+04 100.0% 7.145e+03 100.0% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. 
len: average message length (bytes) Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). %T - percent time in this phase %F - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage MatMult 2373 1.0 3.4775e+01 2.1 5.55e+10 1.0 3.3e+04 1.5e+03 0.0e+00 18 75 46 1 0 18 75 46 1 0 12737 MatSolve 2368 1.0 1.3440e+02 1.2 0.00e+00 0.0 3.8e+04 9.0e+04 2.4e+03 73 0 53 98 33 73 0 53 98 33 0 MatLUFactorSym 1 1.0 9.1249e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00 1 0 0 0 0 1 0 0 0 0 0 MatLUFactorNum 1 1.0 3.6064e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0 MatCopy 1 1.0 4.0790e-02 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatConvert 2 1.0 6.2410e-02 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyBegin 7 1.0 1.0136e-0112.2 0.00e+00 0.0 8.4e+01 2.3e+05 2.1e+01 0 0 0 1 0 0 0 0 1 0 0 MatAssemblyEnd 7 1.0 6.0353e-03 1.4 0.00e+00 0.0 5.6e+01 1.4e+01 1.2e+01 0 0 0 0 0 0 0 0 0 0 0 MatGetRow 15000 1.0 5.6869e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatView 2 1.0 3.4671e-03 2.5 0.00e+00 0.0 1.7e+02 2.1e+03 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAXPY 1 1.0 3.2246e-01 1.1 0.00e+00 0.0 2.8e+01 1.4e+01 1.1e+01 0 0 0 0 0 0 0 0 0 0 0 VecDot 3 1.0 2.9414e-02103.9 1.80e+05 1.0 0.0e+00 0.0e+00 3.0e+00 0 0 0 0 0 0 0 0 0 0 49 VecNorm 2 1.0 1.6545e-02180.2 1.20e+05 1.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 58 VecCopy 150 1.0 1.2825e-02 7.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 2373 1.0 8.1837e-02 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 1 1.0 1.5283e-04 2.5 6.00e+04 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3141 VecScatterBegin 7109 1.0 2.0114e+0141.2 0.00e+00 0.0 7.1e+04 4.9e+04 2.4e+03 3 0100 99 33 3 0100 99 33 0 VecScatterEnd 4741 1.0 6.6763e-01 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 EPSSetUp 1 1.0 4.5218e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.7e+01 3 0 0 0 0 3 0 0 0 0 0 EPSSolve 1 1.0 1.6407e+02 1.0 7.40e+10 1.0 7.1e+04 4.9e+04 7.1e+03 99100 99 99 99 99100 99 99 99 3603 STSetUp 1 1.0 4.5194e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 9.0e+00 3 0 0 0 0 3 0 0 0 0 0 STApply 2368 1.0 1.5129e+02 1.0 5.53e+10 1.0 7.1e+04 4.9e+04 2.4e+03 91 75 99 99 33 91 75 99 99 33 2922 STMatSolve 2368 1.0 1.3450e+02 1.2 0.00e+00 0.0 3.8e+04 9.0e+04 2.4e+03 73 0 53 98 33 73 0 53 98 33 0 BVCopy 149 1.0 1.3258e-02 6.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 BVMult 4859 1.0 4.9419e+00 3.8 1.14e+10 1.0 0.0e+00 0.0e+00 0.0e+00 2 15 0 0 0 2 15 0 0 0 18441 BVDot 4711 1.0 6.8612e+00 1.5 7.19e+09 1.0 0.0e+00 0.0e+00 4.7e+03 3 10 0 0 66 3 10 0 0 66 8380 BVOrthogonalize 2369 1.0 9.3547e+00 1.2 1.41e+10 1.0 0.0e+00 0.0e+00 4.7e+03 5 19 0 0 66 5 19 0 0 66 12051 BVScale 2369 1.0 2.4911e-02 1.3 7.11e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 22824 BVSetRandom 1 
1.0 1.6389e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 DSSolve 147 1.0 9.8380e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 DSVectors 149 1.0 2.5978e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 DSOther 147 1.0 2.1844e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSetUp 1 1.0 2.8610e-06 3.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 2368 1.0 1.3447e+02 1.2 0.00e+00 0.0 3.8e+04 9.0e+04 2.4e+03 73 0 53 98 33 73 0 53 98 33 0 PCSetUp 1 1.0 4.5193e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 7.0e+00 3 0 0 0 0 3 0 0 0 0 0 PCApply 2368 1.0 1.3440e+02 1.2 0.00e+00 0.0 3.8e+04 9.0e+04 2.4e+03 73 0 53 98 33 73 0 53 98 33 0 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. Reports information only for process 0. --- Event Stage 0: Main Stage Matrix 17 17 186274616 0 Vector 22 21 6044400 0 Vector Scatter 6 6 5968 0 Index Set 9 9 40848 0 EPS Solver 1 1 3620 0 PetscRandom 1 1 632 0 Spectral Transform 1 1 816 0 Viewer 1 0 0 0 Basis Vectors 1 1 18456 0 Region 1 1 640 0 Direct Solver 1 1 72104 0 Krylov Solver 1 1 1136 0 Preconditioner 1 1 984 0 ======================================================================================================================== Average time to get PetscTime(): 9.53674e-08 Average time for MPI_Barrier(): 5.00679e-06 Average time for zero size MPI_Send(): 1.2219e-06 #PETSc Option Table entries: -eps_monitor -eps_ncv 32 -eps_type krylovschur -eps_view -log_view -st_pc_factor_mat_solver_package mumps -st_pc_type lu #End of PETSc Option Table entries Compiled with FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 16 sizeof(PetscInt) 4 Configure options: --prefix=/home/ITER/weyenst/Compiled_MVAPICH2 --with-scalar-type=complex --with-shared-libraries=0 --download-mumps --download-metis --download-parmetis --download-scalapack --with-blas-lapack-dir=/home/ITER/weyenst/Compiled_MVAPICH2/lib --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx --with-fortran-kernels=1 --with-debugging=no ----------------------------------------- Libraries compiled on Thu Mar 30 09:25:32 2017 on hpc-app1.iter.org Machine characteristics: Linux-2.6.18-406.el5-x86_64-with-redhat-5.11-Final Using PETSc directory: /home/ITER/weyenst/Programs_MVAPICH2/petsc-3.6.4 Using PETSc arch: arch-linux2-c-opt ----------------------------------------- Using C compiler: mpicc -wd1572 -O3 ${COPTFLAGS} ${CFLAGS} Using Fortran compiler: mpif90 -O3 ${FOPTFLAGS} ${FFLAGS} ----------------------------------------- Using include paths: -I/home/ITER/weyenst/Programs_MVAPICH2/petsc-3.6.4/arch-linux2-c-opt/include -I/home/ITER/weyenst/Programs_MVAPICH2/petsc-3.6.4/include -I/home/ITER/weyenst/Programs_MVAPICH2/petsc-3.6.4/include -I/home/ITER/weyenst/Programs_MVAPICH2/petsc-3.6.4/arch-linux2-c-opt/include -I/home/ITER/weyenst/Compiled_MVAPICH2/include -I/shared/hpc/mpi/mvapich2-intel/include ----------------------------------------- Using C linker: mpicc Using Fortran linker: mpif90 Using libraries: -Wl,-rpath,/home/ITER/weyenst/Programs_MVAPICH2/petsc-3.6.4/arch-linux2-c-opt/lib -L/home/ITER/weyenst/Programs_MVAPICH2/petsc-3.6.4/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/home/ITER/weyenst/Compiled_MVAPICH2/lib -L/home/ITER/weyenst/Compiled_MVAPICH2/lib -lcmumps -ldmumps 
-lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -lparmetis -lmetis -lssl -lcrypto -lX11 -L/shared/hpc/mpi/mvapich2-intel/lib -L/shared/hpc/compiler/intel/composerxe-2011.2.137/compiler/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/ipp/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/mkl/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 -L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -lmpichf90 -lifport -lifcore -lm -lmpichcxx -ldl -L/shared/hpc/mpi/mvapich2-intel/lib -L/shared/hpc/mpi/mvapich2-intel/lib -lmpich -lopa -lpthread -libverbs -libumad -lrt -L/shared/hpc/mpi/mvapich2-intel/lib -L/shared/hpc/mpi/mvapich2-intel/lib -L/shared/hpc/compiler/intel/composerxe-2011.2.137/compiler/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/compiler/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/ipp/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/compiler/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/mkl/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/compiler/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/compiler/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/compiler/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/ipp/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/mkl/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 -L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -limf -lsvml -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lirc_s -L/shared/hpc/mpi/mvapich2-intel/lib -L/shared/hpc/mpi/mvapich2-intel/lib -L/shared/hpc/compiler/intel/composerxe-2011.2.137/compiler/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/compiler/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/ipp/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/compiler/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/mkl/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/compiler/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/compiler/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/compiler/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/ipp/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/mkl/lib/intel64 -L/shared/hpc/compiler/intel/composerxe-2011.2.137/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21 -L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -ldl ----------------------------------------- From toon.weyens at gmail.com Fri Mar 31 17:03:22 2017 From: toon.weyens at gmail.com (Toon Weyens) Date: Fri, 31 Mar 2017 22:03:22 +0000 Subject: [petsc-users] Slepc JD and GD converge to wrong eigenpair In-Reply-To: References: <65A0A5E7-399B-4D19-A967-73765A96DB98@dsic.upv.es> <2A5BFE40-C401-42CA-944A-9008E57B55EB@dsic.upv.es> Message-ID: Sorry, I forgot to add the download link for the matrix files: https://transfer.pcloud.com/download.html?code=5ZViHIZI96yPIODHYSZ7y1HZMloBfcyhAHunjQVMpWUJIykLt76k Thanks On Sat, Apr 1, 2017 at 12:01 AM Toon Weyens wrote: > Dear jose, > > I have saved the matrices in Matlab format and am sending them to you > using pCloud. 
If you want another format, please tell me. Please also note > that they are about 1.4GB each. > > I also attach a typical output of eps_view and log_view in output.txt, for > 8 processes. > > Thanks so much for helping me out! I think PETSc and SLEPc are amazing > inventions that have really saved me many months of work! > > Regards > > On Fri, Mar 31, 2017 at 5:12 PM Jose E. Roman wrote: > > In order to answer about GD I would need to know all the settings you are > using. Also if you could send me the matrix I could do some tests. > GD and JD are preconditioned eigensolvers, which need a reasonably good > preconditioner. But MUMPS is a direct solver, not a preconditioner, and > that is often counterproductive in this kind of method. > Jose > > > > El 31 mar 2017, a las 16:45, Toon Weyens > escribió: > > > > Dear both, > > > > I have recompiled slepc and petsc without debugging, as well as with the > recommended --with-fortran-kernels=1. In the attachment I show the scaling > for a typical "large" simulation with about 120 000 unknowns, using > Krylov-Schur. > > > > There are two sets of datapoints there, as I do two EPS solves in one > simulation. The second solve is faster as it results from a grid > refinement of the first solve, and takes the solution of the first solve as > a first, good guess. Note that there are two pages in the PDF and in the > second page I show the time × n_procs. > > > > As you can see, the scaling is better than before, especially up to 8 > processes (which means about 15,000 unknowns per process, which is, as I > recall, cited as a good minimum on the website). > > > > I am currently trying to run make streams NPMAX=8, but the cluster is > extraordinarily crowded today and it does not like my interactive jobs. I > will try to run them asap. > > > > The main issue now, however, is again the first issue: the Generalized > Davidson method does not converge to the physically correct negative > eigenvalue (it should be about -0.05 as Krylov-Schur gives me). Instead it > stays stuck at some small positive eigenvalue of about +0.0002. It looks as > if the solver really does not like passing the eigenvalue = 0 barrier, a > behavior I also see in smaller simulations, where the convergence is > greatly slowed down when crossing this. > > > > However, this time, for this big simulation, just increasing NCV does > not do the trick, at least not until NCV=2048. > > > > Also, I tried to use target magnitude without success either. > > > > I started implementing the capability to start with Krylov-Schur and > then switch to GD with EPSSetInitialSpace when a certain precision has been > reached, but then realized it might be a bit of overkill as the SLEPC > solution phase in my code is generally not more than 15% of the time. There > are probably other places where I can gain more than a few percent. > > > > However, if there is another trick that can make GD work, it would > certainly be appreciated, as in my experience it is really about 5 times > faster than Krylov-Schur! > > > > Thanks! > > > > Toon > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
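Following up on Jose's remark above that GD and JD are preconditioned eigensolvers and that a MUMPS LU factorization behind the ST is often counterproductive, a setup along the following lines swaps the direct solve for a genuine preconditioner. This is only a sketch, not code from this thread: the matrices A and B are assumed to be already assembled, and PCBJACOBI is a placeholder choice that would need tuning for the actual operator.

    #include <slepceps.h>

    /* Minimal sketch: generalized non-Hermitian eigenproblem solved with GD,
       using a cheap preconditioner instead of a MUMPS LU factorization.
       A and B are assumed to be assembled Mats (an assumption, not from the thread). */
    static PetscErrorCode SolveWithGD(Mat A, Mat B)
    {
      EPS            eps;
      ST             st;
      KSP            ksp;
      PC             pc;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = EPSCreate(PETSC_COMM_WORLD, &eps);CHKERRQ(ierr);
      ierr = EPSSetOperators(eps, A, B);CHKERRQ(ierr);
      ierr = EPSSetProblemType(eps, EPS_GNHEP);CHKERRQ(ierr);      /* as reported by -eps_view above */
      ierr = EPSSetType(eps, EPSGD);CHKERRQ(ierr);
      ierr = EPSSetWhichEigenpairs(eps, EPS_SMALLEST_REAL);CHKERRQ(ierr);
      ierr = EPSSetDimensions(eps, 1, 32, PETSC_DEFAULT);CHKERRQ(ierr);

      ierr = EPSGetST(eps, &st);CHKERRQ(ierr);
      ierr = STSetType(st, STPRECOND);CHKERRQ(ierr);               /* a preconditioner, not a direct solve */
      ierr = STGetKSP(st, &ksp);CHKERRQ(ierr);
      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCBJACOBI);CHKERRQ(ierr);               /* placeholder preconditioner choice */

      ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);                 /* allow -eps_... / -st_... overrides */
      ierr = EPSSolve(eps);CHKERRQ(ierr);
      ierr = EPSDestroy(&eps);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

Roughly the same configuration should be reachable from the command line with options such as -eps_type gd -st_type precond -st_pc_type bjacobi, in place of the -st_pc_type lu / -st_pc_factor_mat_solver_package mumps combination shown in the log above; whether this also cures the convergence to the wrong (positive) eigenvalue is a separate question that the thread leaves open.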