[petsc-users] How to dump and load a petsc object in python?
Florian Lindner
mailinglists at xgm.de
Tue Nov 8 03:09:16 CST 2016
Hey,
On 08.11.2016 at 09:57, Ji Zhang wrote:
> Dear all,
>
> I'm a petsc and petsc4py user. I want to save some petsc matrices and vectors after calculation, and reload them later.
> However, I am having trouble doing so. I think one of the following two approaches might do what I want.
>
> 1) Using pickle (a Python module), I cannot load the data after dumping it, and the error message is not very
> informative. I don't know why. The related code and error messages are presented below.
I doubt pickle is able to serialize the underlying data structures of the Python PETSc objects. After all, they are
only thin interfaces to the C objects.
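(If you really want to go through pickle, a rough, untested sketch of a workaround is to pickle the plain numpy data
instead of the PETSc wrapper. The names vec1 and 'vec.pickle' are placeholders, and this only covers a sequential Vec:)

import pickle

from petsc4py import PETSc

# Example vector standing in for the object to be saved.
vec1 = PETSc.Vec().createSeq(5)
vec1.set(1.0)

# Dump the raw numpy values, not the PETSc wrapper itself.
with open('vec.pickle', 'wb') as f:
    pickle.dump(vec1.getArray().copy(), f)

# Later: rebuild a Vec around the unpickled numpy array.
with open('vec.pickle', 'rb') as f:
    arr = pickle.load(f)
vec2 = PETSc.Vec().createWithArray(arr)

For matrices, and in parallel, PETSc's own binary viewer (below) is the more natural route.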
> 2) Assuming I have a matrix named mat1 and save it to a binary file using the petsc4py.PETSc.Viewer.createBinary()
> method, how can I reload the data into another matrix mat2?
I load a matrix that I saved in binary format from C code like this:

from petsc4py import PETSc

C = PETSc.Mat()
C.create()
C.setType(PETSc.Mat.Type.SBAIJ)  # the matrix was written as SBAIJ, so set the type before loading
C.load(PETSc.Viewer.BINARY().createBinary("/data/scratch/lindnefn/aste/B/matC_MeshA"))
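
For the pure petsc4py round trip asked about in 2), something like the following should work. This is only a sketch:
the small demo matrix and the file name 'mat1.dat' are chosen just for illustration.

from petsc4py import PETSc

# Small demo matrix standing in for the user's mat1.
mat1 = PETSc.Mat().createAIJ([4, 4], nnz=1)
for i in range(4):
    mat1.setValue(i, i, 1.0)
mat1.assemble()

# Write mat1 to a binary file (mode 'w' creates/truncates the file).
viewer = PETSc.Viewer().createBinary('mat1.dat', 'w')
mat1.view(viewer)
viewer.destroy()

# Read it back into a fresh matrix mat2.
viewer = PETSc.Viewer().createBinary('mat1.dat', 'r')
mat2 = PETSc.Mat().load(viewer)
viewer.destroy()

Vectors follow the same pattern, e.g. vec2 = PETSc.Vec().load(viewer).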
Best,
Florian
>
> The code is:
>
> import pickle
>
>
> filename = 'debug'
> with open(filename + '.pickle', 'rb') as input:
>     unpick = pickle.Unpickler(input)
>     M = unpick.load()
>     temp = M.getVecLeft()
>
> pass
>
>
> and the error message is:
> [0]PETSC ERROR: ------------------------------------------------------------------------
> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
> [0]PETSC ERROR: to get more information on the crash.
>
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode 59.
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
>
>
> Best,
> Regards,
> Zhang Ji, PhD student
> Beijing Computational Science Research Center
> Zhongguancun Software Park II, No. 10 Dongbeiwang West Road, Haidian District, Beijing 100193, China