[petsc-users] About MatView

Barry Smith bsmith at mcs.anl.gov
Sat Jun 20 12:38:19 CDT 2015


   Eric,

    
> On Jun 20, 2015, at 1:42 AM, Longyin Cui <cuilongyin at gmail.com> wrote:
> 
> OMG you are real!!!
> OK, my whole error message looks like this:
> PETSc Error ... exiting
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 13 with PID 1816 on
> node cnode174.local exiting improperly. There are two reasons this could occur:
> 
> 1. this process did not call "init" before exiting, but others in
> the job did. This can cause a job to hang indefinitely while it waits
> for all processes to call "init". By rule, if one process calls "init",
> then ALL processes must call "init" prior to termination.
> 
> 2. this process called "init", but exited without calling "finalize".
> By rule, all processes that call "init" MUST call "finalize" prior to
> exiting or it will be considered an "abnormal termination"
> 
> This may have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).

   This crash doesn't seem to have anything in particular to do with the code below. Do the PETSc examples run in parallel? Does the code you ran have a PetscInitialize() in it? What about running on two processors; does that work?
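
   (For reference, a minimal PETSc main program looks roughly like the sketch below; error checking is abbreviated and the body is a placeholder.)

      #include <petscsys.h>

      int main(int argc, char **argv)
      {
        PetscErrorCode ierr;

        /* PetscInitialize() must be called before any other PETSc routine;
           it also calls MPI_Init() if MPI has not been initialized yet. */
        ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

        /* ... create, assemble, and view matrices here ... */

        /* Every process must call PetscFinalize() before exiting; otherwise
           mpirun reports an "abnormal termination" like the one above. */
        ierr = PetscFinalize();
        return ierr;
      }
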

> 
> You are right, I did use PETSC_COMM_SELF, and when I used PETSC_COMM_WORLD alone I could get the entire matrix printed. But that is the whole matrix in one file. The reason I used PetscViewerASCIIOpen(PETSC_COMM_SELF, "mat.output", &viewer); and MatView(matrix, viewer); was that the documentation says "Each processor can instead write its own independent output by specifying the communicator PETSC_COMM_SELF".

   Yikes, this is completely untrue and has been for decades. We have no way of saving the matrix in its parts; you cannot use a PETSC_COMM_SELF viewer with a parallel matrix. Sorry about the wrong information in the documentation; I have fixed it.

   Why can't you just save the matrix in one file and then compare it? We don't provide a way to save objects one part per process because we think it is a bad model for parallel computing since the result depends on the number of processors you are using.
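
   For example, something like the sketch below (the filename "mat.output" is just an example) has all processes cooperate to write the one parallel matrix into a single file:

      PetscViewer viewer;

      /* Collective on PETSC_COMM_WORLD: every rank that owns part of the
         matrix participates, and a single file is produced. */
      PetscViewerASCIIOpen(PETSC_COMM_WORLD, "mat.output", &viewer);
      MatView(impOP, viewer);
      PetscViewerDestroy(&viewer);

   For large matrices the binary viewer (PetscViewerBinaryOpen()) is much faster, and the resulting file can be read back in with MatLoad() for comparison.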

   Barry

> 
> Also, I tried this as well, which failed with the same error message:
>          PetscMPIInt    my_rank;
>          MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
>          string str = KILLME(my_rank); // KILLME is an int-to-string function...
>          const char * c = str.c_str();
>          PetscViewer viewer;
>          PetscViewerASCIIOpen(PETSC_COMM_WORLD, c , &viewer);
>          MatView(impOP,viewer); //impOP is the huge matrix.
>          PetscViewerDestroy(&viewer);
> 
> I was trying to generate 16 files recording the local part of the matrix held by each processor, so I can compare them with the big matrix... so, what do you think?
> 
> Thank you very muuuch.
> 
> Longyin Cui (or you know me as Eric);
> Student from C.S. division;
> Cell: 7407047169;
> return 0;
> 
> 
> On Sat, Jun 20, 2015 at 1:34 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> 
>   You need to cut and paste and send the entire error message: "not working" makes it very difficult for us to know what has gone wrong.
> Based on the code fragment you sent, I guess one of your problems is that the viewer communicator is not the same as the matrix communicator. Since the matrix is on 16 processors (I am guessing PETSC_COMM_WORLD), the viewer communicator must also be the same (PETSC_COMM_WORLD).
> The simplest code you can write is
> 
> > PetscViewerASCIIOpen(PETSC_COMM_WORLD,"stdout",&viewer);
> > MatView(impOP,viewer);
> 
>   but you can get a similar effect with the command line option -mat_view and not write any code at all (the less code you have to write the better).
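> 
> For example (a sketch; the executable name and process count are placeholders):
> 
>    mpiexec -n 16 ./yourprogram -mat_view
> 
> which prints each matrix to standard output when its assembly completes, with no viewer code in your program.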
> 
>   Barry
> 
> 
> > On Jun 19, 2015, at 10:42 PM, Longyin Cui <cuilongyin at gmail.com> wrote:
> >
> > Hi dear whoever reads this:
> >
> > I have a quick question:
> > After matrix assembly, suppose I have matrix A. Assuming I used 16 processors, if I want each processor to print out its local contents of A, how do I proceed? (I simply want to know how the matrix is stored, from generation to communication to solving, so I can display it at every stage and get a better understanding.)
> >
> > I read the examples, and I tried things like the code below and sooooo many different variants from the examples, but it still is not working.
> >        PetscViewer viewer;
> >        PetscMPIInt my_rank;
> >        MPI_Comm_rank(PETSC_COMM_WORLD,&my_rank);
> >        PetscPrintf(MPI_COMM_SELF,"[%d] rank\n",my_rank);
> >        PetscViewerASCIIOpen(MPI_COMM_SELF,NULL,&viewer);
> >        PetscViewerPushFormat(viewer,PETSC_VIEWER_ASCII_INFO);
> >        MatView(impOP,viewer);
> >
> > Plea......se give me some hints
> >
> > Thank you so very much!
> >
> >
> > Longyin Cui (or you know me as Eric);
> > Student from C.S. division;
> > Cell: 7407047169;
> > return 0;
> >
> 
> 


