The Istart and Iend are the rows present on this particular processor,
which can be obtained from MatGetOwnershipRange().
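For example (a minimal sketch; A stands for whatever parallel matrix you
have created):

      call MatGetOwnershipRange(A,Istart,Iend,ierr)
!     this process owns global rows Istart through Iend-1 (0-based)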

  Matt

On 6/15/06, Evrim Dizemen <gudik@ae.metu.edu.tr> wrote:
Dear Randall,

I guess I began to understand the concepts, but there is still a missing
point: I do not know when and how we define Istart and Iend. I'll be
glad if you can send me the pre_op3d routine so I can see the algorithm,
which is a black box for me now.

Thanks a lot

EVRIM


Randall Mackie wrote:
> Hi Evrim,
>
> It's quite easy to modify your Fortran code to do what you want. I
> thought I had written it all out before, but I'll try again. There are
> many ways to do this, but I'll start with the easiest, at least if
> you're going to just modify your current sequential code.
>
> Let's say that your matrix has np global rows. Then
>
> call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,np,b,ierr)
> call VecDuplicate(b,xsol,ierr)
> call VecGetLocalSize(b,mloc,ierr)
> call VecGetOwnershipRange(b,Istart,Iend,ierr)
>
> do i=Istart+1,Iend
>    loc(i)=i-1
> end do
>
> These statements create parallel vectors for the solution (xsol) and
> the right hand side (b). The array loc is used to set values in the
> vectors later.
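(A sketch of how those loc indices might be used to fill the right-hand
side; rhs is a hypothetical local array holding the global RHS entries:)

      call VecSetValues(b,mloc,loc(Istart+1),rhs(Istart+1),
     .                  INSERT_VALUES,ierr)
      call VecAssemblyBegin(b,ierr)
      call VecAssemblyEnd(b,ierr)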
>
> Then
>
> ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> ! Create the linear solver context
> ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
>
> call KSPCreate(PETSC_COMM_WORLD,ksp,ierr)
>
> ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> ! Create the scatter context for getting results back to each node.
> ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
>
> call VecScatterCreateToAll(xsol,xToLocalAll,xseq,ierr)
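(A sketch of how that scatter might be used after the solve to gather
the full solution onto every process; the argument order shown, with the
scatter context last, is the PETSc 2.3-era Fortran calling sequence:)

      call VecScatterBegin(xsol,xseq,INSERT_VALUES,SCATTER_FORWARD,
     .                     xToLocalAll,ierr)
      call VecScatterEnd(xsol,xseq,INSERT_VALUES,SCATTER_FORWARD,
     .                   xToLocalAll,ierr)
!     xseq now holds the complete solution vector on every process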
>
> ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> ! Create the matrix that defines the linear system, Ax = b,
> ! for the EM problem.
> ! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
>
> call pre_op3d(l,m,nzn,Istart,Iend,ijkhx,ijkhy,ijkhz,d_nnz,o_nnz)
>
> call MatCreateMPIAIJ(PETSC_COMM_WORLD,mloc,mloc,np,np,
>    .                 PETSC_NULL_INTEGER,d_nnz,PETSC_NULL_INTEGER,
>    .                 o_nnz,A,ierr)
>
> call set_op3d(A,l,m,nzn,period,resist,x,y,z,Istart,Iend,
>    .          ijkhx,ijkhy,ijkhz)
>
>
> The subroutine pre_op3d is an important routine: it figures out how to
> pre-allocate space for your parallel matrix. This will be the
> difference between near-instantaneous assembly and the 2.5 hours that
> you experienced. Basically, it just computes the global column numbers
> and figures out if they are between Istart and Iend. I can send you
> my subroutine if you'd like.
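(Since pre_op3d itself wasn't posted, here is a minimal sketch of what
such a preallocation routine typically computes; the column-index
calculation is a placeholder for whatever the EM stencil requires:)

      do i=1,mloc
         d_nnz(i)=0
         o_nnz(i)=0
      end do

      do row=Istart,Iend-1            ! locally owned rows, 0-based
!        (compute nc and the global column indices col(1:nc) for this
!         row, exactly as in the assembly loop of set_op3d)
         do jc=1,nc
            if (col(jc).ge.Istart .and. col(jc).lt.Iend) then
               d_nnz(row-Istart+1)=d_nnz(row-Istart+1)+1   ! diagonal block
            else
               o_nnz(row-Istart+1)=o_nnz(row-Istart+1)+1   ! off-diagonal block
            end if
         end do
      end do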
>
> The subroutine set_op3d.F actually assembles the parallel matrix and
> goes like this:
>
> jj=0
>
> do i=1,l
>   do k=2,n
>     do j=2,m
>
>       jj=jj+1
>       row = jj-1
>
>       IF (jj >= Istart+1 .and. jj <= Iend) THEN
>
>         compute elements...
>
>         call MatSetValues(A,i1,row,ic,col,v,INSERT_VALUES,ierr)
>
>       END IF
>
>     end do
>   end do
> end do
>
> At the end,
>
> call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)
> call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)
>
> Again, because you compute the pre-allocation, this is
> near-instantaneous, even for large models (larger than you're using).
>
> Once you do that, you're golden:
>
> call KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN,ierr)
>
> etc.
>
> Randy M.
> San Francisco
>
>
> Evrim Dizemen wrote:
>> Hi all,
>>
>> Again thanks for your comments. I guess I cannot define the problem
>> correctly. I have a sequential Fortran code giving me a global matrix
>> of 200000x200000. The code writes the matrix to a binary file in
>> little-endian mode (I can write only the nonzero terms or the entire
>> matrix). I tried to change the binary mode to big-endian and read the
>> global matrix with a C program as in the example
>> src/mat/examples/tests/ex31.c. However, the program reads the binary
>> file wrong and gives the following error message, although the true
>> value of no-nonzeros in the binary file is 6 (for the 3x3 test case):
>>
>> reading matrix in binary from matrix.dat ...
>> ------------------------------------------------------------------------
>> Petsc Release Version 2.3.1, Patch 13, Wed May 10 11:08:35 CDT 2006
>> BK revision: balay@asterix.mcs.anl.gov|ChangeSet|20060510160640|13832
>> See docs/changes/index.html for recent updates.
>> See docs/faq.html for hints about trouble shooting.
>> See docs/index.html for manual pages.
>> ------------------------------------------------------------------------
>> ./ex31 on a linux named akbaba.ae.metu.edu.tr by evrim Thu Jun 15
>> 09:26:36 2006
>> Libraries linked from /home/evrim/petsc-2.3.1-p13/lib/linux
>> Configure run at Tue May 30 10:26:48 2006
>> Configure options --with-scalar-type=complex --with-shared=0
>> ------------------------------------------------------------------------
>> [0]PETSC ERROR: MatLoad_SeqAIJ() line 3055 in
>> src/mat/impls/aij/seq/aij.c
>> [0]PETSC ERROR: Read from file failed!
>> [0]PETSC ERROR: Inconsistant matrix data in file. no-nonzeros =
>> 100663296, sum-row-lengths = 234300
>> !
>> [0]PETSC ERROR: MatLoad() line 149 in src/mat/utils/matio.c
>> [0]PETSC ERROR: main() line 37 in src/mat/examples/tests/ex31.c
>>
>> I want to send the global matrix to PETSc at once, either through a
>> written input file (as I'm working on now) or by passing the matrix
>> array from my Fortran code, and then partition it and solve
>> iteratively. After the solution I also want to get the solution
>> vector back to my Fortran code. As I said in the previous mails, I
>> tried to pass the matrix array to PETSc using MatSetValues, reading
>> one value at a time in a do loop, but it took about 2.5 hours to read
>> the global matrix. I also tried to read a row at a time but could not
>> figure out an algorithm for this. Hence I would prefer not to create
>> the matrix again in PETSc with MatSetValues.
>>
>> As an aside, I figured out that the binary files written by Fortran
>> and C are completely different from each other (Fortran adds the
>> record length to the beginning and end of each record), so I wrote a
>> C interface code that takes the matrix array from the Fortran code
>> and writes it to a binary file in C format. With this code I avoided
>> the extra information in the binary file, but I still have the
>> endianness problem.
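(Two notes on the error above: the bogus no-nonzeros value 100663296 is
0x06000000, which is the integer 6 with its bytes swapped, so it does
confirm the endianness diagnosis. And rather than a C interface layer,
the file can be written directly from Fortran with stream I/O, which
adds no record markers. The sketch below assumes the PETSc binary matrix
layout described in the MatLoad manual page (header cookie, dimensions,
nonzero count, then row lengths, 0-based column indices, and values) and
the nonstandard but widely supported convert= open specifier; all names
and the unit number are illustrative:)

      integer, parameter :: cookie = 1211216  ! PETSc matrix file cookie
      integer n, nnz
      integer rowlen(3), colidx(6)            ! 3x3 test case, 6 nonzeros
      double complex vals(6)                  ! complex: PETSc was built
                                              ! with --with-scalar-type=complex
!     ... fill n, nnz, rowlen, colidx (0-based!), and vals ...
      open(10,file='matrix.dat',access='stream',form='unformatted',
     .     convert='big_endian',status='replace')
      write(10) cookie, n, n, nnz
      write(10) rowlen
      write(10) colidx
      write(10) vals
      close(10)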
>>
>> I know that I asked so much, but since I'm a rookie in parallel
>> programming, the C language, and library use, I really need your
>> comments on my problem. Sorry for this long mail and thanks a lot for
>> your kind effort in guiding me.
>>
>> Thanks
>>
>> EVRIM
>>
>

--
"Failure has a thousand explanations. Success doesn't need one" -- Sir Alec Guinness