[petsc-users] parallel computing error

Pierre Jolivet pierre.jolivet at lip6.fr
Fri May 5 07:56:10 CDT 2023



> On 5 May 2023, at 2:00 PM, Seung Lee Kwon / Student / Dept. of Aerospace Engineering <ksl7912 at snu.ac.kr> wrote:
> 
> Dear Pierre Jolivet
> 
> Thank you for your explanation.
> 
> I will try to use a converting matrix.
> 
> I know it's really inefficient, but I need the inverse matrix inv(A) itself for my research.
> 
> If it is difficult to get inv(A) with parallel computing, can I run the part related to MatMatSolve on a single core?

Yes.
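
For example, a minimal sketch (untested) of keeping that part on a single core: create everything on PETSC_COMM_SELF inside a rank guard, so only rank 0 builds, factors, and solves, mirroring your code:

    PetscMPIInt rank;
    PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
    if (rank == 0) { /* everything below lives on PETSC_COMM_SELF, i.e. one core */
      Mat           As, Es, Xs, F;
      IS            isr, isc;
      MatFactorInfo info;
      PetscInt      n = 15;
      PetscCall(MatCreateSeqDense(PETSC_COMM_SELF, n, n, NULL, &As));
      for (PetscInt i = 0; i < n; i++)
        for (PetscInt j = 0; j < n; j++)
          PetscCall(MatSetValue(As, i, j, i == j ? 2.0 : 1.0, INSERT_VALUES));
      PetscCall(MatAssemblyBegin(As, MAT_FINAL_ASSEMBLY));
      PetscCall(MatAssemblyEnd(As, MAT_FINAL_ASSEMBLY));
      PetscCall(MatDuplicate(As, MAT_DO_NOT_COPY_VALUES, &Es)); /* identity RHS */
      PetscCall(MatShift(Es, 1.0));
      PetscCall(MatDuplicate(As, MAT_DO_NOT_COPY_VALUES, &Xs));
      PetscCall(MatGetFactor(As, MATSOLVERPETSC, MAT_FACTOR_LU, &F)); /* sequential LU is supported */
      PetscCall(MatGetOrdering(As, MATORDERINGNATURAL, &isr, &isc));
      PetscCall(MatFactorInfoInitialize(&info));
      PetscCall(MatLUFactorSymbolic(F, As, isr, isc, &info));
      PetscCall(MatLUFactorNumeric(F, As, &info));
      PetscCall(MatMatSolve(F, Es, Xs)); /* Xs = inv(As) */
      PetscCall(MatView(Xs, PETSC_VIEWER_STDOUT_SELF));
      PetscCall(ISDestroy(&isr));
      PetscCall(ISDestroy(&isc));
      PetscCall(MatDestroy(&F));
      PetscCall(MatDestroy(&Xs));
      PetscCall(MatDestroy(&Es));
      PetscCall(MatDestroy(&As));
    }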

Thanks,
Pierre

> Best,
> Seung Lee Kwon
> 
> On Fri, May 5, 2023 at 8:35 PM Pierre Jolivet <pierre.jolivet at lip6.fr <mailto:pierre.jolivet at lip6.fr>> wrote:
>> 
>> 
>>> On 5 May 2023, at 1:25 PM, Seung Lee Kwon / Student / Dept. of Aerospace Engineering <ksl7912 at snu.ac.kr <mailto:ksl7912 at snu.ac.kr>> wrote:
>>> 
>>> Dear Matthew Knepley
>>> 
>>> However, I've already installed ScaLAPACK.
>>> cd $PETSC_DIR
>>> ./configure --download-mpich --with-debugging=0 COPTFLAGS='-O3 -march=native -mtune=native' CXXOPTFLAGS='-O3 -march=native -mtune=native' FOPTFLAGS='-O3 -march=native -mtune=native' --download-mumps --download-scalapack --download-parmetis --download-metis --download-parmetis --download-hpddm --download-slepc
>>> 
>>> Is there some way to use ScaLAPACK?
>> 
>> You need to convert your MatDense to MatSCALAPACK before the call to MatMatSolve().
>> These libraries (ScaLAPACK, but also Elemental) have severe limitations with respect to the matrix distribution.
>> Depending on what you are doing, you may be better off using KSPMatSolve() and computing only an approximation of the solution with a cheap preconditioner (I don’t recall you telling us why you need to do such an operation even though we told you it was not practical — or maybe I’m confusing this with another thread).
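>> 
>> For the conversion route, something along these lines (a rough sketch, untested; A and E are the MATDENSE matrices from your code):
>> 
>>     Mat A_sc, B_sc, X_sc, F;
>>     MatFactorInfo info;
>>     PetscCall(MatConvert(A, MATSCALAPACK, MAT_INITIAL_MATRIX, &A_sc));
>>     PetscCall(MatConvert(E, MATSCALAPACK, MAT_INITIAL_MATRIX, &B_sc));
>>     PetscCall(MatDuplicate(B_sc, MAT_DO_NOT_COPY_VALUES, &X_sc));
>>     PetscCall(MatGetFactor(A_sc, MATSOLVERSCALAPACK, MAT_FACTOR_LU, &F));
>>     PetscCall(MatFactorInfoInitialize(&info));
>>     PetscCall(MatLUFactorSymbolic(F, A_sc, NULL, NULL, &info));
>>     PetscCall(MatLUFactorNumeric(F, A_sc, &info));
>>     PetscCall(MatMatSolve(F, B_sc, X_sc)); /* X_sc = inv(A), in MATSCALAPACK format */
>> 
>> And the KSPMatSolve() route would look roughly like this (again a sketch; it needs petscksp.h, and the quality of the approximation depends on the preconditioner you pick):
>> 
>>     KSP ksp;
>>     Mat X;
>>     PetscCall(MatDuplicate(E, MAT_DO_NOT_COPY_VALUES, &X));
>>     PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
>>     PetscCall(KSPSetOperators(ksp, A, A));
>>     PetscCall(KSPSetFromOptions(ksp)); /* e.g. -ksp_type gmres -pc_type jacobi */
>>     PetscCall(KSPMatSolve(ksp, E, X)); /* columns of X approximate inv(A) */
>>     PetscCall(KSPDestroy(&ksp));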
>> 
>> Thanks,
>> Pierre
>> 
>>> Or, can I run the part related to MatMatSolve on a single core?
>>> 
>>> On Fri, May 5, 2023 at 6:21 PM Matthew Knepley <knepley at gmail.com <mailto:knepley at gmail.com>> wrote:
>>>> On Fri, May 5, 2023 at 3:49 AM Seung Lee Kwon / Student / Dept. of Aerospace Engineering <ksl7912 at snu.ac.kr <mailto:ksl7912 at snu.ac.kr>> wrote:
>>>>> Dear Barry Smith
>>>>> 
>>>>> Thanks to you, I now understand the difference between MATAIJ and MATDENSE.
>>>>> 
>>>>> However, I still have some problems.
>>>>> 
>>>>> There is no problem when I run with a single core, but a MatGetFactor error occurs when using multiple cores.
>>>>> 
>>>>> Could you give me some advice?
>>>>> 
>>>>> The error message is
>>>>> 
>>>>> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
>>>>> [0]PETSC ERROR: See https://petsc.org/release/overview/linear_solve_table/ for possible LU and Cholesky solvers
>>>>> [0]PETSC ERROR: MatSolverType petsc does not support matrix type mpidense
>>>> 
>>>> PETSc uses 3rd party packages for parallel dense factorization. You would need to reconfigure with either ScaLAPACK
>>>> or Elemental.
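>>>> 
>>>> For example (a sketch, and an assumption that you want to keep your current options), adding Elemental would mean rerunning configure with --download-elemental appended:
>>>> 
>>>>     ./configure --download-mpich --download-mumps --download-scalapack --download-parmetis --download-metis --download-hpddm --download-slepc --download-elemental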
>>>> 
>>>>   Thanks,
>>>> 
>>>>      Matt
>>>>  
>>>>> [0]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
>>>>> [0]PETSC ERROR: Petsc Release Version 3.18.5, unknown 
>>>>> [0]PETSC ERROR: ./app on a arch-linux-c-opt named ubuntu by ksl Fri May  5 00:35:23 2023
>>>>> [0]PETSC ERROR: Configure options --download-mpich --with-debugging=0 COPTFLAGS="-O3 -march=native -mtune=native" CXXOPTFLAGS="-O3 -march=native -mtune=native" FOPTFLAGS="-O3 -march=native -mtune=native" --download-mumps --download-scalapack --download-parmetis --download-metis --download-parmetis --download-hpddm --download-slepc
>>>>> [0]PETSC ERROR: #1 MatGetFactor() at /home/ksl/petsc/src/mat/interface/matrix.c:4757
>>>>> [0]PETSC ERROR: #2 main() at /home/ksl/Downloads/coding_test/coding/a1.c:66
>>>>> [0]PETSC ERROR: No PETSc Option Table entries
>>>>> [0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov----------
>>>>> application called MPI_Abort(MPI_COMM_SELF, 92) - process 0
>>>>> 
>>>>> My code is below:
>>>>> 
>>>>> #include <petscmat.h>
>>>>> 
>>>>> int main(int argc, char** args)
>>>>> {
>>>>>     Mat         A, E, A_temp, A_fac;
>>>>>     PetscMPIInt size;
>>>>>     PetscInt    n = 15;
>>>>>     PetscCall(PetscInitialize(&argc, &args, NULL, NULL));
>>>>>     PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &size));
>>>>> 
>>>>>     PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
>>>>>     PetscCall(MatSetType(A,MATDENSE));
>>>>>     PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
>>>>>     PetscCall(MatSetFromOptions(A));
>>>>>     PetscCall(MatSetUp(A));
>>>>>     // Insert values
>>>>>     PetscScalar val;
>>>>>     for (PetscInt i = 0; i < n; i++) {
>>>>>         for (PetscInt j = 0; j < n; j++) {
>>>>>             if (i == j){
>>>>>                 val = 2.0;
>>>>>             }
>>>>>             else{
>>>>>                 val = 1.0;
>>>>>             }
>>>>>             PetscCall(MatSetValue(A, i, j, val, INSERT_VALUES));
>>>>>         }
>>>>>     }
>>>>>     PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
>>>>>     PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
>>>>> 
>>>>>     // Make Identity matrix
>>>>>     PetscCall(MatCreate(PETSC_COMM_WORLD, &E));
>>>>>     PetscCall(MatSetType(E,MATDENSE));
>>>>>     PetscCall(MatSetSizes(E, PETSC_DECIDE, PETSC_DECIDE, n, n));
>>>>>     PetscCall(MatSetFromOptions(E));
>>>>>     PetscCall(MatSetUp(E));
>>>>>     PetscCall(MatShift(E,1.0));
>>>>>     PetscCall(MatAssemblyBegin(E, MAT_FINAL_ASSEMBLY));
>>>>>     PetscCall(MatAssemblyEnd(E, MAT_FINAL_ASSEMBLY));
>>>>>     
>>>>>     PetscCall(MatDuplicate(A, MAT_DO_NOT_COPY_VALUES, &A_temp));
>>>>>     /* This is the call that fails on multiple ranks: the built-in 'petsc' solver has no LU for MATMPIDENSE */
>>>>>     PetscCall(MatGetFactor(A, MATSOLVERPETSC, MAT_FACTOR_LU, &A_fac));
>>>>> 
>>>>>     IS isr, isc;
>>>>>     MatFactorInfo info;
>>>>>     PetscCall(MatFactorInfoInitialize(&info));
>>>>>     PetscCall(MatGetOrdering(A, MATORDERINGNATURAL, &isr, &isc));
>>>>>     PetscCall(MatLUFactorSymbolic(A_fac, A, isr, isc, &info));
>>>>>     PetscCall(MatLUFactorNumeric(A_fac, A, &info));
>>>>>     PetscCall(MatMatSolve(A_fac, E, A_temp));
>>>>>     
>>>>>     PetscCall(MatView(A_temp, PETSC_VIEWER_STDOUT_WORLD));
>>>>>     PetscCall(ISDestroy(&isr));
>>>>>     PetscCall(ISDestroy(&isc));
>>>>>     PetscCall(MatDestroy(&A));
>>>>>     PetscCall(MatDestroy(&A_temp));
>>>>>     PetscCall(MatDestroy(&A_fac));
>>>>>     PetscCall(MatDestroy(&E));
>>>>>     PetscCall(PetscFinalize());
>>>>>     return 0;
>>>>> }
>>>>> 
>>>>> Best regards
>>>>> Seung Lee Kwon
>>>>> 
>>>>> On Thu, May 4, 2023 at 10:19 PM Barry Smith <bsmith at petsc.dev <mailto:bsmith at petsc.dev>> wrote:
>>>>>> 
>>>>>>   The code in ex125.c contains
>>>>>> 
>>>>>>   PetscCall(MatCreate(PETSC_COMM_WORLD, &C));
>>>>>>   PetscCall(MatSetOptionsPrefix(C, "rhs_"));
>>>>>>   PetscCall(MatSetSizes(C, m, PETSC_DECIDE, PETSC_DECIDE, nrhs));
>>>>>>   PetscCall(MatSetType(C, MATDENSE));
>>>>>>   PetscCall(MatSetFromOptions(C));
>>>>>>   PetscCall(MatSetUp(C));
>>>>>> 
>>>>>> This dense parallel matrix is suitable for passing to MatMatSolve() as the right-hand side matrix. Note it is created with PETSC_COMM_WORLD and its type is set to be MATDENSE.
>>>>>> 
>>>>>>   You may need to make a sample code by stripping all the excess code out of ex125.c, so that it just creates a MATAIJ and a MATDENSE and solves with MatMatSolve(), to determine why your code does not work.
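>>>>>> 
>>>>>>   A stripped-down skeleton could look like the following (a sketch, untested; it assumes A is your assembled parallel MATAIJ system matrix, C is the MATDENSE right-hand-side matrix created as above, and PETSc was configured with MUMPS):
>>>>>> 
>>>>>>     Mat X, F;
>>>>>>     MatFactorInfo info;
>>>>>>     PetscCall(MatDuplicate(C, MAT_DO_NOT_COPY_VALUES, &X));
>>>>>>     PetscCall(MatGetFactor(A, MATSOLVERMUMPS, MAT_FACTOR_LU, &F));
>>>>>>     PetscCall(MatFactorInfoInitialize(&info));
>>>>>>     PetscCall(MatLUFactorSymbolic(F, A, NULL, NULL, &info));
>>>>>>     PetscCall(MatLUFactorNumeric(F, A, &info));
>>>>>>     PetscCall(MatMatSolve(F, C, X)); /* X receives the solution columns */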
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> On May 4, 2023, at 3:20 AM, Seung Lee Kwon / Student / Dept. of Aerospace Engineering <ksl7912 at snu.ac.kr <mailto:ksl7912 at snu.ac.kr>> wrote:
>>>>>>> 
>>>>>>> Dear Barry Smith
>>>>>>> 
>>>>>>> Thank you for your reply.
>>>>>>> 
>>>>>>> I've already installed MUMPS.
>>>>>>> 
>>>>>>> And I checked the example you mentioned (ex125.c), but I don't understand why the RHS matrix becomes a SeqDense matrix.
>>>>>>> 
>>>>>>> Could you explain in more detail?
>>>>>>> 
>>>>>>> Best regards
>>>>>>> Seung Lee Kwon
>>>>>>> 
>>>>>>> On Thu, May 4, 2023 at 12:08 PM Barry Smith <bsmith at petsc.dev <mailto:bsmith at petsc.dev>> wrote:
>>>>>>>> 
>>>>>>>>   You can configure with MUMPS: ./configure --download-mumps --download-scalapack --download-ptscotch --download-metis --download-parmetis
>>>>>>>> 
>>>>>>>>   And then use parallel MatMatSolve() as in src/mat/tests/ex125.c, with MUMPS as the solver.
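>>>>>>>> 
>>>>>>>>   In code, selecting MUMPS for the factorization looks like this (a sketch; F is the factor matrix):
>>>>>>>> 
>>>>>>>>     Mat F;
>>>>>>>>     PetscCall(MatGetFactor(A, MATSOLVERMUMPS, MAT_FACTOR_LU, &F));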
>>>>>>>> 
>>>>>>>>   Barry
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> On May 3, 2023, at 10:29 PM, Seung Lee Kwon / Student / Dept. of Aerospace Engineering <ksl7912 at snu.ac.kr <mailto:ksl7912 at snu.ac.kr>> wrote:
>>>>>>>>> 
>>>>>>>>> Dear developers
>>>>>>>>> 
>>>>>>>>> Thank you for your explanation.
>>>>>>>>> 
>>>>>>>>> But I should use MatCreateSeqDense because I want to use MatMatSolve, whose B matrix must be a SeqDense matrix.
>>>>>>>>> 
>>>>>>>>> Using MatMatSolve is an inevitable part of my code.
>>>>>>>>> 
>>>>>>>>> Could you give me a comment to avoid this error?
>>>>>>>>> 
>>>>>>>>> Best,
>>>>>>>>> 
>>>>>>>>> Seung Lee Kwon
>>>>>>>>> 
>>>>>>>>> On Wed, May 3, 2023 at 7:30 PM Matthew Knepley <knepley at gmail.com <mailto:knepley at gmail.com>> wrote:
>>>>>>>>>> On Wed, May 3, 2023 at 6:05 AM Seung Lee Kwon / Student / Dept. of Aerospace Engineering <ksl7912 at snu.ac.kr <mailto:ksl7912 at snu.ac.kr>> wrote:
>>>>>>>>>>> Dear developers
>>>>>>>>>>> 
>>>>>>>>>>> I'm trying to use parallel computing and I ran the command 'mpirun -np 4 ./app'
>>>>>>>>>>> 
>>>>>>>>>>> In this case, there are two problems.
>>>>>>>>>>> 
>>>>>>>>>>> First, I encountered this error message:
>>>>>>>>>>> ///
>>>>>>>>>>> [0]PETSC ERROR: [1]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
>>>>>>>>>>> [1]PETSC ERROR: Invalid argument
>>>>>>>>>>> [1]PETSC ERROR: Comm must be of size 1
>>>>>>>>>>> ///
>>>>>>>>>>> The code at the error position is 
>>>>>>>>>>> PetscCall(MatCreateSeqDense(PETSC_COMM_SELF, nns, ns, NULL, &Kns));
>>>>>>>>>> 
>>>>>>>>>> 1) "Seq" means sequential, that is "not parallel".
>>>>>>>>>> 
>>>>>>>>>> 2) This line should still be fine since PETSC_COMM_SELF is a serial communicator
>>>>>>>>>> 
>>>>>>>>>> 3) You should be checking the error code for each call, maybe using the CHKERRQ() macro
>>>>>>>>>> 
>>>>>>>>>> 4) Please always send the entire error message, not a snippet
>>>>>>>>>> 
>>>>>>>>>>   Thanks,
>>>>>>>>>> 
>>>>>>>>>>      Matt
>>>>>>>>>>  
>>>>>>>>>>> Could "MatCreateSeqDense" not be used in parallel computing?
>>>>>>>>>>> 
>>>>>>>>>>> Second, the same error message is repeated as many times as the number of cores:
>>>>>>>>>>> if I run with -np 4, the error message is repeated 4 times.
>>>>>>>>>>> Could you give me some advice about this?
>>>>>>>>>>> 
>>>>>>>>>>> Best,
>>>>>>>>>>> Seung Lee Kwon
>>>>>>>>>>> 
>>>>>>>>>>> -- 
>>>>>>>>>>> Seung Lee Kwon, Ph.D. Candidate
>>>>>>>>>>> Aerospace Structures and Materials Laboratory
>>>>>>>>>>> Department of Mechanical and Aerospace Engineering
>>>>>>>>>>> Seoul National University
>>>>>>>>>>> Building 300 Rm 503, Gwanak-ro 1, Gwanak-gu, Seoul, South Korea, 08826
>>>>>>>>>>> E-mail : ksl7912 at snu.ac.kr <mailto:ksl7912 at snu.ac.kr>
>>>>>>>>>>> Office : +82-2-880-7389
>>>>>>>>>>> C. P : +82-10-4695-1062
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> -- 
>>>>>>>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>>>>>>>>> -- Norbert Wiener
>>>>>>>>>> 
>>>>>>>>>> https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> -- 
>>>>>>>>> Seung Lee Kwon, Ph.D. Candidate
>>>>>>>>> Aerospace Structures and Materials Laboratory
>>>>>>>>> Department of Mechanical and Aerospace Engineering
>>>>>>>>> Seoul National University
>>>>>>>>> Building 300 Rm 503, Gwanak-ro 1, Gwanak-gu, Seoul, South Korea, 08826
>>>>>>>>> E-mail : ksl7912 at snu.ac.kr <mailto:ksl7912 at snu.ac.kr>
>>>>>>>>> Office : +82-2-880-7389
>>>>>>>>> C. P : +82-10-4695-1062
>>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> -- 
>>>>>>> Seung Lee Kwon, Ph.D. Candidate
>>>>>>> Aerospace Structures and Materials Laboratory
>>>>>>> Department of Mechanical and Aerospace Engineering
>>>>>>> Seoul National University
>>>>>>> Building 300 Rm 503, Gwanak-ro 1, Gwanak-gu, Seoul, South Korea, 08826
>>>>>>> E-mail : ksl7912 at snu.ac.kr <mailto:ksl7912 at snu.ac.kr>
>>>>>>> Office : +82-2-880-7389
>>>>>>> C. P : +82-10-4695-1062
>>>>>> 
>>>>> 
>>>>> 
>>>>> -- 
>>>>> Seung Lee Kwon, Ph.D. Candidate
>>>>> Aerospace Structures and Materials Laboratory
>>>>> Department of Mechanical and Aerospace Engineering
>>>>> Seoul National University
>>>>> Building 300 Rm 503, Gwanak-ro 1, Gwanak-gu, Seoul, South Korea, 08826
>>>>> E-mail : ksl7912 at snu.ac.kr <mailto:ksl7912 at snu.ac.kr>
>>>>> Office : +82-2-880-7389
>>>>> C. P : +82-10-4695-1062
>>>> 
>>>> 
>>>> -- 
>>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>>> -- Norbert Wiener
>>>> 
>>>> https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>
>>> 
>>> 
>>> -- 
>>> Seung Lee Kwon, Ph.D. Candidate
>>> Aerospace Structures and Materials Laboratory
>>> Department of Mechanical and Aerospace Engineering
>>> Seoul National University
>>> Building 300 Rm 503, Gwanak-ro 1, Gwanak-gu, Seoul, South Korea, 08826
>>> E-mail : ksl7912 at snu.ac.kr <mailto:ksl7912 at snu.ac.kr>
>>> Office : +82-2-880-7389
>>> C. P : +82-10-4695-1062
>> 
> 
> 
> -- 
> Seung Lee Kwon, Ph.D. Candidate
> Aerospace Structures and Materials Laboratory
> Department of Mechanical and Aerospace Engineering
> Seoul National University
> Building 300 Rm 503, Gwanak-ro 1, Gwanak-gu, Seoul, South Korea, 08826
> E-mail : ksl7912 at snu.ac.kr <mailto:ksl7912 at snu.ac.kr>
> Office : +82-2-880-7389
> C. P : +82-10-4695-1062
