[petsc-dev] Segmentation faults in MatMatMult & MatTransposeMatMult

Pierre Jolivet pierre.jolivet at enseeiht.fr
Tue Jan 15 02:50:08 CST 2019


OK, I was wrong about MATAIJ, as Jed already pointed out.
What about BAIJ or Dense matrices?
What about VecCreateMPIWithArray, which seems to explicitly call VecCreate_MPI_Private, which in turn explicitly sets the type to VECMPI (https://www.mcs.anl.gov/petsc/petsc-current/src/vec/vec/impls/mpi/pbvec.c.html#line522)? Does that mean I cannot do a MatMult with a MATAIJ on a communicator of size 1?
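For concreteness, the pattern I have in mind is roughly the following (sizes, values, and the diagonal matrix are placeholders, error checking trimmed):

```c
#include <petsc.h>

int main(int argc, char **argv)
{
  Mat         A;
  Vec         x, y;
  PetscScalar xarr[2] = {1.0, 2.0}, yarr[2];

  PetscInitialize(&argc, &argv, NULL, NULL);
  /* AIJ matrix on a communicator of size 1: resolves to SeqAIJ */
  MatCreateAIJ(PETSC_COMM_SELF, 2, 2, 2, 2, 1, NULL, 0, NULL, &A);
  MatSetValue(A, 0, 0, 4.0, INSERT_VALUES);
  MatSetValue(A, 1, 1, 3.0, INSERT_VALUES);
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
  /* VecCreateMPIWithArray forces VECMPI even on a size-1 communicator */
  VecCreateMPIWithArray(PETSC_COMM_SELF, 1, 2, 2, xarr, &x);
  VecCreateMPIWithArray(PETSC_COMM_SELF, 1, 2, 2, yarr, &y);
  /* the question: is this MatMult (SeqAIJ matrix, VECMPI vectors) supported? */
  MatMult(A, x, y);
  VecDestroy(&x); VecDestroy(&y); MatDestroy(&A);
  return PetscFinalize();
}
```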

Thanks,
Pierre  
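For what it is worth, the type-agnostic pattern Dave and Hong describe below would look roughly like this (matrix sizes and nonzero counts are placeholders):

```c
#include <petsc.h>

/* Sketch of the type-agnostic creation pattern: with MATAIJ, MatCreate
 * picks SeqAIJ or MPIAIJ internally from the communicator size, and both
 * preallocation calls are safe to issue; the one whose type does not
 * match the actual matrix type is a no-op. */
PetscErrorCode CreatePreallocatedAIJ(MPI_Comm comm, PetscInt n, PetscInt N, Mat *A)
{
  PetscErrorCode ierr;

  ierr = MatCreate(comm, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, n, n, N, N);CHKERRQ(ierr);
  ierr = MatSetType(*A, MATAIJ);CHKERRQ(ierr);                           /* not MATMPIAIJ */
  ierr = MatSeqAIJSetPreallocation(*A, 5, NULL);CHKERRQ(ierr);           /* acts on one process */
  ierr = MatMPIAIJSetPreallocation(*A, 5, NULL, 2, NULL);CHKERRQ(ierr);  /* acts otherwise */
  return 0;
}
```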

> On 15 Jan 2019, at 9:40 AM, Dave May <dave.mayhem23 at gmail.com> wrote:
> 
> 
> 
> On Tue, 15 Jan 2019 at 05:18, Pierre Jolivet via petsc-dev <petsc-dev at mcs.anl.gov> wrote:
> Cf. the end of my sentence: "(I know, I could switch to SeqAIJ_SeqDense, but that is not an option I have right now)”
> All my Mat are of type MATMPIX. Switching to MATX here as you suggested would mean that I need to add a bunch of if(comm_size == 1) MatSeqXSetPreallocation else MatMPIXSetPreallocation in the rest of my code, which is something I would rather avoid.
> 
> If you do as Hong suggests and use MATAIJ then the switch for comm_size for Seq or MPI is done internally to MatCreate and is not required in the user code. Additionally, in your preallocation routine, you can call safely both (without your comm_size if statement)
> MatSeqAIJSetPreallocation()
> and
> MatMPIAIJSetPreallocation()
> If the matrix type matches that expected by the API, then it gets executed. Otherwise nothing happens.
> 
> This is done all over the place to enable the matrix type to be a run-time choice.
> 
> For example, see here
> https://www.mcs.anl.gov/petsc/petsc-current/src/dm/impls/da/fdda.c.html#DMCreateMatrix_DA_3d_MPIAIJ
> and look at lines 1511 and 1512. 
> 
> Thanks,
>   Dave
> 
> Thanks,
> Pierre
> 
>> On 14 Jan 2019, at 10:30 PM, Zhang, Hong <hzhang at mcs.anl.gov> wrote:
>> 
>> Replace 
>> ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
>> with
>> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
>> 
>> Replace 
>> ierr = MatSetType(B, MATMPIDENSE);CHKERRQ(ierr);
>> with
>> ierr = MatSetType(B, MATDENSE);CHKERRQ(ierr);
>> 
>> Then add
>> MatSeqAIJSetPreallocation()
>> MatSeqDenseSetPreallocation()
>> 
>> Hong
>> 
>> On Mon, Jan 14, 2019 at 2:51 PM Pierre Jolivet via petsc-dev <petsc-dev at mcs.anl.gov> wrote:
>> Hello,
>> Is there any chance to get MatMatMult_MPIAIJ_MPIDense and MatTransposeMatMult_MPIAIJ_MPIDense fixed so that the attached program could run _with a single_ process? (I know, I could switch to SeqAIJ_SeqDense, but that is not an option I have right now)
>> 
>> Thanks in advance,
>> Pierre
>> 
> 
