[petsc-users] Galerkin projection using petsc4py
Thanasis Boutsikakis
thanasis.boutsikakis at corintis.com
Thu Oct 5 07:02:16 CDT 2023
Thanks Pierre! So I tried this and got a segmentation fault. Is this supposed to work right off the bat, or am I missing something?
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and https://petsc.org/release/faq/
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is causing the crash.
Abort(59) on node 0 (rank 0 in comm 0): application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
"""Experimenting with PETSc mat-mat multiplication"""
import time
import numpy as np
from colorama import Fore
from firedrake import COMM_SELF, COMM_WORLD
from firedrake.petsc import PETSc
from mpi4py import MPI
from numpy.testing import assert_array_almost_equal
from utilities import (
    Print,
    create_petsc_matrix,
    print_matrix_partitioning,
)
nproc = COMM_WORLD.size
rank = COMM_WORLD.rank
# --------------------------------------------
# EXP: Galerkin projection of an mpi PETSc matrix A with an mpi PETSc matrix Phi
# A' = Phi.T * A * Phi
# [k x k] <- [k x m] x [m x m] x [m x k]
# --------------------------------------------
m, k = 11, 7
# Generate the random numpy matrices
np.random.seed(0) # sets the seed to 0
A_np = np.random.randint(low=0, high=6, size=(m, m))
Phi_np = np.random.randint(low=0, high=6, size=(m, k))
# --------------------------------------------
# TEST: Galerkin projection of numpy matrices A_np and Phi_np
# --------------------------------------------
Aprime_np = Phi_np.T @ A_np @ Phi_np
Print(f"MATRIX Aprime_np [{Aprime_np.shape[0]}x{Aprime_np.shape[1]}]")
Print(f"{Aprime_np}")
# Create A as an mpi matrix distributed on each process
A = create_petsc_matrix(A_np, sparse=False)
# Create Phi as an mpi matrix distributed on each process
Phi = create_petsc_matrix(Phi_np, sparse=False)
# Create an empty PETSc matrix object to store the result of the PtAP operation.
# This will hold the result A' = Phi.T * A * Phi after the computation.
A_prime = create_petsc_matrix(np.zeros((k, k)), sparse=False)
# Perform the PtAP (Phi Transpose times A times Phi) operation.
# In mathematical terms, this operation is A' = Phi.T * A * Phi.
# A_prime will store the result of the operation.
Phi.ptap(A, A_prime)
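
For reference, a minimal sketch of the call as I understand the MatPtAP semantics (hedged: MatPtAP computes P^T * A * P with A as the operator, so the petsc4py call should go through A with Phi in the P slot, and leaving the result argument out should let PETSc create a matrix of the right type and layout):

# Sketch, not the reproducer above: A' = Phi^T * A * Phi via MatPtAP.
# Assumes A ([m x m]) and Phi ([m x k]) are assembled Mats with compatible layouts.
A_prime = A.ptap(Phi)  # PETSc allocates and returns the [k x k] result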
> On 5 Oct 2023, at 13:22, Pierre Jolivet <pierre at joliv.et> wrote:
>
> How about using ptap, which will use MatPtAP?
> It will be more efficient (and it will help you bypass the issue).
>
> Thanks,
> Pierre
>
>> On 5 Oct 2023, at 1:18 PM, Thanasis Boutsikakis <thanasis.boutsikakis at corintis.com> wrote:
>>
>> Sorry, forgot the function create_petsc_matrix():
>>
>> def create_petsc_matrix(input_array, sparse=True):
>>     """Create a PETSc matrix from an input_array.
>>
>>     Args:
>>         input_array (np array): Input array.
>>         sparse (bool, optional): Toggle for sparse or dense. Defaults to True.
>>
>>     Returns:
>>         PETSc mat: PETSc matrix
>>     """
>>     # Ensure input_array is 2-dimensional
>>     assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
>>     global_rows, global_cols = input_array.shape
>>
>>     # ((local_rows, global_rows), (local_cols, global_cols))
>>     size = ((None, global_rows), (global_cols, global_cols))
>>
>>     # Create a sparse or dense matrix based on the 'sparse' argument
>>     if sparse:
>>         matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
>>     else:
>>         matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)
>>     matrix.setUp()
>>
>>     local_rows_start, local_rows_end = matrix.getOwnershipRange()
>>
>>     # Each process fills only the rows it owns
>>     for i in range(local_rows_start, local_rows_end):
>>         matrix.setValues(i, range(global_cols), input_array[i, :], addv=False)
>>
>>     # Assemble the matrix to compute its final parallel structure
>>     matrix.assemblyBegin()
>>     matrix.assemblyEnd()
>>
>>     return matrix
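>>
>> As an aside, my understanding of the size tuple petsc4py expects is ((local_rows, global_rows), (local_cols, global_cols)), with None standing in for PETSC_DECIDE. A variant that also lets PETSc pick the column distribution (a sketch, untested here) would be:
>>
>> # Let PETSc decide both the row and the column layout
>> size = ((None, global_rows), (None, global_cols))
>> matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)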
>>
>>> On 5 Oct 2023, at 13:09, Thanasis Boutsikakis <thanasis.boutsikakis at corintis.com> wrote:
>>>
>>> Hi everyone,
>>>
>>> I am trying a Galerkin projection (see MFE below) and I cannot get Phi.transposeMatMult(A, A1) to work. The error is
>>>
>>> Phi.transposeMatMult(A, A1)
>>> File "petsc4py/PETSc/Mat.pyx", line 1514, in petsc4py.PETSc.Mat.transposeMatMult
>>> petsc4py.PETSc.Error: error code 56
>>> [0] MatTransposeMatMult() at /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:10135
>>> [0] MatProduct_Private() at /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9989
>>> [0] No support for this operation for this object type
>>> [0] Call MatProductCreate() first
>>>
>>> Do you know if these are exposed to petsc4py, or is there perhaps another way? I cannot get the MFE to work (neither sequentially nor in parallel).
>>>
>>> """Experimenting with PETSc mat-mat multiplication"""
>>>
>>> import time
>>>
>>> import numpy as np
>>> from colorama import Fore
>>> from firedrake import COMM_SELF, COMM_WORLD
>>> from firedrake.petsc import PETSc
>>> from mpi4py import MPI
>>> from numpy.testing import assert_array_almost_equal
>>>
>>> from utilities import (
>>>     Print,
>>>     create_petsc_matrix,
>>> )
>>>
>>> nproc = COMM_WORLD.size
>>> rank = COMM_WORLD.rank
>>>
>>> # --------------------------------------------
>>> # EXP: Galerkin projection of an mpi PETSc matrix A with an mpi PETSc matrix Phi
>>> # A' = Phi.T * A * Phi
>>> # [k x k] <- [k x m] x [m x m] x [m x k]
>>> # --------------------------------------------
>>>
>>> m, k = 11, 7
>>> # Generate the random numpy matrices
>>> np.random.seed(0) # sets the seed to 0
>>> A_np = np.random.randint(low=0, high=6, size=(m, m))
>>> Phi_np = np.random.randint(low=0, high=6, size=(m, k))
>>>
>>> # Create A as an mpi matrix distributed on each process
>>> A = create_petsc_matrix(A_np)
>>>
>>> # Create Phi as an mpi matrix distributed on each process
>>> Phi = create_petsc_matrix(Phi_np)
>>>
>>> A1 = create_petsc_matrix(np.zeros((k, m)))
>>>
>>> # A1 will hold the result of Phi^T * A
>>> Phi.transposeMatMult(A, A1)
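>>>
>>> A variant I have not ruled out (a sketch, assuming petsc4py creates the product itself when no result matrix is passed, which would avoid the "Call MatProductCreate() first" path):
>>>
>>> # Let PETSc create the [k x m] product instead of passing a preallocated A1
>>> A1 = Phi.transposeMatMult(A)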
>>>
>>
>