Roland,

  The

    inembed, istride, idist,
    data_out, onembed, ostride, odist

variables all need to be set rank by rank (since each rank provides a matrix with its own number of local rows, which may differ from the number on other ranks), and not computed from the global matrix_size or from size (which I assume comes from MPI_Comm_size()?).

  Use MatGetLocalSize(mat, &numrows, NULL); to get the number of rows on a process, and use that value to set the needed FFTW parameters.

  The reason the current code works when "my matrix size is an integer multiple of x" is that all ranks then have the same numrows, so your formula happens to produce the correct local size; otherwise it will not give the correct local number of rows.

  Barry
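A minimal sketch of that change (untested), reusing the names from the code quoted below (C, data_in, data_out, matrix_size, fplan) and assuming a complex-scalar PETSc build; the MatDenseGetLDA() call is an extra safeguard in case the leading dimension of the local array is larger than the local row count:

/* Sketch only: plan one 1-D FFT per locally owned row of the parallel dense
   matrix C. The local block is stored column-major, so consecutive elements
   of a row are lda apart, while consecutive rows start one element apart. */
PetscInt numrows, lda;
MatGetLocalSize(C, &numrows, NULL);          /* locally owned rows on this rank     */
MatDenseGetLDA(C, &lda);                     /* leading dimension (>= numrows)      */

int       FFT_rank    = 1;
const int FFTW_size[] = {(int)matrix_size};  /* length of each row transform        */
int       howmany     = (int)numrows;        /* one transform per local row         */
int       istride     = (int)lda;            /* step between elements within a row  */
int       ostride     = (int)lda;            /* assumes F has the same layout as C  */
int       idist       = 1;                   /* step between the starts of two rows */
int       odist       = 1;
/* For rank-1 transforms inembed/onembed do not affect the addressing, so NULL is fine. */
fplan = fftw_plan_many_dft(FFT_rank, FFTW_size, howmany,
                           data_in,  NULL, istride, idist,
                           data_out, NULL, ostride, odist,
                           FFTW_FORWARD, FFTW_ESTIMATE);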
On Dec 8, 2020, at 8:42 AM, Roland Richter <roland.richter@ntnu.no> wrote:

I replaced the FFT code with the following:

int FFT_rank = 1;
const int FFTW_size[] = {matrix_size};
int howmany = last_owned_row_index - first_owned_row_index;
int idist = 1;
int odist = 1;
int istride = matrix_size / size;
int ostride = matrix_size / size;
const int inembed[] = {matrix_size}, onembed[] = {matrix_size};
fplan = fftw_plan_many_dft(FFT_rank, FFTW_size,
                           howmany,
                           data_in, inembed, istride, idist,
                           data_out, onembed, ostride, odist,
                           FFTW_FORWARD, FFTW_ESTIMATE);
bplan = fftw_plan_many_dft(FFT_rank, FFTW_size,
                           howmany,
                           data_out, inembed, istride, idist,
                           data_out2, onembed, ostride, odist,
                           FFTW_BACKWARD, FFTW_ESTIMATE);

Now I get the expected results also for mpirun -n x with x > 1, but only if my matrix size is an integer multiple of x; otherwise some parts of the resulting matrix are zeroed out. Therefore, I assume I made a mistake here with inembed and onembed, but I am not sure how they influence the result. Do you have any suggestions?

Thanks,

Roland

On 08.12.20 at 14:55, Matthew Knepley wrote:
<blockquote type="cite" cite="mid:CAMYG4GnWsRAxknDTE9pL9TJLMFe10CHbosmP5iF1g8nt3Z4vDw@mail.gmail.com" class="">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" class="">
<div dir="ltr" class="">
<div dir="ltr" class="">On Tue, Dec 8, 2020 at 8:40 AM Roland Richter
<<a href="mailto:roland.richter@ntnu.no" moz-do-not-send="true" class="">roland.richter@ntnu.no</a>> wrote:<br class="">
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div class=""><p class="">Dear all,</p><p class="">I tried the following code:</p><p class=""><i class="">int main(int argc, char **args) {</i><i class=""><br class="">
</i><i class=""> Mat C, F;</i><i class=""><br class="">
</i><i class=""> Vec x, y, z;</i><i class=""><br class="">
</i><i class=""> PetscViewer viewer;</i><i class=""><br class="">
</i><i class=""> PetscMPIInt rank, size;</i><i class=""><br class="">
</i><i class=""> PetscInitialize(&argc, &args, (char*)
0, help);</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> MPI_Comm_size(PETSC_COMM_WORLD, &size);</i><i class=""><br class="">
</i><i class=""> MPI_Comm_rank(PETSC_COMM_WORLD, &rank);</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> PetscPrintf(PETSC_COMM_WORLD,"Number of
processors = %d, rank = %d\n", size, rank);</i><i class=""><br class="">
</i><i class=""> // std::cout << "From rank "
<< rank << '\n';</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> //MatCreate(PETSC_COMM_WORLD, &C);</i><i class=""><br class="">
</i><i class=""> PetscViewerCreate(PETSC_COMM_WORLD,
&viewer);</i><i class=""><br class="">
</i><i class=""> PetscViewerSetType(viewer, PETSCVIEWERASCII);</i><i class=""><br class="">
</i><i class=""> arma::cx_mat local_mat, local_zero_mat;</i><i class=""><br class="">
</i><i class=""> const size_t matrix_size = 5;</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> MatCreateDense(PETSC_COMM_WORLD,
PETSC_DECIDE, PETSC_DECIDE, matrix_size, matrix_size,
NULL, &C);</i><i class=""><br class="">
</i><i class=""> MatCreateDense(PETSC_COMM_WORLD,
PETSC_DECIDE, PETSC_DECIDE, matrix_size, matrix_size,
NULL, &F);</i><i class=""><br class="">
</i><i class=""> if(rank == 0) {</i><i class=""><br class="">
</i><i class=""> arma::Col<int> indices =
arma::linspace<arma::Col<int>>(0,
matrix_size - 1, matrix_size);</i><i class=""><br class="">
</i><i class=""> //if(rank == 0) {</i><i class=""><br class="">
</i><i class=""> local_mat =
arma::randu<arma::cx_mat>(matrix_size,
matrix_size);</i><i class=""><br class="">
</i><i class=""> local_zero_mat =
arma::zeros<arma::cx_mat>(matrix_size,
matrix_size);</i><i class=""><br class="">
</i><i class=""> arma::cx_mat tmp_mat = <a href="http://local_mat.st/" target="_blank" moz-do-not-send="true" class="">local_mat.st</a>();</i><i class=""><br class="">
</i><i class=""> MatSetValues(C, matrix_size,
indices.memptr(), matrix_size, indices.memptr(),
tmp_mat.memptr(), INSERT_VALUES);</i><i class=""><br class="">
</i><i class=""> MatSetValues(F, matrix_size,
indices.memptr(), matrix_size, indices.memptr(),
local_zero_mat.memptr(), INSERT_VALUES);</i><i class=""><br class="">
</i><i class=""> }</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> MatAssemblyBegin(C, MAT_FINAL_ASSEMBLY);</i><i class=""><br class="">
</i><i class=""> MatAssemblyEnd(C, MAT_FINAL_ASSEMBLY);</i><i class=""><br class="">
</i><i class=""> MatAssemblyBegin(F, MAT_FINAL_ASSEMBLY);</i><i class=""><br class="">
</i><i class=""> MatAssemblyEnd(F, MAT_FINAL_ASSEMBLY);</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> //FFT test</i><i class=""><br class="">
</i><i class=""> Mat FFT_A;</i><i class=""><br class="">
</i><i class=""> Vec input, output;</i><i class=""><br class="">
</i><i class=""> int first_owned_row_index = 0,
last_owned_row_index = 0;</i><i class=""><br class="">
</i><i class=""> const int FFT_length[] = {matrix_size};</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> MatCreateFFT(PETSC_COMM_WORLD, 1, FFT_length,
MATFFTW, &FFT_A);</i><i class=""><br class="">
</i><i class=""> MatCreateVecsFFTW(FFT_A, &x, &y,
&z);</i><i class=""><br class="">
</i><i class=""> VecCreate(PETSC_COMM_WORLD, &input);</i><i class=""><br class="">
</i><i class=""> VecSetFromOptions(input);</i><i class=""><br class="">
</i><i class=""> VecSetSizes(input, PETSC_DECIDE,
matrix_size);</i><i class=""><br class="">
</i><i class=""> VecCreate(PETSC_COMM_WORLD, &output);</i><i class=""><br class="">
</i><i class=""> VecSetFromOptions(output);</i><i class=""><br class="">
</i><i class=""> VecSetSizes(output, PETSC_DECIDE,
matrix_size);</i><i class=""><br class="">
</i><i class=""> MatGetOwnershipRange(C,
&first_owned_row_index,
&last_owned_row_index);</i><i class=""><br class="">
</i><i class=""> std::cout << "Rank " << rank
<< " owns row " << first_owned_row_index
<< " to row " << last_owned_row_index
<< '\n';</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> //Testing FFT</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class="">
/*---------------------------------------------------------*/</i><i class=""><br class="">
</i><i class=""> fftw_plan fplan,bplan;</i><i class=""><br class="">
</i><i class=""> fftw_complex *data_in,*data_out,*data_out2;</i><i class=""><br class="">
</i><i class=""> ptrdiff_t alloc_local, local_ni,
local_i_start, local_n0,local_0_start;</i><i class=""><br class="">
</i><i class=""> PetscRandom rdm;</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> // if (!rank)</i><i class=""><br class="">
</i><i class=""> // printf("Use FFTW without PETSc-FFTW
interface\n");</i><i class=""><br class="">
</i><i class=""> fftw_mpi_init();</i><i class=""><br class="">
</i><i class=""> int N = matrix_size * matrix_size;</i><i class=""><br class="">
</i><i class=""> int N0 = matrix_size;</i><i class=""><br class="">
</i><i class=""> int N1 = matrix_size;</i><i class=""><br class="">
</i><i class=""> const ptrdiff_t n_data[] = {N0, 1};</i><i class=""><br class="">
</i><i class=""> //alloc_local =
fftw_mpi_local_size_2d(N0,N1,PETSC_COMM_WORLD,&local_n0,&local_0_start);</i><i class=""><br class="">
</i><i class=""> alloc_local = fftw_mpi_local_size_many(1,
n_data,</i><i class=""><br class="">
</i><i class="">
matrix_size,</i><i class=""><br class="">
</i><i class="">
FFTW_MPI_DEFAULT_BLOCK,</i><i class=""><br class="">
</i><i class="">
PETSC_COMM_WORLD,</i><i class=""><br class="">
</i><i class="">
&local_n0,</i><i class=""><br class="">
</i><i class="">
&local_0_start);</i><i class=""><br class="">
</i><i class=""> //data_in =
(fftw_complex*)fftw_malloc(sizeof(fftw_complex)*alloc_local);</i><i class=""><br class="">
</i><i class=""> PetscScalar *C_ptr, *F_ptr;</i><i class=""><br class="">
</i><i class=""> MatDenseGetArray(C, &C_ptr);</i><i class=""><br class="">
</i><i class=""> MatDenseGetArray(F, &F_ptr);</i><i class=""><br class="">
</i><i class=""> data_in =
reinterpret_cast<fftw_complex*>(C_ptr);</i><i class=""><br class="">
</i><i class=""> data_out =
reinterpret_cast<fftw_complex*>(F_ptr);</i><i class=""><br class="">
</i><i class=""> data_out2 =
(fftw_complex*)fftw_malloc(sizeof(fftw_complex)*alloc_local);</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class="">
VecCreateMPIWithArray(PETSC_COMM_WORLD,1,(PetscInt)local_n0
* N1,(PetscInt)N,(const PetscScalar*)data_in,&x);</i><i class=""><br class="">
</i><i class=""> PetscObjectSetName((PetscObject) x, "Real
Space vector");</i><i class=""><br class="">
</i><i class="">
VecCreateMPIWithArray(PETSC_COMM_WORLD,1,(PetscInt)local_n0
* N1,(PetscInt)N,(const PetscScalar*)data_out,&y);</i><i class=""><br class="">
</i><i class=""> PetscObjectSetName((PetscObject) y,
"Frequency space vector");</i><i class=""><br class="">
</i><i class="">
VecCreateMPIWithArray(PETSC_COMM_WORLD,1,(PetscInt)local_n0
* N1,(PetscInt)N,(const
PetscScalar*)data_out2,&z);</i><i class=""><br class="">
</i><i class=""> PetscObjectSetName((PetscObject) z,
"Reconstructed vector");</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> int FFT_rank = 1;</i><i class=""><br class="">
</i><i class=""> const ptrdiff_t FFTW_size[] = {matrix_size};</i><i class=""><br class="">
</i><i class=""> int howmany = last_owned_row_index -
first_owned_row_index;</i><i class=""><br class="">
</i><i class=""> //std::cout << "Rank " << rank
<< " processes " << howmany << "
rows\n";</i><i class=""><br class="">
</i><i class=""> int idist = matrix_size;//1;</i><i class=""><br class="">
</i><i class=""> int odist = matrix_size;//1;</i><i class=""><br class="">
</i><i class=""> int istride = 1;//matrix_size;</i><i class=""><br class="">
</i><i class=""> int ostride = 1;//matrix_size;</i><i class=""><br class="">
</i><i class=""> const ptrdiff_t *inembed = FFTW_size,
*onembed = FFTW_size;</i><i class=""><br class="">
</i><i class=""> fplan = fftw_mpi_plan_many_dft(FFT_rank,
FFTW_size,</i><i class=""><br class="">
</i><i class=""> howmany,</i><i class=""><br class="">
</i><i class="">
FFTW_MPI_DEFAULT_BLOCK, FFTW_MPI_DEFAULT_BLOCK,</i><i class=""><br class="">
</i><i class=""> data_in,
data_out,</i><i class=""><br class="">
</i><i class="">
PETSC_COMM_WORLD,</i><i class=""><br class="">
</i><i class=""> FFTW_FORWARD,
FFTW_ESTIMATE);</i><i class=""><br class="">
</i><i class=""> bplan = fftw_mpi_plan_many_dft(FFT_rank,
FFTW_size,</i><i class=""><br class="">
</i><i class=""> howmany,</i><i class=""><br class="">
</i><i class="">
FFTW_MPI_DEFAULT_BLOCK, FFTW_MPI_DEFAULT_BLOCK,</i><i class=""><br class="">
</i><i class=""> data_out,
data_out2,</i><i class=""><br class="">
</i><i class="">
PETSC_COMM_WORLD,</i><i class=""><br class="">
</i><i class=""> FFTW_BACKWARD,
FFTW_ESTIMATE);</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> if (false)
{VecView(x,PETSC_VIEWER_STDOUT_WORLD);}</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> fftw_execute(fplan);</i><i class=""><br class="">
</i><i class=""> if (false)
{VecView(y,PETSC_VIEWER_STDOUT_WORLD);}</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> fftw_execute(bplan);</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> double a = 1.0 / matrix_size;</i><i class=""><br class="">
</i><i class=""> double enorm = 0;</i><i class=""><br class="">
</i><i class=""> VecScale(z,a);</i><i class=""><br class="">
</i><i class=""> if (false) {VecView(z,
PETSC_VIEWER_STDOUT_WORLD);}</i><i class=""><br class="">
</i><i class=""> VecAXPY(z,-1.0,x);</i><i class=""><br class="">
</i><i class=""> VecNorm(z,NORM_1,&enorm);</i><i class=""><br class="">
</i><i class=""> if (enorm > 1.e-11) {</i><i class=""><br class="">
</i><i class=""> PetscPrintf(PETSC_COMM_SELF," Error norm
of |x - z| %g\n",(double)enorm);</i><i class=""><br class="">
</i><i class=""> }</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> /* Free spaces */</i><i class=""><br class="">
</i><i class=""> fftw_destroy_plan(fplan);</i><i class=""><br class="">
</i><i class=""> fftw_destroy_plan(bplan);</i><i class=""><br class="">
</i><i class=""> fftw_free(data_out2);</i><i class=""><br class="">
</i><i class=""><br class="">
</i><i class=""> //Generate test matrix for comparison</i><i class=""><br class="">
</i><i class=""> arma::cx_mat fft_test_mat = local_mat;</i><i class=""><br class="">
</i><i class=""> fft_test_mat.each_row([&](arma::cx_rowvec
&a){</i><i class=""><br class="">
</i><i class=""> a = arma::fft(a);</i><i class=""><br class="">
</i><i class=""> });</i><i class=""><br class="">
</i><i class=""> std::cout <<
"-----------------------------------------------------\n";</i><i class=""><br class="">
</i><i class=""> std::cout << "Input matrix:\n" <<
local_mat << '\n';</i><i class=""><br class="">
</i><i class=""> MatView(C, viewer);</i><i class=""><br class="">
</i><i class=""> std::cout <<
"-----------------------------------------------------\n";</i><i class=""><br class="">
</i><i class=""> std::cout << "Expected output
matrix:\n" << fft_test_mat << '\n';</i><i class=""><br class="">
</i><i class=""> MatView(F, viewer);</i><i class=""><br class="">
</i><i class=""> std::cout <<
"-----------------------------------------------------\n";</i><i class=""><br class="">
</i><i class=""> MatDestroy(&FFT_A);</i><i class=""><br class="">
</i><i class=""> VecDestroy(&input);</i><i class=""><br class="">
</i><i class=""> VecDestroy(&output);</i><i class=""><br class="">
</i><i class=""> VecDestroy(&x);</i><i class=""><br class="">
</i><i class=""> VecDestroy(&y);</i><i class=""><br class="">
</i><i class=""> VecDestroy(&z);</i><i class=""><br class="">
</i><i class=""> MatDestroy(&C);</i><i class=""><br class="">
</i><i class=""> MatDestroy(&F);</i><i class=""><br class="">
</i><i class=""> PetscViewerDestroy(&viewer);</i><i class=""><br class="">
</i><i class=""> PetscFinalize();</i><i class=""><br class="">
</i><i class=""> return 0;</i><i class=""><br class="">
</i><i class="">}</i></p><p class="">For <b class="">mpirun -n 1</b> I get the expected output (i.e.
armadillo and PETSc/FFTW return the same result), but
for <b class="">mpirun -n x</b> with x > 1 every value which
is not assigned to rank 0 is lost and set to zero
instead. Every value assigned to rank 0 is calculated
correctly, as far as I can see. Did I forget something
here?</p>
I do not understand why your FFTW calls use the WORLD communicator. Aren't they serial FFTs over the local rows?

  Thanks,

     Matt
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div class=""><p class="">Thanks,</p><p class="">Roland<br class="">
</p>
<div class="">Am 05.12.20 um 01:59 schrieb Barry Smith:<br class="">
</div>
<blockquote type="cite" class="">
<div class=""><br class="">
</div>
Roland,
<div class=""><br class="">
</div>
<div class=""> If you store your matrix as described in a
parallel PETSc dense matrix then you should be able to
call </div>
<div class=""><br class="">
</div>
<div class="">fftw_plan_many_dft() directly on the value obtained
with MatDenseGetArray(). You just need to pass the
arguments regarding column major ordering
appropriately. Probably identically to what you do
with your previous code.</div>
<div class=""><br class="">
</div>
<div class=""> Barry</div>
<div class=""><br class="">
<div class=""><br class="">
<blockquote type="cite" class="">
<div class="">On Dec 4, 2020, at 6:47 AM, Roland Richter
<<a href="mailto:roland.richter@ntnu.no" target="_blank" moz-do-not-send="true" class="">roland.richter@ntnu.no</a>>
wrote:</div>
<br class="">
<div class="">
<div class=""><p class="">Ideally those FFTs could be handled in
parallel, after they are not depending on
each other. Is that possible with MatFFT, or
should I rather use FFTW for that?</p><p class="">Thanks,</p><p class="">Roland<br class="">
</p>
<div class="">Am 04.12.20 um 13:19 schrieb Matthew
Knepley:<br class="">
</div>
<blockquote type="cite" class="">
<div dir="ltr" class="">
<div dir="ltr" class="">On Fri, Dec 4, 2020 at 5:32
AM Roland Richter <<a href="mailto:roland.richter@ntnu.no" target="_blank" moz-do-not-send="true" class="">roland.richter@ntnu.no</a>>
wrote:<br class="">
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">Hei,<br class="">
<br class="">
I am currently working on a problem
which requires a large amount of<br class="">
transformations of a field E(r, t)
from time space to Fourier space E(r,<br class="">
w) and back. The field is described in
a 2d-matrix, with the r-dimension<br class="">
along the columns and the t-dimension
along the rows.<br class="">
<br class="">
For the transformation from time to
frequency space and back I therefore<br class="">
have to apply a 1d-FFT operation over
each row of my matrix. For my<br class="">
earlier attempts I used armadillo as
matrix library and FFTW for doing<br class="">
the transformations. Here I could use
fftw_plan_many_dft to do all FFTs<br class="">
at the same time. Unfortunately,
armadillo does not support MPI, and<br class="">
therefore I had to switch to PETSc for
larger matrices.<br class="">
<br class="">
Based on the examples (such as example
143) PETSc has a way of doing<br class="">
FFTs internally by creating an FFT
object (using MatCreateFFT).<br class="">
Unfortunately, I can not see how I
could use that object to conduct the<br class="">
operation described above without
having to iterate over each row in my<br class="">
original matrix (i.e. doing it
sequential, not in parallel).<br class="">
<br class="">
Ideally I could distribute the FFTs
such over my nodes that each node<br class="">
takes several rows of the original
matrix and applies the FFT to each of<br class="">
them. As example, for a matrix with a
size of 4x4 and two nodes node 0<br class="">
would take row 0 and 1, while node 1
takes row 2 and 3, to avoid<br class="">
unnecessary memory transfer between
the nodes while conducting the FFTs.<br class="">
Is that something PETSc can do, too?<br class="">
</blockquote>
<div class=""><br class="">
</div>
<div class="">The way I understand our setup (I
did not write it), we use
plan_many_dft to handle</div>
<div class="">multiple dof FFTs, but these would
be interlaced. You want many FFTs for
non-interlaced</div>
<div class="">storage, which is not something we
do right now. You could definitely
call FFTW directly</div>
<div class="">if you want.</div>
<div class=""><br class="">
</div>
<div class="">Second, above it seems like you
just want serial FFTs. You can
definitely create a MatFFT</div>
<div class="">with PETSC_COMM_SELF, and apply it
to each row in the local rows, or
create the plan</div>
<div class="">yourself for the stack of rows.</div>
<div class=""><br class="">
</div>
<div class=""> Thanks,</div>
<div class=""><br class="">
</div>
<div class=""> Matt</div>
<div class=""> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
Thanks!<br class="">
<br class="">
Regards,<br class="">
<br class="">
Roland<br class="">
<br class="">
</blockquote>
</div>
<br clear="all" class="">
<div class=""><br class="">
</div>
-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/