[petsc-users] (no subject)

Jed Brown jedbrown at mcs.anl.gov
Sat Dec 29 23:32:04 CST 2012


On Sat, Dec 29, 2012 at 11:15 PM, amlan barua <abarua at iit.edu> wrote:

> Hi,
> I am actually trying to implement a 'parallel' ordinary differential
> equation solver.
> For proper functioning of the algorithm (its name is parareal), I need to
> implement a simple recurrence relation of the form x[i+1] = f(x[i]), where f
> is known and depends on the quadrature rule one would like to use.
> What is the best way to implement a sequential operation on a parallel
> structure?
> Somehow I need to keep all but one process idle at a time.
>

It's not very parallel when all but one process is idle. ;-D


> So I wrote a loop over all processes, and within the loop I forced only one
> process to update its part. Say, when j=0, ideally only process 0 should
> work. But the other processes will advance their local j value much faster,
> well ahead of process 0. Thus I am concerned about the safety of the
> operation. Will it be okay if I modify my code as follows? Please advise.
>
>

Yes, this works, but the performance won't be great: DMGlobalToLocal updates
all the ghost points, even though you really only want to send the update to
the next process in the sequence. (Maybe you don't care about performance
yet, or this particular part won't be performance-sensitive.)


>  PetscInitialize(&argc,&argv,(char *)0,help);
>     ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRQ(ierr);
>     ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
>     ierr =
> DMDACreate1d(PETSC_COMM_WORLD,DMDA_BOUNDARY_NONE,size*5,1,1,PETSC_NULL,&da);
> CHKERRQ(ierr);
>     ierr = DMCreateGlobalVector(da,&vec); CHKERRQ(ierr);
>     ierr = VecSet(vec,1.00);
>     ierr = DMCreateLocalVector(da,&local);
>     ierr = DMDAGetLocalInfo(da,&info);
>     temp = 1;
>     for (j=0;j<size;j++) {
>        ierr = DMGlobalToLocalBegin(da,vec,INSERT_VALUES,local);
> CHKERRQ(ierr);
>        ierr = DMGlobalToLocalEnd(da,vec,INSERT_VALUES,local);
> CHKERRQ(ierr);
>        ierr = DMDAVecGetArray(da,vec,&arr);
>        ierr = DMDAVecGetArray(da,local,&array);
>        if (rank==j) {
>        for (i=info.xs;i<info.xs+info.xm;i++) {
>           if ((!i)==0) {
>

I would write if (i) unless I was intentionally trying to confuse the
reader.


>             array[i] = array[i] + array[i-1];
>             arr[i] = array[i];
>             }
>           }
>        }
>     ierr = DMDAVecRestoreArray(da,local,&array);
>     ierr = DMDAVecRestoreArray(da,vec,&arr);
>    }
>     ierr = VecView(vec,PETSC_VIEWER_STDOUT_WORLD);
>     PetscFinalize();
>     return 0;
> Amlan
>
>
> On Sat, Dec 29, 2012 at 10:19 AM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>
>> On Sat, Dec 29, 2012 at 1:40 AM, amlan barua <abarua at iit.edu> wrote:
>>
>>> Hi Barry,
>>> I wrote the following piece according to your suggestions. Currently it
>>> does nothing but create a vector with 1 at the 1st position, 2 at the 2nd,
>>> and so on. But I made it serial, i.e. the (n+1)th entry is computed using
>>> the value of the nth entry. My question: did I do it correctly, i.e. is it
>>> safe, or may the results change depending on the problem size? This is
>>> much faster than VecSetValues; I believe communication is minimal here
>>> because I take advantage of the ghost points.
>>> Amlan
>>>
>>> PetscInitialize(&argc,&argv,(char *)0,help);
>>>     ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRQ(ierr);
>>>     ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank); CHKERRQ(ierr);
>>>     ierr =
>>> DMDACreate1d(PETSC_COMM_WORLD,DMDA_BOUNDARY_NONE,size*5,1,1,PETSC_NULL,&da);
>>> CHKERRQ(ierr);
>>>     ierr = DMCreateGlobalVector(da,&vec); CHKERRQ(ierr);
>>>     ierr = VecSet(vec,1.00);
>>>     ierr = DMCreateLocalVector(da,&local);
>>>     ierr = DMDAGetLocalInfo(da,&info);
>>>     ierr = DMDAVecGetArray(da,vec,&arr);
>>>     ierr = DMDAVecGetArray(da,local,&array);
>>>     temp = 1;
>>>     for (j=0;j<size;j++) {
>>>
>>
>> This is needlessly sequential (or should be).
>>
>>
>>>        ierr = DMGlobalToLocalBegin(da,vec,INSERT_VALUES,local);
>>> CHKERRQ(ierr);
>>>        ierr = DMGlobalToLocalEnd(da,vec,INSERT_VALUES,local);
>>> CHKERRQ(ierr);
>>>
>>
>> You should never use a communication routine while you have access to the
>> array (*VecGetArray()).
>>
>>
>>>        if (rank==j) {
>>>        for (i=info.xs;i<info.xs+info.xm;i++) {
>>>           if ((!i)==0) {
>>>             array[i] = array[i] + array[i-1];
>>>
>>
>> What sort of recurrence do you actually want to implement? When possible,
>> it's much better to reorganize so that you can do local work, followed by
>> an MPI_Scan, followed by more local work. MPI_Scan is fast (logarithmic in
>> the number of processes).
>>
>>
>>>             arr[i] = array[i];
>>>             }
>>>           }
>>>        }
>>>        }
>>>     ierr = DMDAVecRestoreArray(da,local,&array);
>>>     ierr = DMDAVecRestoreArray(da,vec,&arr);
>>>     ierr = VecView(vec,PETSC_VIEWER_STDOUT_WORLD);
>>>     PetscFinalize();
>>>     return 0;
>>>
>>>
>>>
>>> On Thu, Dec 27, 2012 at 10:40 AM, Barry Smith <bsmith at mcs.anl.gov>wrote:
>>>
>>>>
>>>> On Dec 27, 2012, at 10:34 AM, amlan barua <abarua at iit.edu> wrote:
>>>>
>>>> > I think I can use VecSetValues, is that right?
>>>>
>>>>    Yes you could do that. But since you are using a DMDA you could also
>>>> use DMGetLocalVector(), DMGlobalToLocalBegin/End() followed by
>>>> DMDAVecGetArray() to access the ghost values.
>>>>
>>>>    Barry
>>>>
>>>> > Amlan
>>>> >
>>>> >
>>>> > On Thu, Dec 27, 2012 at 9:04 AM, amlan barua <abarua at iit.edu> wrote:
>>>> > Hi Barry,
>>>> > Is this scattering a very costly operation? I have to compute x[i] =
>>>> f(x[i-1]) where f is known. Since this operation is strictly sequential, I
>>>> thought of gathering the entire vector on processor 0, doing the sequential
>>>> operation there, and scattering the result back. However, this is wasteful
>>>> because I only need the bordering x[i] values. What would be a better way?
>>>> > Amlan
>>>> >
>>>> >
>>>> > On Thu, Dec 27, 2012 at 8:18 AM, Barry Smith <bsmith at mcs.anl.gov>
>>>> wrote:
>>>> >
>>>> >   ierr = DMDACreateNaturalVector(da,&natural);CHKERRQ(ierr);
>>>> >     ierr =
>>>> DMDAGlobalToNaturalBegin(da,xin,INSERT_VALUES,natural);CHKERRQ(ierr);
>>>> >     ierr =
>>>> DMDAGlobalToNaturalEnd(da,xin,INSERT_VALUES,natural);CHKERRQ(ierr);
>>>> >
>>>> > Now do VecScatterCreateToZero() from natural and the vector will be
>>>> in the natural ordering on process zero with the dof interlaced.
>>>> >
>>>> >
>>>> >    Barry
>>>> >
>>>> > On Dec 27, 2012, at 12:22 AM, amlan barua <abarua at iit.edu> wrote:
>>>> >
>>>> > > Hi,
>>>> > > Is there an analogue of VecScatterCreateToZero for DA vectors? The
>>>> DMDA object has more than one degree of freedom.
>>>> > > If there isn't any, should I use an IS object to do the scattering?
>>>> > > Amlan
>>>> >
>>>> >
>>>> >
>>>>
>>>>
>>>
>>
>

