[petsc-users] AMS parallel usage

Barry Smith bsmith at mcs.anl.gov
Mon Aug 6 22:28:18 CDT 2012


On Aug 6, 2012, at 10:17 PM, "Gong Ding" <gdiso at ustc.edu> wrote:

> Thanks, Barry. It is really a wonderful tool.

    Thanks

> PS: The C STRING support of AMS is a bit strange. At the very beginning I couldn't understand the mechanism of STRING, so I added a CHAR data type to AMS as an alternative to STRING.
> 
> I hope the AMS manual will get a more detailed description of parallel usage and the data types.

     Maybe you can help write it. We really don't have anyone maintaining/improving AMS, which is a pity. You can access the repository with

     hg clone http://petsc.cs.iit.edu/petsc/ams-dev


   Barry

> 
>> 
>> On Aug 6, 2012, at 9:36 AM, "Gong Ding" <gdiso at ustc.edu> wrote:
>> 
>>> Hi,
>> 
>>> I am trying to use AMS as a data communicator between the computational code and a user monitor. AMS works well in the serial case; however, I don't know how to use it in a parallel environment. The manual has little information on this, either.
>> 
>>   You need to build AMS with the same MPI that you will use in the parallel program.
>> 
>>> 
>> 
>>> I'd like to know something about data synchronization. 
>> 
>>> Does AMS handle data synchronization in an MPI environment automatically? Should I call AMS_Comm_publish on every MPI process, and create memory with AMS_Memory_create for my variables on each process?
>> 
>>   There are different types of variables you can publish.
>> 
>>  AMS_Comm_publish() takes an argument of type AMS_Comm_type, which would be MPI_TYPE in a case like yours.
>> 
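>>   For instance, a minimal publisher-side sketch might look like the following. This is hedged: the exact argument list of AMS_Comm_publish, in particular the port argument, is written from memory and should be checked against ams.h for your AMS version; "monitor" is just an example name.
>> 
>>      #include <mpi.h>
>>      #include "ams.h"
>> 
>>      /* called collectively by every process in the communicator */
>>      AMS_Comm ams;
>>      int      port = -1;    /* let AMS choose the TCP port */
>>      AMS_Comm_publish("monitor", &ams, MPI_TYPE, MPI_COMM_WORLD, &port);
>> 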
>>   AMS_Memory_add_field() takes an argument of type AMS_Shared_type, which can be AMS_COMMON, AMS_DISTRIBUTED, or AMS_REDUCED; if it is AMS_REDUCED, the AMS_Reduction_type is AMS_SUM, AMS_MAX, AMS_MIN, or AMS_NONE. See the manual page for AMS_Memory_add_field().
>> 
>>   If the AMS_Shared_type is AMS_COMMON, AMS assumes the values in those locations are identical on all processes, and it brings one set of those values over to the accessor. If it is AMS_DISTRIBUTED, each process has an array of different values, and it brings all of those values over to the accessor in one long array. If it is AMS_REDUCED, it brings back the values after applying the reduction operator across all the processes; so with AMS_SUM it brings back the sum of the values over all the processes.
>> 
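>>   As a sketch only, the three sharing types could look like this on the publisher side ("ams" is the AMS_Comm published above; the field names and variables are made up; the argument order of AMS_Memory_add_field -- name, address, length, data type, access type, shared type, reduction type -- and the use of AMS_NONE for the non-reduced fields should be verified against the AMS_Memory_add_field() manual page):
>> 
>>      double resnorm = 0.0, nrm2 = 0.0;
>>      int    nlocal  = 0;
>>      AMS_Memory mem;
>> 
>>      AMS_Memory_create(ams, "solver_state", &mem);
>> 
>>      /* AMS_COMMON: identical on all processes, the accessor gets one copy */
>>      AMS_Memory_add_field(mem, "residual", &resnorm, 1, AMS_DOUBLE, AMS_READ, AMS_COMMON, AMS_NONE);
>> 
>>      /* AMS_DISTRIBUTED: different on each process, the accessor gets one long array */
>>      AMS_Memory_add_field(mem, "local_size", &nlocal, 1, AMS_INT, AMS_READ, AMS_DISTRIBUTED, AMS_NONE);
>> 
>>      /* AMS_REDUCED with AMS_SUM: the accessor gets the sum over all processes */
>>      AMS_Memory_add_field(mem, "local_norm2", &nrm2, 1, AMS_DOUBLE, AMS_READ, AMS_REDUCED, AMS_SUM);
>> 
>>      AMS_Memory_publish(mem);
>>      AMS_Memory_grant_access(mem);
>> 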
>>> 
>> 
>>> If so, how many TCP/IP ports will the accessor see? Or does the accessor communicate only with MPI process 0, and process 0 broadcasts the information to all the others?
>> 
>>   AMS completely manages the fact that there are multiple "publishers"; the accessor transparently handles getting the data from all the publisher nodes.
>> 
>>> 
>> 
>>> If not, should I create the AMS objects only on process 0, and have process 0 broadcast what AMS gets?
>> 
>>    Nope, you do not need to do this.
>> 
>>   Barry
>> 
>>> 
>> 
>>> Gong Ding
>> 
>>> 
>> 
>>> 
>> 
>>> 
>> 
>>> 
>> 
>>> 
>> 
>>> 
>> 
>>> 
>> 
>> 
>> 
>> 



More information about the petsc-users mailing list