[petsc-users] AMS parallel usage

Barry Smith bsmith at mcs.anl.gov
Tue Aug 7 20:18:58 CDT 2012


  I'll take a look at this, but it may take a little time. It could be that some of the examples are out of date.


   Barry

On Aug 7, 2012, at 5:25 AM, Gong Ding <gdiso at ustc.edu> wrote:

> Some more bugs with AMS_Comm_publish in the parallel case; even the AMS MPI example does not work.
> 
> 1) parameter mismatch
> asmpub.c line 88
> mysetport = *(int *) va_arg(args, int *);
> It seems the author wanted to add another int * argument to the AMS_Comm_publish function for the parallel case, but it does not appear in the manual.
> I simply commented out line 88 and lines 130-131:
> if (mysetport != -1) {
>     AMS_port = mysetport;
> }
> 
> 2) extra check of port number
> asmpub.c line 149
>        /* Create publisher arguments */
>        err = AMSP_New_Pub_Arguments(AMS_port, &pub_arg);
>        CheckErr(err);  
> 
> The AMSP_New_Pub_Arguments function checks the value of AMS_port; a port number of -1 or 0 returns an error.
> 
> 
> 3) AMSP_start_publisher_func seems to require a port number of -1 to force the socket bind function to allocate a new port, which conflicts with the AMSP_New_Pub_Arguments check.
> 
> nettcppub.c line 66
> if (*port == -1)
>     *port = 0;
> and at line 94, bind() is called.
> 
> 
> I had to set pub_arg->port = -1 as follows:
> 
> if (ctype == MPI_TYPE) {
>     pub_arg->care_about_port = 0;
>     pub_arg->port = -1;
> } else {
>     pub_arg->care_about_port = 1;
> }
> 
> 
> Barry, will you please check it?
> 
> 
>> 
>> On Aug 6, 2012, at 10:17 PM, "Gong Ding" <gdiso at ustc.edu> wrote:
>> 
>>> Thanks, Barry. It is really a wonderful tool.
>> 
>>    Thanks
>> 
>>> PS: The C STRING support in AMS is a bit strange. At the very beginning I could not understand the mechanism of STRING, so I added a CHAR data type to AMS as an alternative to STRING.
>>> 
>>> I hope the AMS manual gets a more detailed description of parallel usage and the data types.
>> 
>>     Maybe you can help write it. We really don't have anyone maintaining/improving the AMS, which is a pity. You can access the repository with
>> 
>>     hg clone http://petsc.cs.iit.edu/petsc/ams-dev
>> 
>>   Barry
>> 
>>>> On Aug 6, 2012, at 9:36 AM, "Gong Ding" <gdiso at ustc.edu> wrote:
>>>> 
>>>>> Hi,
>>>>> I am trying to use AMS as a data communicator between the computational code and a user monitor. AMS works well in the serial case; however, I don't know how to use it in a parallel environment, and the manual has little information on this.
>>>> 
>>>>  You need to build AMS with the same MPI that you will use in the parallel program.
>>>> 
>>>>> I'd like to know something about data synchronization.
>>>>> Does AMS handle data synchronization in an MPI environment automatically? Should I call AMS_Comm_publish on every MPI process, and create memory with AMS_Memory_create for my variables on each process?
>> 
>>>> 
>> 
>>>> 
>> 
>>>> 
>> 
>>>>  There are different types of variables you can publish.
>> 
>>>> 
>> 
>>>> 
>> 
>>>> 
>> 
>>>> AMS_Comm_publish() takes an argument AMS_Comm_type which would be MPI_TYPE like in your case.
>> 
>>>> 
>> 
>>>> 
>> 
>>>> 
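For reference, a minimal sketch of such a publish call follows. It assumes the header is ams.h and that AMS_Comm_publish() takes (name, result, AMS_Comm_type, MPI communicator) plus the optional variadic arguments discussed earlier in this thread; since those parallel arguments are not in the manual, the exact argument list should be checked against ams.h.

    #include <mpi.h>
    #include "ams.h"

    /* Hedged sketch: publish an AMS communicator from a parallel MPI code.
       The argument order below is an assumption, not confirmed; the thread
       notes the trailing variadic arguments (e.g. a port) are undocumented. */
    int publish_comm(MPI_Comm comm, AMS_Comm *ams)
    {
        /* Called collectively on every rank of comm; MPI_TYPE tells AMS
           that the publisher is a parallel MPI program. */
        return AMS_Comm_publish("my_solver", ams, MPI_TYPE, comm);
    }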
>>>> 
>>>>  AMS_Memory_add_field() takes an argument AMS_Shared_type, which can be AMS_COMMON, AMS_DISTRIBUTED, or AMS_REDUCED; if it is reduced, the AMS_Reduction_type is AMS_SUM, AMS_MAX, AMS_MIN, or AMS_NONE. See the manual page for AMS_Memory_add_field().
>>>> 
>>>>  If AMS_Shared_type is AMS_COMMON, then AMS assumes that the values in those locations are identical on all processes (and it brings one set of those values over to the accessor). If it is distributed, then each process has an array of different values (and it brings all those values over to the accessor in one long array). If it is reduced, then it brings back the values after applying the reduction operator across all the processes; so with sum it brings back the sum of the values over all the processes.
>>>> 
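A rough sketch of adding fields with the three shared types described above, assuming an eight-argument AMS_Memory_add_field() of the form (memory, name, address, length, data type, access type, shared type, reduction type) together with the AMS_Memory_create() and AMS_Memory_publish() calls mentioned in this thread; the exact enum spellings (e.g. AMS_SUM versus a longer AMS_REDUCT_* name) and the AMS_READ/AMS_DOUBLE constants should be verified against ams.h.

    #include "ams.h"

    /* Hedged sketch: publish one memory with the three shared types.
       Signatures and enum names are assumptions; verify against ams.h. */
    int publish_fields(AMS_Comm ams, double *residual,
                       double *local_data, int nlocal, double *flops)
    {
        AMS_Memory mem;
        int err;

        /* Create the memory object collectively on every rank. */
        err = AMS_Memory_create(ams, "monitor", &mem);
        if (err) return err;

        /* AMS_COMMON: value assumed identical on all ranks; the accessor
           receives a single copy. */
        err = AMS_Memory_add_field(mem, "residual", residual, 1, AMS_DOUBLE,
                                   AMS_READ, AMS_COMMON, AMS_NONE);
        if (err) return err;

        /* AMS_DISTRIBUTED: each rank contributes its own values; the
           accessor sees them concatenated into one long array. */
        err = AMS_Memory_add_field(mem, "local_data", local_data, nlocal,
                                   AMS_DOUBLE, AMS_READ, AMS_DISTRIBUTED, AMS_NONE);
        if (err) return err;

        /* AMS_REDUCED with AMS_SUM: the accessor sees the sum over all ranks. */
        err = AMS_Memory_add_field(mem, "flops", flops, 1, AMS_DOUBLE,
                                   AMS_READ, AMS_REDUCED, AMS_SUM);
        if (err) return err;

        /* Make the fields visible to accessors. */
        return AMS_Memory_publish(mem);
    }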
>>>>> 
>>>>> If so, how many TCP/IP ports will the accessor see? Or does the accessor communicate only with MPI process 0, with process 0 broadcasting the information to all the others?
>>>> 
>>>>  AMS completely manages the fact that there are multiple "publishers"; the accessor transparently handles getting the data from all the publisher nodes.
>>>>> 
>>>>> If not, should I create the AMS objects only on process 0, and have process 0 broadcast what AMS gets?
>>>> 
>>>>   Nope, you do not need to do this.
>>>> 
>>>>  Barry
>>>> 
>>>>> Gong Ding