> See section "CUSTOMIZING LOGFILES" in mpich2-xxx/src/mpe2/README.<br>Correct me if I am wrong: <br>Since I am dealing with PERUSE events, whenever such an event occurs, a PERUSE function, defined in <mpich-dir>/src/peruse/peruse.c, is invoked by the MPI library. I am trying to get this event displayed in the Jumpshot output. For this to be done, I need to define a wrapper function that gets invoked when a PERUSE event occurs, logs the event, and then calls the actual PERUSE function, similar to the way the wrapper function in log_mpi_core.c is called when MPI_Init is called. <br>
<br>Could you please clarify the dynamic mapping? <br>I took a look at the documentation in src/util/multichannel/mpi.c. I think I understood what is going on in LoadFunctions() and how the function pointers are assigned addresses depending on the dll that is being used. <br>
<br>Krishna Chaitanya K<br><br><div class="gmail_quote">On Sun, Mar 23, 2008 at 12:59 PM, Anthony Chan <<a href="mailto:chan@mcs.anl.gov">chan@mcs.anl.gov</a>> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
See section "CUSTOMIZING LOGFILES" in mpich2-xxx/src/mpe2/README.<br>
You don't need to modify MPE libraries.<br>
<br>
A.Chan<br>
<div><div></div><div class="Wj3C7c"><br>
On Sun, 23 Mar 2008, Krishna Chaitanya wrote:<br>
<br>
> I have modified the mpe library to log the events that I am interested in<br>
> monitoring. But, I am bit hazy about how a function like MPI_Init is<br>
> actually linked to the MPI_Init routine in the file log_mpi_core.c when we<br>
> compile the MPI application with the -mpe=mpilog switch. Could someone point<br>
> me to the routine that takes care of such a mapping?<br>
><br>
> Thanks,<br>
> Krishna Chaitanya K<br>
><br>
> On Sat, Mar 22, 2008 at 3:01 AM, Krishna Chaitanya <<a href="mailto:kris.c1986@gmail.com">kris.c1986@gmail.com</a>><br>
> wrote:<br>
><br>
>> Thanks a lot. I installed the latest jdk version and I am now able to look<br>
>> at the jumpshot output.<br>
>><br>
>> Krishna Chaitanya K<br>
>><br>
>><br>
>> On Sat, Mar 22, 2008 at 1:45 AM, Anthony Chan <<a href="mailto:chan@mcs.anl.gov">chan@mcs.anl.gov</a>> wrote:<br>
>><br>
>>><br>
>>> The error that you showed earlier does not suggest the problem is with<br>
>>> running jumpshot on your machine with limited memory. If your clog2 file<br>
>>> isn't too big, send it to me.<br>
>>><br>
>>> On Fri, 21 Mar 2008, Krishna Chaitanya wrote:<br>
>>><br>
>>>> I resolved that issue.<br>
>>>> My comp (Intel Centrino 32-bit, 256 MB RAM - dated, I agree) hangs<br>
>>>> each time I launch jumpshot with the slog file. Since this is an<br>
>>>> independent project, I am constrained when it comes to the availability<br>
>>>> of machines. Would you recommend that I give it a try on a 64-bit AMD<br>
>>>> with 512 MB RAM? (I will have to start by installing Linux on that<br>
>>>> machine. Is it worth the effort?) If it requires a higher configuration,<br>
>>>> could you please suggest a lighter graphical tool that I can use to<br>
>>>> present the occurrence of events and the corresponding times?<br>
>>>><br>
>>>> Thanks,<br>
>>>> Krishna Chaitanya K<br>
>>>><br>
>>>> On Fri, Mar 21, 2008 at 8:23 PM, Anthony Chan <<a href="mailto:chan@mcs.anl.gov">chan@mcs.anl.gov</a>><br>
>>> wrote:<br>
>>>><br>
>>>>><br>
>>>>><br>
>>>>> On Fri, 21 Mar 2008, Krishna Chaitanya wrote:<br>
>>>>><br>
>>>>>><br>
>>>>>> The file block pointer to the Tree Directory is NOT initialized!,<br>
>>>>>> can't read it.<br>
>>>>>><br>
>>>>><br>
>>>>> That means the slog2 file isn't generated completely. Something went<br>
>>>>> wrong in the conversion process (assuming your clog2 file is<br>
>>> complete).<br>
>>>>> If your MPI program doesn't finish MPI_Finalize normally, your clog2<br>
>>>>> file will be incomplete.<br>
>>>>><br>
>>>>>><br>
>>>>>> Is there any environment variable that needs to be<br>
>>> initialized?<br>
>>>>><br>
>>>>> Nothing needs to be initialized by hand.<br>
>>>>><br>
>>>>> A.Chan<br>
>>>>>><br>
>>>>>> Thanks,<br>
>>>>>> Krishna Chaitanya K<br>
>>>>>><br>
>>>>>><br>
>>>>>> On Thu, Mar 20, 2008 at 4:56 PM, Dave Goodell <<a href="mailto:goodell@mcs.anl.gov">goodell@mcs.anl.gov</a>><br>
>>>>> wrote:<br>
>>>>>><br>
>>>>>>> It's pretty hard to debug this issue via email. However, you could<br>
>>>>>>> try running valgrind on your modified MPICH2 to see if any obvious<br>
>>>>>>> bugs pop out. When you do, make sure that you configure with<br>
>>>>>>> "--enable-g=dbg,meminit" in order to avoid spurious warnings and to be<br>
>>>>>>> able to see stack traces.<br>
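Dave's suggestion, spelled out as a command sequence; the valgrind options below are common choices for this kind of debugging, not quoted from the original mail:

```shell
# reconfigure MPICH2 with debug symbols and memory-init tracking
./configure --enable-g=dbg,meminit CC=gcc CFLAGS=-g && make && make install

# run a small case under valgrind; --track-origins explains any
# "uninitialised value" reports with the stack trace that created them
mpiexec -n 1 valgrind --track-origins=yes ./sample
```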
>>>>>>><br>
>>>>>>> -Dave<br>
>>>>>>><br>
>>>>>>> On Mar 19, 2008, at 1:05 PM, Krishna Chaitanya wrote:<br>
>>>>>>><br>
>>>>>>>> The problem seems to be with the communicator in MPI_Bcast()<br>
>>>>>>>> (/src/mpi/coll/bcast.c).<br>
>>>>>>>> The comm_ptr is initialized to NULL, and after a call to<br>
>>>>>>>> MPID_Comm_get_ptr( comm, comm_ptr );, the comm_ptr points to the<br>
>>>>>>>> communicator object which was created through MPI_Init().<br>
>>>>>>>> However, MPID_Comm_valid_ptr( comm_ptr, mpi_errno ) returns with a<br>
>>>>>>>> value other than MPI_SUCCESS.<br>
>>>>>>>> During some traces, it used to crash at this point itself. On other<br>
>>>>>>>> traces, it used to go into the progress engine as I described<br>
>>>>>>>> in my previous mails.<br>
>>>>>>>><br>
>>>>>>>> What could be the reason? Hope someone chips in. I haven't been able<br>
>>>>>>>> to figure this out for some time now.<br>
>>>>>>>><br>
>>>>>>>> Krishna Chaitanya K<br>
>>>>>>>><br>
>>>>>>>> On Wed, Mar 19, 2008 at 8:44 AM, Krishna Chaitanya<br>
>>>>>>>> <<a href="mailto:kris.c1986@gmail.com">kris.c1986@gmail.com</a>> wrote:<br>
>>>>>>>> This might help :<br>
>>>>>>>><br>
>>>>>>>> In the MPID_Comm structure, I have included the following line for<br>
>>>>>>>> the peruse place-holder :<br>
>>>>>>>> struct mpich_peruse_handle_t** c_peruse_handles;<br>
>>>>>>>><br>
>>>>>>>> And in the function MPID_Init_thread(), I have the line<br>
>>>>>>>> MPIR_Process.comm_world->c_peruse_handles = NULL;<br>
>>>>>>>> when the rest of the members of the comm_world structure are being<br>
>>>>>>>> populated.<br>
>>>>>>>><br>
>>>>>>>> Thanks,<br>
>>>>>>>> Krishna Chaitanya K<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> On Wed, Mar 19, 2008 at 8:19 AM, Krishna Chaitanya<br>
>>>>>>>> <<a href="mailto:kris.c1986@gmail.com">kris.c1986@gmail.com</a>> wrote:<br>
>>>>>>>> Thanks for the help. I am facing a weird problem right now. To<br>
>>>>>>>> incorporate the PERUSE component, I have modified the communicator<br>
>>>>>>>> data structure to include the PERUSE handles. The program executes<br>
>>>>>>>> as expected when compiled without the "-mpe=mpilog" flag. When I<br>
>>>>>>>> compile it with the mpe component, the program gives this output:<br>
>>>>>>>><br>
>>>>>>>> Fatal error in MPI_Bcast: Invalid communicator, error stack:<br>
>>>>>>>> MPI_Bcast(784): MPI_Bcast(buf=0x9260f98, count=1, MPI_INT, root=0,<br>
>>>>>>>> MPI_COMM_WORLD) failed<br>
>>>>>>>> MPI_Bcast(717): Invalid communicator<br>
>>>>>>>><br>
>>>>>>>> On tracing further, I understood this :<br>
>>>>>>>> MPI_Init() (log_mpi_core.c)<br>
>>>>>>>> --> PMPI_Init() (the communicator object is created here)<br>
>>>>>>>> --> MPE_Init_log()<br>
>>>>>>>> --> CLOG_Local_init()<br>
>>>>>>>> --> CLOG_Buffer_init4write()<br>
>>>>>>>> --> CLOG_Preamble_env_init()<br>
>>>>>>>> --> MPI_Bcast() (bcast.c)<br>
>>>>>>>> --> MPIR_Bcast()<br>
>>>>>>>> --> MPIC_Recv() / MPIC_Send()<br>
>>>>>>>> --> MPIC_Wait()<br>
>>>>>>>> < Program crashes ><br>
>>>>>>>> The MPIC_Wait function invokes the progress engine, which<br>
>>>>>>>> works properly without the mpe component.<br>
>>>>>>>> Even within the progress engine, MPIDU_Sock_wait() and<br>
>>>>>>>> MPIDI_CH3I_Progress_handle_sock_event() execute a couple of<br>
>>>>>>>> times before the program crashes in the MPIDU_Socki_handle_read()<br>
>>>>>>>> or MPIDU_Socki_handle_write() functions. (The read() and<br>
>>>>>>>> write() calls succeed twice, I think.)<br>
>>>>>>>> I am finding it very hard to reason about why the program crashes<br>
>>>>>>>> with mpe. Could you please suggest where I should look to sort<br>
>>>>>>>> this issue out?<br>
>>>>>>>><br>
>>>>>>>> Thanks,<br>
>>>>>>>> Krishna Chaitanya K<br>
>>>>>>>><br>
>>>>>>>> On Wed, Mar 19, 2008 at 2:20 AM, Anthony Chan <<a href="mailto:chan@mcs.anl.gov">chan@mcs.anl.gov</a>><br>
>>>>>>>> wrote:<br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>> On Wed, 19 Mar 2008, Krishna Chaitanya wrote:<br>
>>>>>>>><br>
>>>>>>>>> Hi,<br>
>>>>>>>>> I tried configuring MPICH2 by doing :<br>
>>>>>>>>> ./configure --prefix=/home/kc/mpich-install/ --enable-mpe<br>
>>>>>>>>> --with-logging=SLOG CC=gcc CFLAGS=-g && make && make install<br>
>>>>>>>>> It flashed an error message saying:<br>
>>>>>>>>> configure: error: ./src/util/logging/SLOG does not exist.<br>
>>>>>>>> Configure aborted<br>
>>>>>>>><br>
>>>>>>>> The --with-logging option is for MPICH2's internal logging, not MPE's<br>
>>>>>>>> logging.<br>
>>>>>>>> What you did below is fine.<br>
>>>>>>>>><br>
>>>>>>>>> After that, I tried :<br>
>>>>>>>>> ./configure --prefix=/home/kc/mpich-install/ --enable-mpe CC=gcc<br>
>>>>>>>> CFLAGS=-g<br>
>>>>>>>>> && make && make install<br>
>>>>>>>>> The installation was normal, when I tried compiling an<br>
>>>>>>>> example<br>
>>>>>>>>> program by doing :<br>
>>>>>>>>> mpicc -mpilog -o sample sample.c<br>
>>>>>>>>> cc1: error: unrecognized command line option "-mpilog"<br>
>>>>>>>><br>
>>>>>>>> Do "mpicc -mpe=mpilog -o sample sample.c" instead. For more<br>
>>> details,<br>
>>>>>>>> see "mpicc -mpe=help" and see mpich2/src/mpe2/README.<br>
>>>>>>>><br>
>>>>>>>> A.Chan<br>
>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> Can anyone please tell me what needs to be done to use the SLOG<br>
>>>>>>>>> logging format?<br>
>>>>>>>>><br>
>>>>>>>>> Thanks,<br>
>>>>>>>>> Krishna Chaitanya K<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> --<br>
>>>>>>>>> In the middle of difficulty, lies opportunity<br>
>>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>><br>
>>>>>><br>
>>>>>><br>
>>>>><br>
>>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>><br>
>>><br>
>><br>
>><br>
>><br>
><br>
><br>
><br>
><br>
<br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br>In the middle of difficulty, lies opportunity