[MPICH] Re: Problems with allreduce

Rajeev Thakur thakur at mcs.anl.gov
Wed Jan 24 12:55:35 CST 2007


You should see a difference with bigger files, a larger number of processes, and fast file systems (not NFS).
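
If you want to experiment with when the collective path pays off, one knob is ROMIO's collective-buffering hint. A minimal sketch, assuming a ROMIO-based MPI-IO implementation (the hint name "romio_cb_read" is ROMIO-specific):

    MPI_Info info;
    MPI_File fh;
    MPI_Info_create(&info);
    MPI_Info_set(info, "romio_cb_read", "enable");   /* or "disable" / "automatic" to compare */
    MPI_File_open(MPI_COMM_WORLD, "datafile", MPI_MODE_RDONLY, info, &fh);
    MPI_Info_free(&info);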
 
Rajeev


  _____  

From: Luiz Mendes [mailto:luizmendesw at gmail.com] 
Sent: Tuesday, January 23, 2007 6:37 PM
To: Rajeev Thakur
Cc: mpich-discuss
Subject: Re: [MPICH] Re: Problems with allreduce


Hi all, Rajeev

Well, it is still early in Argonne now; here it is 10pm, lol.

I was testing some reads, and I see that the collective MPI_File_read_all takes
longer to complete than the independent read. Is this expected, or would the
difference only be noticeable with large files? (A sketch of a fairer timing
comparison follows the code below.)


The code follows below:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <mpi.h>
#define FILESIZE 1048576
#define INTS_PER_BLK 16
int main(int argc, char **argv) 
{
    int *buf, rank, nprocs, nints, bufsize;
    double tempo_total, tempo1, tempo2, tempo;
    MPI_File fh;
    MPI_Datatype filetype;
    MPI_Init(&argc,&argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); 
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    bufsize = FILESIZE/nprocs; 
    buf = (int *) malloc(bufsize);
    nints = bufsize/sizeof(int);
    MPI_File_open(MPI_COMM_WORLD, "datafile", MPI_MODE_RDONLY,
                  MPI_INFO_NULL, &fh);
    MPI_Type_vector(nints/INTS_PER_BLK, INTS_PER_BLK, INTS_PER_BLK*nprocs,
                    MPI_INT, &filetype);
    MPI_Type_commit(&filetype); 
    MPI_File_set_view(fh, INTS_PER_BLK*sizeof(int)*rank, MPI_INT,
                      filetype, "native", MPI_INFO_NULL);
    tempo1=MPI_Wtime();
    MPI_File_read_all(fh, buf, nints, MPI_INT, MPI_STATUS_IGNORE); 
    tempo2=MPI_Wtime();
    tempo=tempo2-tempo1;
    printf("\nTime for this process: %.10f", tempo);
    MPI_Allreduce(&tempo, &tempo_total, 1, MPI_DOUBLE, MPI_MAX,
                  MPI_COMM_WORLD);
    if(rank==0)
    {
        printf("\nThe longest time is %.10f\n",tempo_total);
    }
    MPI_File_close(&fh);
    MPI_Type_free(&filetype);
    free(buf); 
    MPI_Finalize();
    return 0;
}
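
For what it is worth, here is a minimal sketch of a fairer timing comparison
between the two calls, reusing fh, buf, nints and the timing variables from the
program above: synchronize the ranks before starting the clock, and keep taking
the maximum across ranks as the Allreduce already does. Note that the second
read may be served from caches if both runs read the same file in one
execution, so separate runs are cleaner.

    /* Collective case */
    MPI_Barrier(MPI_COMM_WORLD);                 /* start all ranks together */
    tempo1 = MPI_Wtime();
    MPI_File_read_all(fh, buf, nints, MPI_INT, MPI_STATUS_IGNORE);
    tempo2 = MPI_Wtime();
    tempo = tempo2 - tempo1;

    /* Independent case: rewind the individual file pointer and repeat */
    MPI_File_seek(fh, 0, MPI_SEEK_SET);
    MPI_Barrier(MPI_COMM_WORLD);
    tempo1 = MPI_Wtime();
    MPI_File_read(fh, buf, nints, MPI_INT, MPI_STATUS_IGNORE);
    tempo2 = MPI_Wtime();
    tempo = tempo2 - tempo1;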


Thanks.
Luiz.







2007/1/23, Rajeev Thakur <thakur at mcs.anl.gov>: 

Unless dimensao is 1, the Allreduce won't work because the input and output
buffers are single element doubles (not arrays).
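In other words, with scalar buffers the count has to be 1. A minimal sketch
(tempo_max is just an illustrative name for the output variable, not from the
original code):

    double tempo_max;   /* illustrative output variable */
    MPI_Allreduce(&soma_tempo_total, &tempo_max, 1, MPI_DOUBLE, MPI_MAX,
                  MPI_COMM_WORLD);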
 
Rajeev


  _____  

From: Luiz Mendes [mailto:luizmendesw at gmail.com] 
Sent: Tuesday, January 23, 2007 6:10 PM
To: mpich-discuss
Cc: Rajeev Thakur
Subject: Re: [MPICH] Re: Problems with allreduce



Hi all, Rajeev

I corrected this and made some simplifications; the problem appears to be
caused by the fact that I added double variable declarations.

It is very strange. Can you shed some light on this?

Thanks
Luiz


2007/1/23, Rajeev Thakur <thakur at mcs.anl.gov>: 

Where have you allocated space for arquivo_nome?
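In the code it is declared as a bare char * and then written with sprintf,
which has no storage behind it. A minimal sketch of a fix, with an arbitrarily
chosen buffer size:

    char arquivo_nome[1024];   /* size picked arbitrarily for this sketch */
    ...
    snprintf(arquivo_nome, sizeof(arquivo_nome), "pvfs2:/mnt/pvfs2/%s", arquivo_pv);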
 
Rajeev


  _____  

From: owner-mpich-discuss at mcs.anl.gov [mailto:owner-mpich-discuss at mcs.anl.gov]
On Behalf Of Luiz Mendes
Sent: Tuesday, January 23, 2007 5:05 PM
To: mpich-discuss
Subject: [MPICH] Re: Problems with allreduce



Disregard the variable "tempo"; even without it I continue getting the error.

thanks


2007/1/23, Luiz Mendes <luizmendesw at gmail.com>: 

Hi all, I am having a problem with the MPI_Allreduce operation.

The code is below:

int main(int argc, char *argv[])
{
    int opcao, dimensao, nome_tamanho, processo, num_procs;
    int coluna, contador_bytes, contador_bytes_parcial, interacao, contador_total_bytes;
    char *palavra = NULL, *arquivo_nome;
    char *arquivo_pv;
    char procname[MPI_MAX_PROCESSOR_NAME];
    double inicio, termino, tempo_parcial, soma_tempo_total, valor; 
    long long offset;
    MPI_Datatype linhaMatriz;
    MPI_File arquivo;
    MPI_Status status;
    
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &processo); 
    MPI_Comm_size(MPI_COMM_WORLD, &num_procs);
    MPI_Get_processor_name(procname,&nome_tamanho); 
    
    while ((opcao = getopt(argc, argv, "A:D:P:h")) != EOF) 
    {
        switch(opcao) 
        {
            case 'A': 
                arquivo_pv =optarg; 
                sprintf(arquivo_nome,"%s",arquivo_pv);
                break; 
            case 'D': 
                sscanf (optarg, "%d", &dimensao); 
                break;
            case 'P':
                sprintf(arquivo_nome,"pvfs2:/mnt/pvfs2/%s",arquivo_pv);
        }                 
    }
    

    MPI_Type_contiguous(dimensao,MPI_DOUBLE,&linhaMatriz);
    MPI_Type_commit(&linhaMatriz); 
    
    MPI_File_open(MPI_COMM_WORLD, arquivo_nome, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &arquivo);
    
    for(interacao=0; interacao<(dimensao/num_procs);interacao++)
    {
        double matrizlocal[dimensao];
        valor=processo*(dimensao/num_procs)*dimensao+interacao*dimensao; 
        offset=valor*sizeof(double);
        for(coluna=0; coluna<dimensao; coluna++) 
        {
            matrizlocal[coluna]=valor+coluna;
        }
        
        MPI_File_set_view(arquivo, offset, MPI_DOUBLE, linhaMatriz,
                          "native", MPI_INFO_NULL);
        inicio=MPI_Wtime();
        MPI_File_write(arquivo, matrizlocal, 1, linhaMatriz, &status); 
        termino=MPI_Wtime();
        tempo_parcial=termino-inicio;
        contador_bytes_parcial+=sizeof(matrizlocal);
        soma_tempo_total+=tempo_parcial;
    }
    MPI_File_close(&arquivo);
    MPI_Allreduce(&soma_tempo_total, &tempo, dimensao, MPI_DOUBLE, MPI_MAX,
                  MPI_COMM_WORLD);
    MPI_Finalize();    
.......


Do you know what could be happening?

Thanks

Luiz Mendes




