
I have some code that I'm trying to run on 16 CPUs. The problem is that I have an array with 15000 elements, and if I try to run with 15000, MPI fails to start. So far I've only managed to run with an array of about 800 elements. Can I somehow make the program work with bigger arrays? I tried changing the datatype to long int, but apparently MPI_Comm_rank and MPI_Comm_size don't accept that datatype. Sorry if I'm asking something silly, but I could really use the help. Thanks a lot. Here is a sample of the code:

#include "stdafx.h"
#include "stdio.h"
#include "stdlib.h"
#include "conio.h"
#include "iostream"
#include "mpi.h"
#define size 15000

int main(int argc, char *argv[] ) {

    int numprocs, rank, chunk_size, i;
    int max, mymax,rem;
    int array[size];
    MPI_Status status;

    MPI_Init( &argc,&argv);
    MPI_Comm_rank( MPI_COMM_WORLD, &rank);
    MPI_Comm_size( MPI_COMM_WORLD, &numprocs);

    printf("Hello from process %d of %d \n",rank,numprocs);
    chunk_size = size/numprocs;
    rem = size%numprocs;

    if (rank == 0) {
    /* Initialize Array */
        printf("REM %d \n",rem);
        for(i=0;i<size;i++) {
            array[i] = i;
        }
    /* Distribute Array */
        for(i=1;i<numprocs;i++) {
            if(i<rem) {
                MPI_Send(&array[i*chunk_size], chunk_size+1, MPI_INT, i, 1, MPI_COMM_WORLD);
            } else {
                MPI_Send(&array[i*chunk_size], chunk_size, MPI_INT, i, 1, MPI_COMM_WORLD);
            }
        }
    }
    else {
        MPI_Recv(array, chunk_size, MPI_INT, 0,1,MPI_COMM_WORLD,&status);
    }
    /*Each processor has a chunk, now find local max */
    mymax = array[0];
    for(i=1;i<chunk_size;i++) {
        if(mymax<array[i]) {
            mymax = array[i];
        }
    }
    printf("Array els 1-5 for rank %d: %d %d %d %d %d\n",rank,array[0],array[1],array[2],array[3],array[4]);
    printf("Last 5 Array els for rank %d: %d %d %d %d %d\n",rank,array[chunk_size-5],array[chunk_size-4],array[chunk_size-3],array[chunk_size-2],array[chunk_size-1]);
    printf("The Max for rank %d is: %d\n",rank,mymax);

    /*Send local_max back to master */
    if (rank == 0) {
        max = mymax; //Store rank 0 local maximum
        for(i=1;i<numprocs;i++) {
            MPI_Recv(&mymax,1, MPI_INT, MPI_ANY_SOURCE, 1, MPI_COMM_WORLD,&status);
            if(max<mymax) max = mymax;
        }
        printf("The Max is: %d",max);
    }
    else {
        MPI_Send(&mymax, 1, MPI_INT, 0,1,MPI_COMM_WORLD);
    }
    MPI_Finalize();
    std::cin.ignore();
    return 0;
}

I'm compiling the program with Visual Studio, which is why I include iostream (so I can use cin.ignore; otherwise my console window vanishes into thin air, even if I set it to stay on screen from Visual Studio). With this setup I can run on at most 5 processes:

-np 5 "$(TargetPath)"

With more than 5 it fails. If I lower the size from 15000 to 500, I can use 16 processes with -np 16 "$(TargetPath)". Does anyone suspect why? Any suggestion is welcome.

Matt
  • I suspect you're running out of RAM. My suggestion: put the array in named/shared memory before handing it off to the other CPUs, and pass the name as a shared resource to each execution instance. Also, if each instance is actually a single instance of the code and you're just running multiple simultaneous executions of the same process, i.e. one process that allocates the table and then starts all the other CPU processes, it could pass them the address of the table. – user3629249 Jul 09 '14 at 02:28

1 Answer


You're allocating the array on the stack. You need to allocate it on the heap. See this question and answer for a great overview of the stack and the heap.

You need to use malloc and free in C, or new and delete in C++.
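For example, here is a minimal sketch of that change in the context of your program. Only the allocation and cleanup are shown; the distribution and local-max logic stay exactly as in your listing, and the (int*) cast is there because you compile the file as C++:

#include "stdlib.h"
#include "mpi.h"
#define size 15000

int main(int argc, char *argv[]) {
    int numprocs, rank;
    int *array;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    /* Allocate the array on the heap instead of the stack */
    array = (int*)malloc(size * sizeof(int));
    if (array == NULL) {
        MPI_Abort(MPI_COMM_WORLD, 1);   /* bail out if the allocation fails */
    }

    /* ... same MPI_Send/MPI_Recv distribution and local-max code as before,
       using array[] exactly as in the original listing ... */

    free(array);    /* release the buffer before finalizing */
    MPI_Finalize();
    return 0;
}

If you prefer the C++ route, declaring the buffer as std::vector<int> array(size); also puts the data on the heap and releases it automatically when it goes out of scope.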

pyCthon
  • So it should be something like this: int * array = malloc(size * sizeof(int)); ? Because I'm getting an error about a void* that can't be assigned to an entity of type int*. (I'm not that good with pointers.) – Matt Jul 08 '14 at 04:24
  • Solved it: int * array = (int*)malloc(size * sizeof(int)); I forgot the cast. Many thanks. – Matt Jul 08 '14 at 04:33