Questions tagged [mpi-io]

MPI-IO provides a high-performance, portable, parallel I/O interface to high-performance, portable, parallel MPI programs

The purpose of MPI-IO is to provide a high-performance, portable, parallel I/O interface for high-performance, portable, parallel MPI programs. Parallel I/O has never been an everyday commodity. Although some supercomputer systems in the past offered parallel disk subsystems (the Connection Machine CM-5 had the Scalable Disk Array (SDA), the Connection Machine CM-2 had the DataVault, and the IBM SP had PIOFS, succeeded today by GPFS), communication with those peripherals was architecture- and operating-system-dependent.

See the MPI-IO site for more.

64 questions
5 votes, 1 answer

Write several distributed arrays with MPI IO

I am rewriting a numerical simulation code that is parallelized using MPI in one direction. So far, the arrays containing the data were saved by the master MPI process, which implied transferring the data from all MPI processes to one and allocate…
4 votes, 1 answer

MPI-IO: write subarray

I am starting to use MPI-IO and tried to write a very simple example of the things I'd like to do with it; however, even though it is a simple code and I took some inspiration from examples I read here and there, I get a segmentation fault I do not…
MBR
3 votes, 3 answers

Is it possible to write with several processors to the same file, at the end of the file, in an ordered way?

I have 2 processors (as an example), and I want both of them to write to a file. I want them to write at the end of the file, but not in an interleaved pattern like this: [file content] proc0 proc1 proc0 proc1 proc0 proc1 (and so on..) I'd like…
Nepho
2 votes, 1 answer

Efficiency of Fortran stream access vs. MPI-IO

I have a parallel section of the code where I write out n large arrays (representing a numerical mesh) in blocks that are later read back in blocks of different sizes. To do this I used stream access so that each processor writes its block independently, but…
Carlos
2 votes, 2 answers

MPI-IO write to file in a non-contiguous pattern

I am having trouble in writing a parallel MPI I/O program that will write in a particular pattern. I was able to have process 0 write integers 0-9, process 1 write integers 10-19, process 2 write integers 20-29, etc. proc 0: [0, 1, 2, 3, 4, 5, 6, 7,…
Aeternus
2 votes, 1 answer

Parallel export of ASCII file on distributed file system

I need to export an ASCII file on a distributed file system. Currently I open file streams to the same file in append mode on each node, then export all data sequentially, node by node. Will this solution work correctly on distributed file systems or is…
2 votes, 1 answer

How to create an mpi_type_indexed with an unordered array of displacements

I have some data to write in specific position in a file. Each position is given to me in an array. At the moment I write them by writing each variable at the specific position with mpi_file_write_at. Positions are neither contiguous nor they are…
Ray
2 votes, 0 answers

Optimize writing to shared file with MPI

In my MPI program, I need to write the results of some computation to a single (shared) file, where each MPI process writes its portion of the data at different offsets. Simple enough. I have implemented it like: offset = rank * sizeof(double) *…
user3452579
2 votes, 1 answer

MPI Distributed reading over a non-standard type

I am trying to read a binary file containing a sequence of char and double. (For example 0 0.125 1 1.4 0 2.3 1 4.5, but written in a binary file). I created a simple struct input, and also an MPI Datatype I will call mpi_input corresponding to this…
waffle
2 votes, 1 answer

Why do these two MPI-IO codes not work the same way?

I am learning MPI-IO and following a tutorial (PDF download here). For one exercise, the correct code is: Program MPI_IOTEST Use MPI Implicit None Integer :: wsize,wrank Integer :: ierror Integer :: fh,offset Call MPI_Init(ierror) Call…
2 votes, 1 answer

Using MPI-IO to write Fortran-formatted files

I am trying to save a solution using the OVERFLOW-PLOT3D q-file format (defined here: http://overflow.larc.nasa.gov/files/2014/06/Appendix_A.pdf). For a single grid, it is basically, READ(1) NGRID READ(1) JD,KD,LD,NQ,NQC READ(1)…
Touloudou
2 votes, 1 answer

MPI locking for sqlite (python)

I am using mpi4py for a project I want to parallelize. Below is very basic pseudo code for my program: Load list of data from sqlite database Based on COMM.Rank and Comm.Size, select chunk of data to process Process data... use MPI.Gather to pass…
Harrison
2 votes, 3 answers

MPI write to file sequentially

I am writing a parallel VTK file (pvti) from my fortran CFD solver. The file is really just a list of all the individual files for each piece of the data. Running MPI, if I have each process write the name of its individual file to standard…
weymouth
2 votes, 1 answer

How to use and interpret MPI-IO Error codes?

#include #include #include #include using namespace std; #define BUFSIZE 128 int main (int argc, char *argv[]) { int err; int rank; int size; double…
Max
2 votes, 1 answer

Segmentation fault while using MPI_File_open

I'm trying to read from a file for an MPI application. The cluster has 4 nodes with 12 cores in each node. I have tried running a basic program to compute rank and that works. When I added MPI_File_open it throws an exception at runtime BAD…
Apoorv
  • 85
  • 1
  • 7