The use of MPI-2 parallel IO

Source: Internet
Author: User


An MPI program of mine needed parallel IO to operate on files, but searching Baidu turned up very little on how to use the parallel IO functions. I finally found some useful papers on CNKI, and after reading them things became much clearer.

MPI-1 performs file operations through the IO facilities of the host language bindings, usually in a serial read/write pattern: one master process opens the file and reads the data, then distributes it to the other processes for processing. This serial IO involves a large amount of communication and is inefficient. MPI-2 introduces parallel IO, which allows multiple processes to operate on a file at the same time and thus avoids shuttling file data between processes; for programs with intensive file operations this is a great benefit!
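
For contrast, here is a minimal sketch of the serial-IO pattern just described: only rank 0 touches the file, then broadcasts and scatters the data. It assumes the same binary layout used later in this article (a two-int header followed by n*m floats) and, purely to keep the sketch short, that the number of rows divides evenly among the processes; it is an illustration of the pattern, not a drop-in implementation.

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

/* Serial-IO pattern: only rank 0 reads the file, then distributes the data. */
int main(int argc, char *argv[])
{
    int rank, size, n = 0, m = 0;
    float *full = NULL, *part;
    FILE *fp;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                       /* master process reads the whole file */
        fp = fopen("data", "rb");
        fread(&n, sizeof(int), 1, fp);
        fread(&m, sizeof(int), 1, fp);
        full = (float *) malloc(n * m * sizeof(float));
        fread(full, sizeof(float), n * m, fp);
        fclose(fp);
    }

    /* the header must be communicated before the other processes can size their buffers */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Bcast(&m, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* assumes n is divisible by size, just to keep the sketch short */
    part = (float *) malloc((n / size) * m * sizeof(float));
    MPI_Scatter(full, (n / size) * m, MPI_FLOAT,
                part,  (n / size) * m, MPI_FLOAT, 0, MPI_COMM_WORLD);

    free(part);
    if (rank == 0) free(full);
    MPI_Finalize();
    return 0;
}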

Parallel IO can be performed in three ways: with explicit offsets, with individual file pointers, or with a shared file pointer. Each of these can be used in either blocking or non-blocking form.
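
The examples in this article use only the blocking calls. As a rough illustration of the non-blocking variant, here is a minimal sketch of an explicit-offset read using MPI_File_iread_at followed by MPI_Wait; the file name "data" and the two-int header match the layout used below, and error handling is omitted.

#include <stdio.h>
#include "mpi.h"

/* Non-blocking explicit-offset read: start the IO, overlap other work, then wait. */
int main(int argc, char *argv[])
{
    int header[2];                 /* number of rows and number of columns */
    MPI_File fh;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_WORLD, "data", MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

    /* start reading the two-int header at offset 0 without blocking */
    MPI_File_iread_at(fh, 0, header, 2, MPI_INT, &req);

    /* ... other computation could overlap with the IO here ... */

    MPI_Wait(&req, &status);       /* the buffer is only valid after the wait completes */
    printf("rows = %d, columns = %d\n", header[0], header[1]);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}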

The following uses reading and writing a binary array as an example to illustrate the function calls for these three methods. The three main functions below correspond to the three methods; each reads an array from the binary file "data" (file content: number of rows, number of columns, array elements). After the data has been read, process 0 prints what it read to verify that the read was correct. Finally, every process writes the data it read into a binary file named "data2". To inspect the content of a binary file and verify that the read/write operations are correct, the binary file can be converted to a readable text file (and back); simple conversion programs are given after the MPI code.

  

Code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "mpi.h"

#define BLOCK_LOW(rank, size, n)  ((rank) * (n) / (size))
#define BLOCK_HIGH(rank, size, n) (BLOCK_LOW((rank) + 1, size, n) - 1)
#define BLOCK_SIZE(rank, size, n) (BLOCK_HIGH(rank, size, n) - BLOCK_LOW(rank, size, n) + 1)

/* Parallel IO: explicit-offset file operations */
int main(int argc, char *argv[])
{
    int size, rank, i;
    int n, m;
    float *array;
    MPI_File fh;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_File_open(MPI_COMM_WORLD, "data", MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
    MPI_File_read_at_all(fh, 0, &n, 1, MPI_INT, &status);            /* read at offset 0 */
    MPI_File_read_at_all(fh, sizeof(int), &m, 1, MPI_INT, &status);  /* read at an offset of one int */
    array = (float *) malloc(BLOCK_SIZE(rank, size, n) * m * sizeof(float));
    MPI_File_read_at_all(fh, 2 * sizeof(int) + BLOCK_LOW(rank, size, n) * m * sizeof(float),
                         array, BLOCK_SIZE(rank, size, n) * m, MPI_FLOAT, &status);
    MPI_File_close(&fh);

    if (rank == 0) {
        printf("rank = %d: %d %d\n", rank, n, m);
        for (i = 0; i < BLOCK_SIZE(rank, size, n) * m; i++) {
            printf("%.0f ", array[i]);
            if ((i + 1) % m == 0) putchar('\n');
        }
    }

    MPI_File_open(MPI_COMM_WORLD, "data2", MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, 0, &n, 1, MPI_INT, &status);
    MPI_File_write_at_all(fh, sizeof(int), &m, 1, MPI_INT, &status);
    MPI_File_write_at_all(fh, 2 * sizeof(int) + BLOCK_LOW(rank, size, n) * m * sizeof(float),
                          array, BLOCK_SIZE(rank, size, n) * m, MPI_FLOAT, &status);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}

/*
// Parallel IO: individual (independent) file pointers
int main(int argc, char *argv[])
{
    int size, rank, i;
    int n, m;
    float *array;
    MPI_File fh;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_File_open(MPI_COMM_WORLD, "data", MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, 0, MPI_INT, MPI_INT, "internal", MPI_INFO_NULL);  // set the view at displacement 0
    MPI_File_read_all(fh, &n, 1, MPI_INT, &status);   // the individual file pointer advances automatically after each read
    MPI_File_read_all(fh, &m, 1, MPI_INT, &status);
    array = (float *) malloc(BLOCK_SIZE(rank, size, n) * m * sizeof(float));
    MPI_File_set_view(fh, 2 * sizeof(int) + BLOCK_LOW(rank, size, n) * m * sizeof(float),
                      MPI_FLOAT, MPI_FLOAT, "internal", MPI_INFO_NULL);     // reset the displacement
    MPI_File_read_all(fh, array, BLOCK_SIZE(rank, size, n) * m, MPI_FLOAT, &status);
    MPI_File_close(&fh);

    if (rank == 0) {
        printf("rank = %d: %d %d\n", rank, n, m);
        for (i = 0; i < BLOCK_SIZE(rank, size, n) * m; i++) {
            printf("%.0f ", array[i]);
            if ((i + 1) % m == 0) putchar('\n');
        }
    }

    MPI_File_open(MPI_COMM_WORLD, "data2", MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_set_view(fh, 0, MPI_INT, MPI_INT, "internal", MPI_INFO_NULL);
    MPI_File_write_all(fh, &n, 1, MPI_INT, &status);
    MPI_File_write_all(fh, &m, 1, MPI_INT, &status);
    MPI_File_set_view(fh, 2 * sizeof(int) + BLOCK_LOW(rank, size, n) * m * sizeof(float),
                      MPI_FLOAT, MPI_FLOAT, "internal", MPI_INFO_NULL);
    MPI_File_write_all(fh, array, BLOCK_SIZE(rank, size, n) * m, MPI_FLOAT, &status);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}
*/

/*
// Parallel IO: shared file pointer
int main(int argc, char *argv[])
{
    int size, rank, i;
    int n, m;
    float *array;
    MPI_File fh;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_File_open(MPI_COMM_WORLD, "data", MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);
    MPI_File_read_at_all(fh, 0, &n, 1, MPI_INT, &status);            // header read with explicit offsets
    MPI_File_read_at_all(fh, sizeof(int), &m, 1, MPI_INT, &status);
    array = (float *) malloc(BLOCK_SIZE(rank, size, n) * m * sizeof(float));
    MPI_File_seek_shared(fh, 2 * sizeof(int), MPI_SEEK_SET);         // shared file pointer: skip the two-int header
    MPI_File_read_ordered(fh, array, BLOCK_SIZE(rank, size, n) * m, MPI_FLOAT, &status);   // ordered read
    MPI_File_close(&fh);

    if (rank == 0) {
        printf("rank = %d: %d %d\n", rank, n, m);
        for (i = 0; i < BLOCK_SIZE(rank, size, n) * m; i++) {
            printf("%.0f ", array[i]);
            if ((i + 1) % m == 0) putchar('\n');
        }
    }

    MPI_File_open(MPI_COMM_WORLD, "data2", MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, 0, &n, 1, MPI_INT, &status);           // header written with explicit offsets
    MPI_File_write_at_all(fh, sizeof(int), &m, 1, MPI_INT, &status);
    MPI_File_seek_shared(fh, 2 * sizeof(int), MPI_SEEK_SET);         // shared file pointer: skip the two-int header
    MPI_File_write_ordered(fh, array, BLOCK_SIZE(rank, size, n) * m, MPI_FLOAT, &status);  // ordered write
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}
*/

 

// Convert a binary array file to a text file.
// Expected binary file content: number of rows, number of columns, array elements.
#include <stdio.h>
#include <stdlib.h>

typedef float type;

int main()
{
    int i, j;
    int n, m;
    type **array;
    FILE *fp;
    fp = fopen("data", "rb");
    fread(&n, sizeof(int), 1, fp);
    fread(&m, sizeof(int), 1, fp);

    array = (type **) malloc(n * sizeof(type *));
    *array = (type *) malloc(n * m * sizeof(type));
    for (i = 1; i < n; i++) array[i] = array[i - 1] + m;

    fread(&array[0][0], n * m * sizeof(type), 1, fp);   // note: the address must not be just "array"
    fclose(fp);

    fp = fopen("data.txt", "w");
    fprintf(fp, "%d %d\n", n, m);
    for (i = 0; i < n; i++) {
        for (j = 0; j < m; j++)
            fprintf(fp, "%f ", array[i][j]);
        putc('\n', fp);
    }
    fclose(fp);
    fprintf(stdout, "Successfully!\n");
    return 0;
}

 

// Convert an array text file to a binary file.
// Expected text file content: number of rows, number of columns, array elements.
#include <stdio.h>
#include <stdlib.h>

typedef float type;

int main()
{
    int i, j;
    int n, m;
    type **array;
    FILE *fp;
    fp = fopen("data.txt", "r");
    fscanf(fp, "%d", &n);
    fscanf(fp, "%d", &m);

    array = (type **) malloc(n * sizeof(type *));
    *array = (type *) malloc(n * m * sizeof(type));
    for (i = 1; i < n; i++) array[i] = array[i - 1] + m;

    for (i = 0; i < n; i++)
        for (j = 0; j < m; j++)
            fscanf(fp, "%f", &array[i][j]);
    fclose(fp);

    fp = fopen("data", "wb");
    fwrite(&n, sizeof(int), 1, fp);
    fwrite(&m, sizeof(int), 1, fp);
    fwrite(&array[0][0], n * m * sizeof(type), 1, fp);   // note: the address must not be &array
    fclose(fp);
    fprintf(stdout, "Successfully!\n");
    return 0;
}

 

An example data file, data.txt:

8 7
1.000000 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000
2.000000 2.000000 2.000000 2.000000 2.000000 2.000000 2.000000
3.000000 3.000000 3.000000 3.000000 3.000000 3.000000 3.000000
4.000000 4.000000 4.000000 4.000000 4.000000 4.000000 4.000000
5.000000 5.000000 5.000000 5.000000 5.000000 5.000000 5.000000
6.000000 6.000000 6.000000 6.000000 6.000000 6.000000 6.000000
7.000000 7.000000 7.000000 7.000000 7.000000 7.000000 7.000000
8.000000 8.000000 8.000000 8.000000 8.000000 8.000000 8.000000
