Sending and receiving a 2D array over MPI in C++

Note: this page reproduces a popular StackOverflow question and its answers under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/5901476/


Sending and receiving 2D array over MPI

Tags: c++, multidimensional-array, 2d, mpi

Asked by Ashmohan

The issue I am trying to resolve is the following:

The C++ serial code I have computes across a large 2D matrix. To optimize this process, I wish to split this large 2D matrix and run on 4 nodes (say) using MPI. The only communication that occurs between nodes is the sharing of edge values at the end of each time step. Every node shares the edge array data, A[i][j], with its neighbor.

Based on my reading about MPI, this is the scheme I plan to implement:

if (myrank == 0)
{
    for (i = 0 to x)
        for (y = 0 to y)
        {
            C++ CODE IMPLEMENTATION
            ....
            MPI_SEND(A[x][0], A[x][1], A[x][2], Destination= 1.....)
            MPI_RECEIVE(B[0][0], B[0][1]......Sender = 1.....)
            MPI_BARRIER
        }
}

if (myrank == 1)
{
    for (i = x+1 to xx)
        for (y = 0 to y)
        {
            C++ CODE IMPLEMENTATION
            ....
            MPI_SEND(B[x][0], B[x][1], B[x][2], Destination= 0.....)
            MPI_RECEIVE(A[0][0], A[0][1]......Sender = 0.....)
            MPI_BARRIER
        }
}

I wanted to know if my approach is correct, and I would also appreciate any guidance on other MPI functions to look into for the implementation.

Thanks, Ashwin.

Answered by Jonathan Dursi

Just to amplify Joel's points a bit:

This goes much easier if you allocate your arrays so that they're contiguous (something C's "multidimensional arrays" don't give you automatically):

int **alloc_2d_int(int rows, int cols) {
    int *data = (int *)malloc(rows*cols*sizeof(int));   /* one contiguous block for all elements */
    int **array = (int **)malloc(rows*sizeof(int*));    /* row pointers into that block */
    for (int i=0; i<rows; i++)
        array[i] = &(data[cols*i]);

    return array;
}

/*...*/
int **A;
/*...*/
A = alloc_2d_int(N,M);

Then, you can do sends and receives of the entire NxM array with

MPI_Send(&(A[0][0]), N*M, MPI_INT, destination, tag, MPI_COMM_WORLD);
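
For completeness, a sketch of the matching receive on the other rank might look like the following; the source, tag, and status names here are assumptions that have to line up with whatever the sender used:

MPI_Status status;
/* the receive buffer must be allocated the same contiguous way */
MPI_Recv(&(A[0][0]), N*M, MPI_INT, source, tag, MPI_COMM_WORLD, &status);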

and when you're done, free the memory with

free(A[0]);   /* frees the contiguous data block */
free(A);      /* frees the row pointers */

Also, MPI_Recv is a blocking receive, and MPI_Send can be a blocking send. One thing that means, as per Joel's point, is that you definitely don't need barriers. Further, it means that if you have a send/receive pattern as above, you can get yourself into a deadlock situation -- everyone is sending, no one is receiving. Safer is:

if (myrank == 0) {
   MPI_Send(&(A[0][0]), N*M, MPI_INT, 1, tagA, MPI_COMM_WORLD);
   MPI_Recv(&(B[0][0]), N*M, MPI_INT, 1, tagB, MPI_COMM_WORLD, &status);
} else if (myrank == 1) {
   MPI_Recv(&(A[0][0]), N*M, MPI_INT, 0, tagA, MPI_COMM_WORLD, &status);
   MPI_Send(&(B[0][0]), N*M, MPI_INT, 0, tagB, MPI_COMM_WORLD);
}
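
With more than two ranks, the same idea generalizes by breaking the symmetry some other way; a common convention (an assumption here, not something from the original answer) is that even-numbered ranks send first and then receive, while odd-numbered ranks do the reverse.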

Another, more general, approach is to use MPI_Sendrecv:

int *sendptr, *recvptr;
int sendtag, recvtag;          /* tags must pair up: a message sent with tagA on
                                  one rank must be received with tagA on the other */
int neigh = MPI_PROC_NULL;

if (myrank == 0) {
   sendptr = &(A[0][0]);       /* rank 0 sends A and receives B */
   recvptr = &(B[0][0]);
   sendtag = tagA;
   recvtag = tagB;
   neigh = 1;
} else {
   sendptr = &(B[0][0]);       /* rank 1 sends B and receives A */
   recvptr = &(A[0][0]);
   sendtag = tagB;
   recvtag = tagA;
   neigh = 0;
}
MPI_Sendrecv(sendptr, N*M, MPI_INT, neigh, sendtag,
             recvptr, N*M, MPI_INT, neigh, recvtag,
             MPI_COMM_WORLD, &status);
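
As a side note, neigh is initialized to MPI_PROC_NULL: sends and receives involving MPI_PROC_NULL are defined to complete immediately without transferring anything, so the same MPI_Sendrecv call stays safe for ranks that have no partner (for instance, at the boundary of a decomposed domain).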

or nonblocking sends and/or receives, as sketched below.
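Here is a minimal nonblocking sketch, under the same assumptions as the snippets above (exactly two ranks, contiguous A and B of size NxM, and the tagA/tagB naming); posting the receive before the send and then waiting on both requests sidesteps the ordering problem entirely:

MPI_Request reqs[2];
int other = 1 - myrank;                        /* partner rank; assumes exactly two ranks */
int sendtag = (myrank == 0) ? tagA : tagB;     /* A travels under tagA, B under tagB */
int recvtag = (myrank == 0) ? tagB : tagA;
int *sendbuf = (myrank == 0) ? &(A[0][0]) : &(B[0][0]);
int *recvbuf = (myrank == 0) ? &(B[0][0]) : &(A[0][0]);

MPI_Irecv(recvbuf, N*M, MPI_INT, other, recvtag, MPI_COMM_WORLD, &reqs[0]);
MPI_Isend(sendbuf, N*M, MPI_INT, other, sendtag, MPI_COMM_WORLD, &reqs[1]);
MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);     /* both buffers are safe to touch again after this */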

Answered by Joel Falcou

First, you don't need that many barriers. Second, you should really send your data as a single block, since multiple blocking send/receive calls will result in poor performance.
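
To make the second point concrete, here is a sketch of the pattern to avoid versus the one to prefer, assuming the contiguous allocation and the N, M, destination, and tag names from the answer above:

/* Avoid: one message per row costs N message startups (latencies). */
for (int i = 0; i < N; i++)
    MPI_Send(&(A[i][0]), M, MPI_INT, destination, tag, MPI_COMM_WORLD);

/* Prefer: one message for the whole contiguous block. */
MPI_Send(&(A[0][0]), N*M, MPI_INT, destination, tag, MPI_COMM_WORLD);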