I'm currently working on some MPI code for a graph theory problem in which a number of nodes can each hold an answer and the length of that answer. To get everything back to the master node, I'm doing an MPI_Gather to collect the answers, and I'm trying to use the MPI_MINLOC operation in an MPI_Reduce to figure out which node holds the shortest solution. Right now the data type that stores the length and the node ID is defined (following the examples shown on numerous sites, such as http://www.open-mpi.org/doc/v1.4/man3/MPI_Reduce.3.php) as:
struct minType
{
    float len;
    int   index;
};
On each node I'm initializing the local copy of this struct in the following way:
int commRank;
MPI_Comm_rank (MPI_COMM_WORLD, &commRank);
minType solutionLen;
solutionLen.len = 1e37;
solutionLen.index = commRank;
At the end of the execution I have an MPI_Gather call that successfully pulls down all of the solutions (I've printed them out to verify them), followed by the call:
MPI_Reduce (&solutionLen, &solutionLen, 1, MPI_FLOAT_INT, MPI_MINLOC, 0, MPI_COMM_WORLD);
My understanding is that the arguments are supposed to be:

1. The data source
2. The target for the result (only meaningful on the designated root node)
3. The number of items sent by each node
4. The data type (MPI_FLOAT_INT appears to be defined, per the link above)
5. The operation (MPI_MINLOC also appears to be defined)
6. The ID of the root node in the specified communicator
7. The communicator to wait on.
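To make this concrete, here is roughly what the failing path boils down to as a self-contained program. The real code also does the MPI_Gather of the actual solutions, which I've left out here since that part works; the file and variable names beyond the ones shown above are just placeholders.

    // min_loc_repro.cpp -- stripped-down version of the failing path.
    #include <mpi.h>
    #include <cstdio>

    // Same length/ID pair as in the real code, matching MPI_FLOAT_INT.
    struct minType
    {
        float len;
        int   index;
    };

    int main (int argc, char **argv)
    {
        MPI_Init (&argc, &argv);

        int commRank;
        MPI_Comm_rank (MPI_COMM_WORLD, &commRank);

        // Every rank starts with a "no solution yet" length and its own ID.
        // (In the real code each rank overwrites .len with the length of the
        // solution it actually found.)
        minType solutionLen;
        solutionLen.len   = 1e37;
        solutionLen.index = commRank;

        // This is the call that aborts with MPI_ERR_ARG on rank 0.
        MPI_Reduce (&solutionLen, &solutionLen, 1, MPI_FLOAT_INT, MPI_MINLOC,
                    0, MPI_COMM_WORLD);

        if (commRank == 0)
            printf ("shortest solution: len = %f on rank %d\n",
                    solutionLen.len, solutionLen.index);

        MPI_Finalize ();
        return 0;
    }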
When my code hits the reduce operation, I get this error:
[compute-2-19.local:9754] *** An error occurred in MPI_Reduce
[compute-2-19.local:9754] *** on communicator MPI_COMM_WORLD
[compute-2-19.local:9754] *** MPI_ERR_ARG: invalid argument of some other kind
[compute-2-19.local:9754] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 9754 on node
compute-2-19.local exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in the job did.
   This can cause a job to hang indefinitely while it waits for all processes
   to call "init". By rule, if one process calls "init", then ALL processes
   must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize". By rule,
   all processes that call "init" MUST call "finalize" prior to exiting or it
   will be considered an "abnormal termination"

This may have caused other processes in the application to be terminated by
signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
I'll admit to being completely stumped by this. In case it matters, I'm compiling with OpenMPI 1.5.3 (built with gcc 4.4) on a Rocks cluster based on CentOS 5.5.
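For completeness, I'm building and launching the stripped-down version above in the usual way, roughly like this (the file name is just the placeholder from the sketch, and the process count varies):

    mpicxx min_loc_repro.cpp -o min_loc_repro
    mpirun -np 4 ./min_loc_repro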