I am working on a tool to simulate wave energy converters, for which I need to couple two software packages to each other. One program is written in Fortran, the other in C++. I need to send information from the Fortran program to the C++ program at each time step. However, the data first needs to be processed in Python before it is sent to the C++ program. I was given the tip to use MPI to transfer the data between the programs.
I am now trying to send a simple string from the Fortran code to Python, but the Python code gets stuck in the receive command.
My Fortran code looks like this:
USE GlobalVariables
USE MPI
IMPLICIT NONE
CHARACTER(LEN=10) :: astring
INTEGER :: comm, rank, size, mpierr

! Initialize MPI on first timestep
IF (tstep .LT. 2) THEN
   call MPI_INIT(mpierr)
ENDIF

! Make string to send to python
astring = "TEST"

! MPI Test
call MPI_Comm_size(MPI_COMM_WORLD, size, mpierr)
call MPI_Comm_rank(MPI_COMM_WORLD, rank, mpierr)

! Send message to python
CALL MPI_SEND(astring, len(astring), MPI_CHARACTER, 0, 22, MPI_COMM_WORLD, mpierr)
print *, 'MPI MESSAGE SENT ', mpierr

! Finalize MPI on last timestep
IF (tstep .EQ. Nsteps-1) THEN
   call MPI_FINALIZE(mpierr)
   print *, 'MPI FINALIZED!'
ENDIF
My Python code is the following:
from mpi4py import MPI
import numpy as np
import subprocess as sp
import os

# Start OW3D_SPH in the background and send MPI message
os.chdir('OW3D_run')
args = ['OceanWave3D_SPH', 'OW3D.inp']
pid = sp.Popen(args, shell=False)
os.chdir('..')

# Check if MPI is initialized
comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Receive message from fortran
test = comm.recv(source=0, tag=22)

# Let the program end
output = pid.communicate()

with open('test.txt', 'w') as f:
    f.write(test)
The Python code never gets past the MPI receive command and never finishes. The Fortran code completes and correctly prints the "MPI FINALIZED" message.
I don't see what I am doing wrong: the message is sent from process 0 to process 0, with tag 22, and MPI_COMM_WORLD is used in both codes.
If you want to launch both the Fortran program and the Python program in the same MPI job, you have to use something like:
mpiexec -n 1 fortran_program : -n 1 python main.py
The Fortran program will become MPI rank 0 and the Python program will be MPI rank 1. You can also launch more than one of each executable, for example:
mpiexec -n 2 fortran_program : -n 4 python main.py
Ranks 0 and 1 will come from the Fortran program, and ranks 2 to 5 from the Python one.
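The rank assignment of such an MPMD launch can be sketched in plain Python (a toy illustration of the numbering, not actual MPI code):

```python
# Toy illustration of how an MPMD launch such as
#   mpiexec -n 2 fortran_program : -n 4 python main.py
# assigns ranks in MPI_COMM_WORLD: the blocks are numbered left to right.
blocks = [("fortran_program", 2), ("python main.py", 4)]

layout = {}
next_rank = 0
for prog, nprocs in blocks:
    layout[prog] = list(range(next_rank, next_rank + nprocs))
    next_rank += nprocs

print(layout)
# {'fortran_program': [0, 1], 'python main.py': [2, 3, 4, 5]}
```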
Also note that comm.recv() and the other communication methods in mpi4py that start with a small letter (comm.send(), comm.irecv(), etc.) use Pickle under the hood and actually operate on serialised Python objects. This is not compatible with the array of characters that the Fortran code sends. You have to use the communication methods that start with a capital letter (comm.Send(), comm.Recv(), etc.), which operate on NumPy arrays and take explicit type information. Unfortunately, my Python-fu is weak and I cannot provide a full working example right now, but the MPI part should look something like this (untested code):
# Create an MPI status object
status = MPI.Status()
# Wait for a message without receiving it
comm.Probe(source=0, tag=22, status=status)
# Check the length of the message
nchars = status.Get_count(MPI.CHARACTER)
# Allocate a big enough array of single characters ('c' is one byte per element)
data = np.empty(nchars, dtype='c')
# Receive the message
comm.Recv([data, MPI.CHARACTER], source=0, tag=22)
# Construct somehow the string out of the individual chars in "data"
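For that last step, the individual characters can be joined back into a Python string. A minimal sketch, simulating the received buffer with a fixed byte string (CHARACTER(LEN=10) means Fortran sends all 10 characters, so the payload arrives blank-padded):

```python
import numpy as np

# Simulate the buffer that comm.Recv would have filled: Fortran sends all 10
# characters of CHARACTER(LEN=10), so "TEST" arrives padded with blanks.
data = np.frombuffer(b"TEST      ", dtype="c")

# Join the single-character elements and strip the Fortran blank padding.
message = data.tobytes().decode("ascii").rstrip()
print(message)  # TEST
```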
In the Fortran code you have to specify a destination rank of 1 (in the case where you are running one Fortran executable and one Python one).
You certainly cannot have both source 0 and destination 0 when the two are different programs. You say "from process 0 to process 0", but you clearly have two different processes! One of them has some other rank number, but you haven't shown your actual mpirun command, so it's hard to tell which one is which.
To clarify: MPI_COMM_WORLD is the communicator of all the processes launched in your mpirun or equivalent. You have to leave behind the simple mental picture in which the first Python process is rank 0, the first Fortran process is rank 0, the first C++ one is rank 0...
If you do
mpirun -n 1 python main.py : -n 1 ./fortran_main : -n 1 ./c++_main
then in MPI_COMM_WORLD the Python program will be rank 0, the Fortran process rank 1, and the C++ one rank 2. You can create communicators restricted to just the Python subset, or the Fortran subset, or the C++ one, and then you will get rank 0 in each of them, but that is a numbering in a different communicator, not in MPI_COMM_WORLD.
An MPI process can spawn other processes by using the function MPI_Comm_spawn(). In a Python program, this function is a method of the communicator: comm.Spawn(). See the mpi4py tutorial for an example. The spawned process runs an executable, which can be another Python program, a C/C++/Fortran program, or whatever you want. Then, an intercommunicator can be merged to define an intracommunicator between the master process and the spawned ones, as performed in mpi4py: Communicating between spawned processes. As a result, the master process and the spawned processes can communicate freely without any restriction.
Let's look at a Python/C example. The Python code spawns the process and receives a character:
from mpi4py import MPI
import sys
import numpy
'''
slavec is an executable built starting from slave.c
'''
# Spawning a process running an executable
# sub_comm is an MPI intercommunicator
sub_comm = MPI.COMM_SELF.Spawn('slavec', args=[], maxprocs=1)
# common_comm is an intracommunicator across the python process and the spawned process. All kinds of collective communication (Bcast...) are now possible between the python process and the c process
common_comm=sub_comm.Merge(False)
#print('parent in common_comm', common_comm.Get_rank(), 'of', common_comm.Get_size())
data = numpy.arange(1, dtype='int8')
common_comm.Recv([data, MPI.CHAR], source=1, tag=0)
print("Python received message from C:", data)
# disconnecting the shared communicators is required to finalize the spawned process.
common_comm.Disconnect()
sub_comm.Disconnect()
The C code, compiled with mpicc slave.c -o slavec -Wall, sends the character using the merged communicator:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc,char *argv[])
{
int rank,size;
MPI_Comm parentcomm,intracomm;
MPI_Init( &argc, &argv );
//MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_get_parent( &parentcomm );
if (parentcomm == MPI_COMM_NULL){fprintf(stderr,"module1 : i'm supposed to be the spawned process!");exit(1);}
MPI_Intercomm_merge(parentcomm,1,&intracomm);
MPI_Comm_size(intracomm, &size);
MPI_Comm_rank(intracomm, &rank);
//printf("child had rank %d in communicator of size %d\n",rank,size);
char s= 42;
printf("sending message %d from C\n",s);
MPI_Send(&s,1,MPI_CHAR,0,0,intracomm);
MPI_Comm_disconnect(&intracomm); //disconnect after all communications
MPI_Comm_disconnect(&parentcomm);
MPI_Finalize();
return 0;
}
Let's now receive a character from a C++ code and then send an integer to a Fortran program:
'''
slavecpp is an executable built starting from slave.cpp
'''
# Spawning a process running an executable
# sub_comm is an MPI intercommunicator
sub_comm = MPI.COMM_SELF.Spawn('slavecpp', args=[], maxprocs=1)
# common_comm is an intracommunicator across the python process and the spawned process. All kinds of collective communication (Bcast...) are now possible between the python process and the c++ process
common_comm=sub_comm.Merge(False)
#print('parent in common_comm', common_comm.Get_rank(), 'of', common_comm.Get_size())
data = numpy.arange(1, dtype='int8')
common_comm.Recv([data, MPI.CHAR], source=1, tag=0)
print("Python received message from C++:", data)
# disconnecting the shared communicators is required to finalize the spawned process.
common_comm.Disconnect()
sub_comm.Disconnect()
'''
slavef90 is an executable built starting from slave.f90
'''
# Spawning a process running an executable
# sub_comm is an MPI intercommunicator
sub_comm = MPI.COMM_SELF.Spawn('slavef90', args=[], maxprocs=1)
# common_comm is an intracommunicator across the python process and the spawned process. All kinds of collective communication (Bcast...) are now possible between the python process and the fortran process
common_comm=sub_comm.Merge(False)
#print('parent in common_comm', common_comm.Get_rank(), 'of', common_comm.Get_size())
data = numpy.arange(1, dtype='int32')
data[0]=42
print("Python sending message to fortran:", data)
common_comm.Send([data, MPI.INT], dest=1, tag=0)
print("Python over")
# disconnecting the shared communicators is required to finalize the spawned process.
common_comm.Disconnect()
sub_comm.Disconnect()
The C++ program, compiled with mpiCC slave.cpp -o slavecpp -Wall, is very close to the C one:
#include <mpi.h>
#include <iostream>
#include <cstdio>
#include <cstdlib>
using namespace std;
int main(int argc,char *argv[])
{
int rank,size;
MPI_Comm parentcomm,intracomm;
MPI_Init( &argc, &argv );
//MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_get_parent( &parentcomm );
if (parentcomm == MPI_COMM_NULL){fprintf(stderr,"module1 : i'm supposed to be the spawned process!");exit(1);}
MPI_Intercomm_merge(parentcomm,1,&intracomm);
MPI_Comm_size(intracomm, &size);
MPI_Comm_rank(intracomm, &rank);
//cout << "child had rank " << rank << " in communicator of size " << size << endl;
char s = 42;
cout << "sending message " << (int)s << " from C++" << endl;
MPI_Send(&s, 1, MPI_CHAR, 0, 0, intracomm);
MPI_Comm_disconnect(&intracomm); //disconnect after all communications
MPI_Comm_disconnect(&parentcomm);
MPI_Finalize();
return 0;
}
Finally, the Fortran program, compiled with mpif90 slave.f90 -o slavef90 -Wall, receives the integer:
program test
!
implicit none
!
include 'mpif.h'
!
integer :: ierr,s(1),stat(MPI_STATUS_SIZE)
integer :: parentcomm,intracomm
!
call MPI_INIT(ierr)
call MPI_COMM_GET_PARENT(parentcomm, ierr)
call MPI_INTERCOMM_MERGE(parentcomm, 1, intracomm, ierr)
call MPI_RECV(s, 1, MPI_INTEGER, 0, 0, intracomm,stat, ierr)
print*, 'fortran program received: ', s
call MPI_COMM_DISCONNECT(intracomm, ierr)
call MPI_COMM_DISCONNECT(parentcomm, ierr)
call MPI_FINALIZE(ierr)
endprogram test
With a little more work on the communicators, the "C++ process" could send the message directly to the "Fortran process" without involving the master process in the communication.
Lastly, mixing languages this way may look easy, but it might not be a good solution in the long run. Indeed, you may face issues related to performance, or maintaining the system may become difficult (three languages...). For the C++ part, Cython and F2PY can be valuable alternatives. After all, Python is a bit like glue...