The ROme OpTimistic Simulator  3.0.0
A General-Purpose Multithreaded Parallel/Distributed Simulation Platform
mpi.h File Reference

MPI Support Module.

#include <lp/msg.h>


Enumerations

enum  msg_ctrl_tag { MSG_CTRL_GVT_START = 1, MSG_CTRL_GVT_DONE, MSG_CTRL_TERMINATION }
 A control message MPI tag value.
 

Functions

void mpi_global_init (int *argc_p, char ***argv_p)
 Initializes the MPI environment.
 
void mpi_global_fini (void)
 Finalizes the MPI environment.
 
void mpi_remote_msg_send (struct lp_msg *msg, nid_t dest_nid)
 Sends a model message to an LP residing on another node.
 
void mpi_remote_anti_msg_send (struct lp_msg *msg, nid_t dest_nid)
 Sends a model anti-message to an LP residing on another node.
 
void mpi_control_msg_broadcast (enum msg_ctrl_tag ctrl)
 Sends a platform control message to all the other nodes.
 
void mpi_control_msg_send_to (enum msg_ctrl_tag ctrl, nid_t dest)
 Sends a platform control message to a specific node.
 
void mpi_remote_msg_handle (void)
 Empties the queue of incoming MPI messages, dispatching each of them to the appropriate handler.
 
void mpi_reduce_sum_scatter (const unsigned node_vals[n_nodes], unsigned *result)
 Computes the sum-reduction-scatter operation across all nodes.
 
bool mpi_reduce_sum_scatter_done (void)
 Checks if a previous mpi_reduce_sum_scatter() operation has completed.
 
void mpi_reduce_min (simtime_t *node_min_p)
 Computes the min-reduction operation across all nodes.
 
bool mpi_reduce_min_done (void)
 Checks if a previous mpi_reduce_min() operation has completed.
 
void mpi_node_barrier (void)
 A node barrier.
 
void mpi_blocking_data_send (const void *data, int data_size, nid_t dest)
 Sends a byte buffer to another node.
 
void * mpi_blocking_data_rcv (int *data_size_p, nid_t src)
 Receives a byte buffer from another node.
 

Detailed Description

MPI Support Module.

This module implements all basic MPI facilities to let the distributed execution of a simulation model take place consistently.

Some of these facilities are thread-safe while others are not: when relying on this module, check carefully which ones worker threads can use without coordination.

Definition in file mpi.h.

Enumeration Type Documentation

◆ msg_ctrl_tag

A control message MPI tag value.

Enumerator
MSG_CTRL_GVT_START 

Used by the master to start a new GVT reduction operation.

MSG_CTRL_GVT_DONE 

Used by slaves to signal their completion of the GVT protocol.

MSG_CTRL_TERMINATION 

Used in broadcast to signal that local LPs can terminate.

Definition at line 23 of file mpi.h.
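
For reference, a sketch of the declaration as it plausibly appears in mpi.h, reconstructed from the synopsis and enumerator descriptions above:

/// A control message MPI tag value
enum msg_ctrl_tag {
	/// Used by the master to start a new GVT reduction operation
	MSG_CTRL_GVT_START = 1,
	/// Used by slaves to signal their completion of the GVT protocol
	MSG_CTRL_GVT_DONE,
	/// Used in broadcast to signal that local LPs can terminate
	MSG_CTRL_TERMINATION
};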

Function Documentation

◆ mpi_blocking_data_rcv()

void* mpi_blocking_data_rcv ( int *  data_size_p,
nid_t  src 
)

Receives a byte buffer from another node.

Parameters
data_size_p  where to write the size of the received data
src  the id of the sender node
Returns
the buffer allocated with mm_alloc() containing the received data

This operation blocks the execution flow until the sender node actually sends the data with mpi_blocking_data_send().

Definition at line 397 of file mpi.c.

◆ mpi_blocking_data_send()

void mpi_blocking_data_send ( const void *  data,
int  data_size,
nid_t  dest 
)

Sends a byte buffer to another node.

Parameters
data  a pointer to the buffer to send
data_size  the buffer size
dest  the id of the destination node

This operation blocks the execution flow until the destination node receives the data with mpi_blocking_data_rcv().

Definition at line 381 of file mpi.c.
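
A minimal usage sketch for the blocking pair. The node ids and the payload are illustrative, and mm_free() is assumed to be the deallocation counterpart of the mm_alloc() call mentioned above:

/* Sender side, e.g. on node 0: push a small configuration blob to node 1. */
const char cfg[] = "seed=42";
mpi_blocking_data_send(cfg, sizeof(cfg), 1);

/* Receiver side, on node 1: block until the buffer from node 0 arrives. */
int data_size;
char *buf = mpi_blocking_data_rcv(&data_size, 0);
/* ... use the data_size bytes in buf ... */
mm_free(buf); /* assumption: mm_free() releases mm_alloc()ed memory */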

◆ mpi_control_msg_broadcast()

void mpi_control_msg_broadcast ( enum msg_ctrl_tag  ctrl)

Sends a platform control message to all the other nodes.

Parameters
ctrl  the control message to send

Definition at line 180 of file mpi.c.

◆ mpi_control_msg_send_to()

void mpi_control_msg_send_to ( enum msg_ctrl_tag  ctrl,
nid_t  dest 
)

Sends a platform control message to a specific node.

Parameters
ctrl  the control message to send
dest  the id of the destination node

Definition at line 199 of file mpi.c.
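
A short usage sketch for the two control-message primitives; treating node 0 as the master is an assumption of this example, not something this header mandates:

/* Master node: start a new GVT reduction on every other node. */
mpi_control_msg_broadcast(MSG_CTRL_GVT_START);

/* Slave node: report GVT completion to the master only. */
mpi_control_msg_send_to(MSG_CTRL_GVT_DONE, 0);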

◆ mpi_global_init()

void mpi_global_init ( int *  argc_p,
char ***  argv_p 
)

Initializes the MPI environment.

Parameters
argc_p  a pointer to the OS supplied argc
argv_p  a pointer to the OS supplied argv

Definition at line 77 of file mpi.c.

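A plausible lifecycle sketch; the simulation body is a placeholder, and the placement of mpi_node_barrier() before shutdown is illustrative:

int main(int argc, char **argv)
{
	mpi_global_init(&argc, &argv); /* bring up MPI before any other facility */
	/* ... set up LPs and run the simulation loop ... */
	mpi_node_barrier();            /* align all nodes before tearing down */
	mpi_global_fini();             /* last MPI-related call in the program */
	return 0;
}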

◆ mpi_reduce_min()

void mpi_reduce_min ( simtime_t *  node_min_p)

Computes the min-reduction operation across all nodes.

Parameters
node_min_p  a pointer to the value from the calling node which will also be used to store the computed minimum

Each node supplies a single simtime_t value. The minimum of all these values is computed and stored back in the location pointed to by node_min_p. It is expected that only a single thread per node calls this function at a time, and each node has to call it, otherwise the result can't be computed. Only a single mpi_reduce_min() operation may be pending at a time, and node_min_p must point to a valid memory region until mpi_reduce_min_done() returns true.

Definition at line 339 of file mpi.c.

◆ mpi_reduce_min_done()

bool mpi_reduce_min_done ( void  )

Checks if a previous mpi_reduce_min() operation has completed.

Returns
true if the previous operation has been completed, false otherwise.

Definition at line 353 of file mpi.c.
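
A sketch of the intended asynchronous pattern, combining the two calls above. Polling mpi_remote_msg_handle() while waiting is an assumption about how progress is made, not a guarantee stated by this header:

simtime_t node_min = local_gvt_candidate(); /* hypothetical local value */
mpi_reduce_min(&node_min);        /* start the asynchronous min-reduction */
while (!mpi_reduce_min_done())
	mpi_remote_msg_handle();  /* keep servicing MPI traffic meanwhile */
/* node_min now holds the minimum across all nodes */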

◆ mpi_reduce_sum_scatter()

void mpi_reduce_sum_scatter ( const unsigned  node_vals[n_nodes],
unsigned *  result 
)

Computes the sum-reduction-scatter operation across all nodes.

Parameters
node_vals  the addend vector from the calling node
result  a pointer to where the nid-th component of the sum will be stored

Each node supplies a vector of n_nodes components. The element-wise sum of all these vectors is computed, and the nid-th component of the resulting vector is stored in result. It is expected that only a single thread per node calls this function at a time, and each node has to call it, otherwise the result can't be computed. Only a single mpi_reduce_sum_scatter() operation may be pending at a time, and both arguments must point to valid memory regions until mpi_reduce_sum_scatter_done() returns true.

Definition at line 303 of file mpi.c.

◆ mpi_reduce_sum_scatter_done()

bool mpi_reduce_sum_scatter_done ( void  )

Checks if a previous mpi_reduce_sum_scatter() operation has completed.

Returns
true if the previous operation has been completed, false otherwise.

Definition at line 315 of file mpi.c.
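
The same asynchronous pattern applies here; the contents of the addend vector and the polling loop are illustrative:

unsigned node_vals[n_nodes];      /* n_nodes as in the prototype above */
unsigned result;
/* ... fill node_vals, e.g. with per-destination-node message counts ... */
mpi_reduce_sum_scatter(node_vals, &result);
while (!mpi_reduce_sum_scatter_done())
	mpi_remote_msg_handle();  /* assumption: polling drives progress */
/* result == sum over all nodes of their node_vals[nid] component */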

◆ mpi_remote_anti_msg_send()

void mpi_remote_anti_msg_send ( struct lp_msg *  msg,
nid_t  dest_nid 
)

Sends a model anti-message to an LP residing on another node.

Parameters
msg  the message to roll back
dest_nid  the id of the node where the targeted LP resides

This function also calls the relevant handlers in order to keep, for example, the non-blocking GVT algorithm running. Note that when this function returns, the anti-message may not have been sent yet. There is no need to actively check for send completion: during fossil collection, the platform leverages the GVT to make sure the message has indeed been sent and processed before freeing it.

Definition at line 163 of file mpi.c.

◆ mpi_remote_msg_handle()

void mpi_remote_msg_handle ( void  )

Empties the queue of incoming MPI messages, dispatching each of them to the appropriate handler.

This routine uses the MPI probing mechanism to check for new remote messages and handles them accordingly: control messages are dispatched to the respective platform handler, simulation messages are unpacked and inserted into the incoming queue, and anti-messages are matched and processed by the message map.

Definition at line 218 of file mpi.c.
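
A sketch of where this call typically sits: a processing loop that drains incoming MPI traffic before handling local work. The loop structure and the helper names are illustrative, not prescribed by this header:

while (!termination_detected()) {   /* hypothetical termination check */
	mpi_remote_msg_handle();    /* drain control, model and anti-messages */
	process_next_local_event(); /* hypothetical local scheduling step */
}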

◆ mpi_remote_msg_send()

void mpi_remote_msg_send ( struct lp_msg *  msg,
nid_t  dest_nid 
)

Sends a model message to an LP residing on another node.

Parameters
msg  the message to send
dest_nid  the id of the node where the targeted LP resides

This function also calls the relevant handlers in order to keep, for example, the non-blocking GVT algorithm running. Note that when this function returns, the message may not have been actually sent yet. There is no need to actively check for send completion: during fossil collection, the platform leverages the GVT to make sure the message has indeed been sent and processed before freeing it.

Definition at line 138 of file mpi.c.
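
A sketch of the send path; the message construction and the node lookup are hypothetical helpers, and nid is assumed to hold the calling node's id:

struct lp_msg *msg = pack_model_msg(dest_lp_id, send_time); /* hypothetical */
nid_t dest_nid = lp_id_to_nid(dest_lp_id);                  /* hypothetical */
if (dest_nid != nid)
	mpi_remote_msg_send(msg, dest_nid);
/* No wait needed: fossil collection uses the GVT to ensure the message
   was sent and processed before freeing it (see the description above). */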