Communication plan base class.
#include <Cabana_CommunicationPlan.hpp>
Public Member Functions
CommunicationPlan (MPI_Comm comm)
Constructor.
MPI_Comm | comm () const
Get the MPI communicator.
int | numNeighbor () const
Get the number of neighbor ranks that this rank will communicate with.
int | neighborRank (const int neighbor) const
Given a local neighbor id, get its rank in the MPI communicator.
std::size_t | numExport (const int neighbor) const
Get the number of elements this rank will export to a given neighbor.
std::size_t | totalNumExport () const
Get the total number of exports this rank will do.
std::size_t | numImport (const int neighbor) const
Get the number of elements this rank will import from a given neighbor.
std::size_t | totalNumImport () const
Get the total number of imports this rank will do.
std::size_t | exportSize () const
Get the number of export elements.
Kokkos::View< std::size_t *, memory_space > | getExportSteering () const
Get the steering vector for the exports.
template<class ExecutionSpace, class RankViewType>
Kokkos::View< size_type *, memory_space > | createWithTopology (ExecutionSpace exec_space, Export, const RankViewType &element_export_ranks, const std::vector< int > &neighbor_ranks)
Neighbor and export rank creator. Use this when you already know which ranks neighbor each other (i.e. every rank already knows whom it will be sending to and receiving from), as this is more efficient. In this case you already know the topology of the point-to-point communication but not how much data to send to and receive from each neighbor.
template<class RankViewType>
Kokkos::View< size_type *, memory_space > | createWithTopology (Export, const RankViewType &element_export_ranks, const std::vector< int > &neighbor_ranks)
Neighbor and export rank creator. Use this when you already know which ranks neighbor each other (i.e. every rank already knows whom it will be sending to and receiving from), as this is more efficient. In this case you already know the topology of the point-to-point communication but not how much data to send to and receive from each neighbor.
template<class ExecutionSpace, class RankViewType>
Kokkos::View< size_type *, memory_space > | createWithoutTopology (ExecutionSpace exec_space, Export, const RankViewType &element_export_ranks)
Export rank creator. Use this when you don't know whom you will be receiving from, only whom you are sending to. This is less efficient than if we already knew our neighbors because we must determine the topology of the point-to-point communication first.
template<class RankViewType>
Kokkos::View< size_type *, memory_space > | createWithoutTopology (Export, const RankViewType &element_export_ranks)
Export rank creator. Use this when you don't know whom you will be receiving from, only whom you are sending to. This is less efficient than if we already knew our neighbors because we must determine the topology of the point-to-point communication first.
template<class ExecutionSpace, class RankViewType, class IdViewType>
auto | createWithTopology (ExecutionSpace exec_space, Import, const RankViewType &element_import_ranks, const IdViewType &element_import_ids, const std::vector< int > &neighbor_ranks) -> std::tuple< Kokkos::View< typename RankViewType::size_type *, typename RankViewType::memory_space >, Kokkos::View< int *, typename RankViewType::memory_space >, Kokkos::View< int *, typename IdViewType::memory_space > >
Neighbor and import rank creator. Use this when you already know which ranks neighbor each other (i.e. every rank already knows whom it will be sending to and receiving from), as this is more efficient. In this case you already know the topology of the point-to-point communication but not how much data to send to and receive from each neighbor.
template<class RankViewType, class IdViewType>
auto | createWithTopology (Import, const RankViewType &element_import_ranks, const IdViewType &element_import_ids, const std::vector< int > &neighbor_ranks)
Neighbor and import rank creator. Use this when you already know which ranks neighbor each other (i.e. every rank already knows whom it will be sending to and receiving from), as this is more efficient. In this case you already know the topology of the point-to-point communication but not how much data to send to and receive from each neighbor.
template<class ExecutionSpace, class RankViewType, class IdViewType>
auto | createWithoutTopology (ExecutionSpace exec_space, Import, const RankViewType &element_import_ranks, const IdViewType &element_import_ids) -> std::tuple< Kokkos::View< typename RankViewType::size_type *, typename RankViewType::memory_space >, Kokkos::View< int *, typename RankViewType::memory_space >, Kokkos::View< int *, typename IdViewType::memory_space > >
Import rank creator. Use this when you don't know whom you will be sending to, only whom you are importing from. This is less efficient than if we already knew our neighbors because we must determine the topology of the point-to-point communication first.
template<class RankViewType, class IdViewType>
auto | createWithoutTopology (Import, const RankViewType &element_import_ranks, const IdViewType &element_import_ids)
Import rank creator. Use this when you don't know whom you will be sending to, only whom you are importing from. This is less efficient than if we already knew our neighbors because we must determine the topology of the point-to-point communication first.
template<class PackViewType, class RankViewType>
void | createExportSteering (const PackViewType &neighbor_ids, const RankViewType &element_export_ranks)
Create the export steering vector.
template<class PackViewType, class RankViewType, class IdViewType>
void | createExportSteering (const PackViewType &neighbor_ids, const RankViewType &element_export_ranks, const IdViewType &element_export_ids)
Create the export steering vector.
Communication plan base class.
DeviceType | Device type for which the data for this class will be allocated and where parallel execution will occur.
The communication plan computes how to redistribute elements in a parallel data structure using MPI. Given a list of data elements on the local MPI rank and their destination ranks, the communication plan computes which ranks each process will send to and receive from and how many elements it will send and receive. In addition, it provides an export steering vector that describes how to pack the local data to be exported into contiguous send buffers for each destination rank (in the forward communication plan).
Some nomenclature:
Export - elements we are sending in the forward communication plan.
Import - elements we are receiving in the forward communication plan.
Constructor.
comm | The MPI communicator over which the distributor is defined.
Create the export steering vector.
Creates an array describing which export element ids are moved to which location in the send buffer of the communication plan. Ordered such that if a rank sends to itself then those values come first.
neighbor_ids | The id of each element in the neighbor send buffers.
element_export_ranks | The ranks to which we are exporting each element. We use this to build the steering vector. The input is expected to be a Kokkos view or Cabana slice in the same memory space as the communication plan.
Create the export steering vector.
Creates an array describing which export element ids are moved to which location in the contiguous send buffer of the communication plan. Ordered such that if a rank sends to itself then those values come first.
neighbor_ids | The id of each element in the neighbor send buffers.
element_export_ranks | The ranks to which we are exporting each element. We use this to build the steering vector. The input is expected to be a Kokkos view or Cabana slice in the same memory space as the communication plan.
element_export_ids | The local ids of the elements to be exported. This corresponds with the export ranks vector and must be the same length if defined. The input is expected to be a Kokkos view or Cabana slice in the same memory space as the communication plan.
Export rank creator. Use this when you don't know whom you will be receiving from, only whom you are sending to. This is less efficient than if we already knew our neighbors because we must determine the topology of the point-to-point communication first.
exec_space | Kokkos execution space.
element_export_ranks | The destination rank in the target decomposition of each locally owned element in the source decomposition. Each element has one unique destination to which it will be exported. This export rank may be any one of the listed neighbor ranks, which can include the calling rank. An export rank of -1 signals that this element is not to be exported and will be ignored in the data migration. The input is expected to be a Kokkos view or Cabana slice in the same memory space as the communication plan.
Import rank creator. Use this when you don't know whom you will be sending to, only whom you are importing from. This is less efficient than if we already knew our neighbors because we must determine the topology of the point-to-point communication first.
exec_space | Kokkos execution space.
element_import_ranks | The source rank in the target decomposition of each remotely owned element in element_import_ids. This import rank may be any one of the listed neighbor ranks, which can include the calling rank. The input is expected to be a Kokkos view in the same memory space as the communication plan.
element_import_ids | The local IDs of remotely owned elements that are to be imported. These are local IDs on the remote rank. element_import_ids is mapped such that element_import_ids(i) lives on remote rank element_import_ranks(i).
Export rank creator. Use this when you don't know whom you will be receiving from, only whom you are sending to. This is less efficient than if we already knew our neighbors because we must determine the topology of the point-to-point communication first.
element_export_ranks | The destination rank in the target decomposition of each locally owned element in the source decomposition. Each element has one unique destination to which it will be exported. This export rank may be any one of the listed neighbor ranks, which can include the calling rank. An export rank of -1 signals that this element is not to be exported and will be ignored in the data migration. The input is expected to be a Kokkos view or Cabana slice in the same memory space as the communication plan.
Import rank creator. Use this when you don't know whom you will be sending to, only whom you are importing from. This is less efficient than if we already knew our neighbors because we must determine the topology of the point-to-point communication first.
element_import_ranks | The source rank in the target decomposition of each remotely owned element in element_import_ids. This import rank may be any one of the listed neighbor ranks, which can include the calling rank. The input is expected to be a Kokkos view in the same memory space as the communication plan.
element_import_ids | The local IDs of remotely owned elements that are to be imported. These are local IDs on the remote rank. element_import_ids is mapped such that element_import_ids(i) lives on remote rank element_import_ranks(i).
Neighbor and export rank creator. Use this when you already know which ranks neighbor each other (i.e. every rank already knows whom it will be sending to and receiving from), as this is more efficient. In this case you already know the topology of the point-to-point communication but not how much data to send to and receive from each neighbor.
exec_space | Kokkos execution space.
element_export_ranks | The destination rank in the target decomposition of each locally owned element in the source decomposition. Each element has one unique destination to which it will be exported. This export rank may be any one of the listed neighbor ranks, which can include the calling rank. An export rank of -1 signals that this element is not to be exported and will be ignored in the data migration. The input is expected to be a Kokkos view or Cabana slice in the same memory space as the communication plan.
neighbor_ranks | List of ranks this rank will send to and receive from. This list can include the calling rank. It is effectively a description of the topology of the point-to-point communication plan. Only the unique elements in this list are used.
Neighbor and import rank creator. Use this when you already know which ranks neighbor each other (i.e. every rank already knows whom it will be sending to and receiving from), as this is more efficient. In this case you already know the topology of the point-to-point communication but not how much data to send to and receive from each neighbor.
exec_space | Kokkos execution space.
element_import_ranks | The source rank in the target decomposition of each remotely owned element in element_import_ids. This import rank may be any one of the listed neighbor ranks, which can include the calling rank. The input is expected to be a Kokkos view in the same memory space as the communication plan.
element_import_ids | The local IDs of remotely owned elements that are to be imported. These are local IDs on the remote rank. element_import_ids is mapped such that element_import_ids(i) lives on remote rank element_import_ranks(i).
neighbor_ranks | List of ranks this rank will send to and receive from. This list can include the calling rank. It is effectively a description of the topology of the point-to-point communication plan. Only the unique elements in this list are used.
Neighbor and export rank creator. Use this when you already know which ranks neighbor each other (i.e. every rank already knows whom it will be sending to and receiving from), as this is more efficient. In this case you already know the topology of the point-to-point communication but not how much data to send to and receive from each neighbor.
element_export_ranks | The destination rank in the target decomposition of each locally owned element in the source decomposition. Each element has one unique destination to which it will be exported. This export rank may be any one of the listed neighbor ranks, which can include the calling rank. An export rank of -1 signals that this element is not to be exported and will be ignored in the data migration. The input is expected to be a Kokkos view or Cabana slice in the same memory space as the communication plan.
neighbor_ranks | List of ranks this rank will send to and receive from. This list can include the calling rank. It is effectively a description of the topology of the point-to-point communication plan. Only the unique elements in this list are used.
Neighbor and import rank creator. Use this when you already know which ranks neighbor each other (i.e. every rank already knows whom it will be sending to and receiving from), as this is more efficient. In this case you already know the topology of the point-to-point communication but not how much data to send to and receive from each neighbor.
element_import_ranks | The source rank in the target decomposition of each remotely owned element in element_import_ids. This import rank may be any one of the listed neighbor ranks, which can include the calling rank. The input is expected to be a Kokkos view in the same memory space as the communication plan.
element_import_ids | The local IDs of remotely owned elements that are to be imported. These are local IDs on the remote rank. element_import_ids is mapped such that element_import_ids(i) lives on remote rank element_import_ranks(i).
neighbor_ranks | List of ranks this rank will send to and receive from. This list can include the calling rank. It is effectively a description of the topology of the point-to-point communication plan. Only the unique elements in this list are used.
Get the number of export elements.
Whenever the communication plan is applied, this is the total number of elements expected as input on the sending ranks (in the forward communication plan). This will differ from the number returned by totalNumExport() if some of the export ranks used in the construction were -1 and therefore do not participate in an export operation.
Get the steering vector for the exports.
The steering vector places exports in contiguous chunks by destination rank. The chunks are in consecutive order based on the local neighbor id (i.e. all elements going to neighbor with local id 0 first, then all elements going to neighbor with local id 1, etc.).
Given a local neighbor id, get its rank in the MPI communicator.
neighbor | The local id of the neighbor to get the rank for.
Get the number of elements this rank will export to a given neighbor.
neighbor | The local id of the neighbor to get the number of exports for.
Get the number of elements this rank will import from a given neighbor.
neighbor | The local id of the neighbor to get the number of imports for.
Get the number of neighbor ranks that this rank will communicate with.
Get the total number of exports this rank will do.
Get the total number of imports this rank will do.