Cabana 0.8.0-dev
 
Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim > Class Template Reference

#include <Cabana_Grid_SparseDimPartitioner.hpp>

Inherits Cabana::Grid::BlockPartitioner< 3 >.

Classes

struct  SubWorkloadFunctor
 Functor to compute the sub-workload in a given region (from the prefix sum).
 

Public Types

using memory_space = typename MemorySpace::memory_space
 Memory space.
 
using execution_space = typename memory_space::execution_space
 Default execution space.
 
using workload_view = Kokkos::View<int***, memory_space>
 Workload device view.
 
using partition_view = Kokkos::View<int* [num_space_dim], memory_space>
 Partition device view.
 
using workload_view_host
 Workload host view.
 
using partition_view_host
 Partition host view.
 

Public Member Functions

 SparseDimPartitioner (MPI_Comm comm, float max_workload_coeff, int workload_num, int num_step_rebalance, const std::array< int, num_space_dim > &global_cells_per_dim, int max_optimize_iteration=10)
 Constructor - automatically compute ranks_per_dim from MPI communicator.
 
 SparseDimPartitioner (MPI_Comm comm, float max_workload_coeff, int workload_num, int num_step_rebalance, const std::array< int, num_space_dim > &ranks_per_dim, const std::array< int, num_space_dim > &global_cells_per_dim, int max_optimize_iteration=10)
 Constructor - user-defined ranks_per_dim.
 
std::array< int, num_space_dim > ranksPerDimension (MPI_Comm comm)
 Compute the number of MPI ranks in each dimension of the grid from the given MPI communicator.
 
std::array< int, num_space_dim > ranksPerDimension (MPI_Comm comm, const std::array< int, num_space_dim > &) const override
 Get the number of MPI ranks in each dimension of the grid from the given MPI communicator.
 
std::array< int, num_space_dim > ownedTilesPerDimension (MPI_Comm cart_comm) const
 Get the tile number in each dimension owned by the current MPI rank.
 
std::array< int, num_space_dim > ownedCellsPerDimension (MPI_Comm cart_comm) const
 Get the cell number in each dimension owned by the current MPI rank.
 
void ownedTileInfo (MPI_Comm cart_comm, std::array< int, num_space_dim > &owned_num_tile, std::array< int, num_space_dim > &global_tile_offset) const
 Get the owned number of tiles and the global tile offset of the current MPI rank.
 
void ownedCellInfo (MPI_Comm cart_comm, const std::array< int, num_space_dim > &, std::array< int, num_space_dim > &owned_num_cell, std::array< int, num_space_dim > &global_cell_offset) const override
 Get the owned number of cells and the global cell offset of the current MPI rank.
 
void initializeRecPartition (std::vector< int > &rec_partition_i, std::vector< int > &rec_partition_j, std::vector< int > &rec_partition_k)
 Initialize the tile partition; partition in each dimension has the form [0, p_1, ..., p_n, total_tile_num], so the partition would be [0, p_1), [p_1, p_2) ... [p_n, total_tile_num].
 
std::array< std::vector< int >, num_space_dim > getCurrentPartition ()
 Get the current partition. Copies the partition from the device view to a host std::array of std::vector.
 
void resetWorkload ()
 Set all elements of the _workload_per_tile and _workload_prefix_sum matrices to 0.
 
template<class ParticlePosViewType, typename ArrayType, typename CellUnit>
void computeLocalWorkLoad (const ParticlePosViewType &view, int particle_num, const ArrayType &global_lower_corner, const CellUnit dx)
 compute the workload in the current MPI rank from particle positions (each particle counts as 1 workload value)
 
template<class SparseMapType>
void computeLocalWorkLoad (const SparseMapType &sparseMap)
 compute the workload in the current MPI rank from sparseMap (the workload of a tile is 1 if the tile is occupied, 0 otherwise)
 
void computeFullPrefixSum (MPI_Comm comm)
 
 Reduce the total workload across all MPI ranks, then compute the workload prefix sum matrix for all MPI ranks.

 
template<class ParticlePosViewType, typename ArrayType, typename CellUnit>
int optimizePartition (const ParticlePosViewType &view, int particle_num, const ArrayType &global_lower_corner, const CellUnit dx, MPI_Comm comm)
 iteratively optimize the partition
 
template<class SparseMapType>
int optimizePartition (const SparseMapType &sparseMap, MPI_Comm comm)
 iteratively optimize the partition
 
void optimizePartition (bool &is_changed, int iter_seed)
 optimize the partition in three dimensions separately
 
int currentRankWorkload (MPI_Comm cart_comm)
 compute the total workload on the current MPI rank
 
template<typename PartitionViewHost, typename WorkloadViewHost>
int currentRankWorkload (MPI_Comm cart_comm, PartitionViewHost &rec_view, WorkloadViewHost &prefix_sum_view)
 compute the total workload on the current MPI rank
 
int averageRankWorkload ()
 compute the average workload on each MPI rank
 
template<typename WorkloadViewHost>
int averageRankWorkload (WorkloadViewHost &prefix_sum_view)
 compute the average workload on each MPI rank
 
float computeImbalanceFactor (MPI_Comm cart_comm)
 compute the imbalance factor for the current partition
 

Static Public Attributes

static constexpr std::size_t num_space_dim = NumSpaceDim
 dimension
 
static constexpr unsigned long long cell_bits_per_tile_dim
 Number of bits (per dimension) needed to index the cells inside a tile.
 
static constexpr unsigned long long cell_num_per_tile_dim
 Number of cells inside each tile (per dimension).
 
- Static Public Attributes inherited from Cabana::Grid::BlockPartitioner< 3 >
static constexpr std::size_t num_space_dim
 Spatial dimension.
 

Detailed Description

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
class Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >

Sparse mesh block partitioner. (Current version: supports 3D only.)

Template Parameters
MemorySpace: Kokkos memory space.
CellPerTileDim: Cells per tile per dimension.
NumSpaceDim: Dimension (the current version supports 3D only).
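
The sketch below is not taken from the Cabana examples; it illustrates one possible workflow under these assumptions: MPI and Kokkos are already initialized, the global grid is 128^3 cells with the default 4 cells per tile per dimension, and positions is a device View of particle positions with shape (particle_num, 3). The ordering (construct, initialize the partition, then optimize) follows the member descriptions on this page and should be checked against the Cabana unit tests.

#include <Cabana_Grid_SparseDimPartitioner.hpp>
#include <Kokkos_Core.hpp>
#include <mpi.h>

#include <array>
#include <vector>

void partition_sketch(
    MPI_Comm comm,
    const Kokkos::View<double* [3],
                       Kokkos::DefaultExecutionSpace::memory_space>& positions,
    int particle_num )
{
    using memory_space = Kokkos::DefaultExecutionSpace::memory_space;
    using partitioner_type =
        Cabana::Grid::SparseDimPartitioner<memory_space, 4>;

    // Global grid: 128 cells (32 tiles of 4 cells) in each dimension.
    std::array<int, 3> global_cells_per_dim = { 128, 128, 128 };

    // Illustrative tuning values for the constructor arguments.
    partitioner_type partitioner( comm,
                                  /* max_workload_coeff = */ 1.5f,
                                  /* workload_num = */ particle_num,
                                  /* num_step_rebalance = */ 100,
                                  global_cells_per_dim );

    // Even initial tile partition in each dimension, of the documented form
    // [0, p_1, ..., total_tile_num].
    auto ranks_per_dim = partitioner.ranksPerDimension( comm );
    std::array<std::vector<int>, 3> rec;
    for ( int d = 0; d < 3; ++d )
    {
        int total_tiles = global_cells_per_dim[d] / 4;
        rec[d].resize( ranks_per_dim[d] + 1 );
        for ( int r = 0; r <= ranks_per_dim[d]; ++r )
            rec[d][r] = r * total_tiles / ranks_per_dim[d];
    }
    partitioner.initializeRecPartition( rec[0], rec[1], rec[2] );

    // Optimize the partition from the particle positions.
    std::array<double, 3> global_lower_corner = { 0.0, 0.0, 0.0 };
    double dx = 1.0 / 128.0; // cell size (assumed unit domain)
    partitioner.optimizePartition( positions, particle_num,
                                   global_lower_corner, dx, comm );
}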

Member Typedef Documentation

◆ partition_view_host

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
using Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::partition_view_host
Initial value:
Kokkos::View<int* [num_space_dim],
typename execution_space::array_layout, Kokkos::HostSpace>

Partition host view.

◆ workload_view_host

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
using Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::workload_view_host
Initial value:
Kokkos::View<int***, typename execution_space::array_layout,
Kokkos::HostSpace>

Workload host view.

Constructor & Destructor Documentation

◆ SparseDimPartitioner() [1/2]

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::SparseDimPartitioner ( MPI_Comm comm,
float max_workload_coeff,
int workload_num,
int num_step_rebalance,
const std::array< int, num_space_dim > & global_cells_per_dim,
int max_optimize_iteration = 10 )
inline

Constructor - automatically compute ranks_per_dim from MPI communicator.

Parameters
comm: MPI communicator used to decide the number of ranks in each dimension
max_workload_coeff: threshold factor for re-partition
workload_num: total workload (particle/tile) number, used to compute the workload threshold
num_step_rebalance: the number of simulation steps after which to check whether repartitioning is needed
global_cells_per_dim: 3D array of global cells in each dimension
max_optimize_iteration: maximum number of iterations to run the optimization

◆ SparseDimPartitioner() [2/2]

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::SparseDimPartitioner ( MPI_Comm comm,
float max_workload_coeff,
int workload_num,
int num_step_rebalance,
const std::array< int, num_space_dim > & ranks_per_dim,
const std::array< int, num_space_dim > & global_cells_per_dim,
int max_optimize_iteration = 10 )
inline

Constructor - user-defined ranks_per_dim.

Parameters
comm: MPI communicator used to decide the number of ranks in each dimension
max_workload_coeff: threshold factor for re-partition
workload_num: total workload (particle/tile) number, used to compute the workload threshold
num_step_rebalance: the number of simulation steps after which to check whether repartitioning is needed
ranks_per_dim: 3D array of user-defined MPI rank constraints in each dimension
global_cells_per_dim: 3D array of global cells in each dimension
max_optimize_iteration: maximum number of iterations to run the optimization
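
As a sketch (assuming Kokkos is initialized and a 64^3 global grid), ranks_per_dim can be chosen explicitly, for example with MPI_Dims_create, and passed to this constructor; the tuning values below are illustrative:

#include <Cabana_Grid_SparseDimPartitioner.hpp>
#include <Kokkos_Core.hpp>
#include <mpi.h>

#include <array>

void construct_with_ranks_per_dim( MPI_Comm comm )
{
    using memory_space = Kokkos::DefaultExecutionSpace::memory_space;

    int comm_size;
    MPI_Comm_size( comm, &comm_size );

    // Let MPI propose a near-cubic 3D decomposition of the ranks.
    std::array<int, 3> ranks_per_dim = { 0, 0, 0 };
    MPI_Dims_create( comm_size, 3, ranks_per_dim.data() );

    std::array<int, 3> global_cells_per_dim = { 64, 64, 64 };

    Cabana::Grid::SparseDimPartitioner<memory_space, 4> partitioner(
        comm, /* max_workload_coeff = */ 1.5f,
        /* workload_num = */ 100000, /* num_step_rebalance = */ 50,
        ranks_per_dim, global_cells_per_dim );
}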

Member Function Documentation

◆ averageRankWorkload() [1/2]

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
int Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::averageRankWorkload ( )
inline

compute the average workload on each MPI rank

Returns
average workload on each rank

◆ averageRankWorkload() [2/2]

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
template<typename WorkloadViewHost>
int Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::averageRankWorkload ( WorkloadViewHost & prefix_sum_view)
inline

compute the average workload on each MPI rank

Parameters
prefix_sum_view: Host mirror of _workload_prefix_sum
Returns
average workload on each rank

◆ computeFullPrefixSum()

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
void Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::computeFullPrefixSum ( MPI_Comm comm)
inline

Reduce the total workload across all MPI ranks, then compute the workload prefix sum matrix for all MPI ranks.

Parameters
comm: MPI communicator used for workload reduction

◆ computeImbalanceFactor()

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
float Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::computeImbalanceFactor ( MPI_Comm cart_comm)
inline

compute the imbalance factor for the current partition

Parameters
cart_comm: MPI Cartesian communicator
Returns
the imbalance factor = workload on the current rank / average workload
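
An illustrative (not Cabana-prescribed) use of this value is to trigger re-optimization when the imbalance exceeds a user-chosen threshold. In the sketch below, Partitioner, SparseMapType, and rebalance_if_needed are placeholder names; only the two member calls come from this page:

#include <mpi.h>

// Re-optimize the partition when the measured imbalance factor exceeds a
// user-chosen threshold. Returns true if a re-optimization was performed.
template <class Partitioner, class SparseMapType>
bool rebalance_if_needed( Partitioner& partitioner,
                          const SparseMapType& sparse_map,
                          MPI_Comm cart_comm, MPI_Comm comm, float threshold )
{
    float imbalance = partitioner.computeImbalanceFactor( cart_comm );
    if ( imbalance > threshold )
    {
        partitioner.optimizePartition( sparse_map, comm );
        return true;
    }
    return false;
}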

◆ computeLocalWorkLoad() [1/2]

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
template<class ParticlePosViewType, typename ArrayType, typename CellUnit>
void Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::computeLocalWorkLoad ( const ParticlePosViewType & view,
int particle_num,
const ArrayType & global_lower_corner,
const CellUnit dx )
inline

compute the workload in the current MPI rank from particle positions (each particle counts as 1 workload value)

Parameters
view: particle positions view
particle_num: total particle number
global_lower_corner: the coordinate of the global lower corner of the domain
dx: cell size dx
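
A sketch of the particle-based workload path: positions is an assumed device View of shape (particle_num, 3), the lower corner and dx values are illustrative, and calling resetWorkload() before accumulating is an assumption rather than a documented requirement. See also computeFullPrefixSum().

#include <mpi.h>

#include <array>

// Accumulate this rank's workload from particle positions, then build the
// global prefix-sum matrix across all ranks.
template <class Partitioner, class PositionView>
void update_workload( Partitioner& partitioner, const PositionView& positions,
                      int particle_num, MPI_Comm comm )
{
    std::array<double, 3> global_lower_corner = { 0.0, 0.0, 0.0 };
    double dx = 0.1; // cell size

    partitioner.resetWorkload();
    partitioner.computeLocalWorkLoad( positions, particle_num,
                                      global_lower_corner, dx );
    partitioner.computeFullPrefixSum( comm );
}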

◆ computeLocalWorkLoad() [2/2]

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
template<class SparseMapType>
void Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::computeLocalWorkLoad ( const SparseMapType & sparseMap)
inline

compute the workload in the current MPI rank from sparseMap (the workload of a tile is 1 if the tile is occupied, 0 otherwise)

Parameters
sparseMap: the sparse map in the current rank

◆ currentRankWorkload() [1/2]

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
int Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::currentRankWorkload ( MPI_Comm cart_comm)
inline

compute the total workload on the current MPI rank

Parameters
cart_comm: MPI Cartesian communicator
Returns
total workload on current rank

◆ currentRankWorkload() [2/2]

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
template<typename PartitionViewHost, typename WorkloadViewHost>
int Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::currentRankWorkload ( MPI_Comm cart_comm,
PartitionViewHost & rec_view,
WorkloadViewHost & prefix_sum_view )
inline

compute the total workload on the current MPI rank

Parameters
cart_comm: MPI Cartesian communicator
rec_view: Host mirror of _rec_partition_dev
prefix_sum_view: Host mirror of _workload_prefix_sum
Returns
total workload on current rank

◆ initializeRecPartition()

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
void Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::initializeRecPartition ( std::vector< int > & rec_partition_i,
std::vector< int > & rec_partition_j,
std::vector< int > & rec_partition_k )
inline

Initialize the tile partition; partition in each dimension has the form [0, p_1, ..., p_n, total_tile_num], so the partition would be [0, p_1), [p_1, p_2) ... [p_n, total_tile_num].

Parameters
rec_partition_i: partition array in dimension i
rec_partition_j: partition array in dimension j
rec_partition_k: partition array in dimension k
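
For a concrete illustration of the documented format, assume 2 MPI ranks and 32 tiles in each dimension (an assumed setup); each partition array then has ranks_per_dim + 1 entries and its last entry equals the total tile number:

#include <vector>

// With 2 ranks per dimension and 32 tiles per dimension, a boundary at tile
// 16 gives rank 0 the tiles [0, 16) and rank 1 the tiles [16, 32) along every
// axis. `Partitioner` stands in for the concrete partitioner type.
template <class Partitioner>
void init_two_rank_partition( Partitioner& partitioner )
{
    std::vector<int> rec_i = { 0, 16, 32 };
    std::vector<int> rec_j = { 0, 16, 32 };
    std::vector<int> rec_k = { 0, 16, 32 };
    partitioner.initializeRecPartition( rec_i, rec_j, rec_k );
}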

◆ optimizePartition() [1/3]

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
void Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::optimizePartition ( bool & is_changed,
int iter_seed )
inline

optimize the partition in three dimensions separately

Parameters
is_changed: flag indicating whether the partition changed after the optimization
iter_seed: seed number to choose the starting dimension of the optimization

◆ optimizePartition() [2/3]

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
template<class ParticlePosViewType, typename ArrayType, typename CellUnit>
int Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::optimizePartition ( const ParticlePosViewType & view,
int particle_num,
const ArrayType & global_lower_corner,
const CellUnit dx,
MPI_Comm comm )
inline

iteratively optimize the partition

Parameters
view: particle positions view
particle_num: total particle number
global_lower_corner: the coordinate of the global lower corner of the domain
dx: cell size dx
comm: MPI communicator used for workload reduction
Returns
iteration number

◆ optimizePartition() [3/3]

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
template<class SparseMapType>
int Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::optimizePartition ( const SparseMapType & sparseMap,
MPI_Comm comm )
inline

iteratively optimize the partition

Parameters
sparseMap: the sparse map in the current rank
comm: MPI communicator used for workload reduction
Returns
iteration number

◆ ownedCellInfo()

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
void Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::ownedCellInfo ( MPI_Comm cart_comm,
const std::array< int, num_space_dim > & ,
std::array< int, num_space_dim > & owned_num_cell,
std::array< int, num_space_dim > & global_cell_offset ) const
inlineoverridevirtual

Get the owned number of cells and the global cell offset of the current MPI rank.

Parameters
cart_comm: The MPI Cartesian communicator for the partitioning.
owned_num_cell: (Return) The owned number of cells of the current MPI rank in each dimension.
global_cell_offset: (Return) The global cell offset of the current MPI rank in each dimension.

Implements Cabana::Grid::BlockPartitioner< 3 >.
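
A sketch of querying the local block after partitioning. The second argument of ownedCellInfo is unnamed and undocumented above, so an arbitrary array is passed here (an assumption); print_owned_cells and dummy are placeholder names, and cart_comm is assumed to be an MPI Cartesian communicator matching the rank decomposition.

#include <mpi.h>

#include <array>
#include <cstdio>

// Print the owned cell block and its global offset for the calling rank.
template <class Partitioner>
void print_owned_cells( const Partitioner& partitioner, MPI_Comm cart_comm )
{
    std::array<int, 3> dummy = { 0, 0, 0 };
    std::array<int, 3> owned_num_cell;
    std::array<int, 3> global_cell_offset;
    partitioner.ownedCellInfo( cart_comm, dummy, owned_num_cell,
                               global_cell_offset );

    int rank;
    MPI_Comm_rank( cart_comm, &rank );
    std::printf( "rank %d owns %d x %d x %d cells at offset (%d, %d, %d)\n",
                 rank, owned_num_cell[0], owned_num_cell[1], owned_num_cell[2],
                 global_cell_offset[0], global_cell_offset[1],
                 global_cell_offset[2] );
}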

◆ ownedCellsPerDimension()

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
std::array< int, num_space_dim > Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::ownedCellsPerDimension ( MPI_Comm cart_comm) const
inline

Get the cell number in each dimension owned by the current MPI rank.

Parameters
cart_comm: MPI Cartesian communicator

◆ ownedTileInfo()

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
void Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::ownedTileInfo ( MPI_Comm cart_comm,
std::array< int, num_space_dim > & owned_num_tile,
std::array< int, num_space_dim > & global_tile_offset ) const
inline

Get the owned number of tiles and the global tile offset of the current MPI rank.

Parameters
cart_comm: The MPI Cartesian communicator for the partitioning.
owned_num_tile: (Return) The owned number of tiles of the current MPI rank in each dimension.
global_tile_offset: (Return) The global tile offset of the current MPI rank in each dimension.

◆ ownedTilesPerDimension()

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
std::array< int, num_space_dim > Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::ownedTilesPerDimension ( MPI_Comm cart_comm) const
inline

Get the tile number in each dimension owned by the current MPI rank.

Parameters
cart_comm: MPI Cartesian communicator

◆ ranksPerDimension() [1/2]

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
std::array< int, num_space_dim > Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::ranksPerDimension ( MPI_Comm comm)
inline

Compute the number of MPI ranks in each dimension of the grid from the given MPI communicator.

Parameters
comm: The communicator to use for the partitioning
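
Several members on this page take an MPI Cartesian communicator. One way to build it (an assumption reflecting common MPI usage, not a documented Cabana requirement) is to feed this rank decomposition into MPI_Cart_create:

#include <mpi.h>

#include <array>

// Build a non-periodic 3D Cartesian communicator matching the partitioner's
// rank decomposition. `Partitioner` stands in for the concrete type.
template <class Partitioner>
MPI_Comm make_cart_comm( Partitioner& partitioner, MPI_Comm comm )
{
    std::array<int, 3> ranks_per_dim = partitioner.ranksPerDimension( comm );
    std::array<int, 3> periodic = { 0, 0, 0 };
    MPI_Comm cart_comm;
    MPI_Cart_create( comm, 3, ranks_per_dim.data(), periodic.data(),
                     /* reorder = */ 1, &cart_comm );
    return cart_comm;
}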

◆ ranksPerDimension() [2/2]

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
std::array< int, num_space_dim > Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::ranksPerDimension ( MPI_Comm comm,
const std::array< int, num_space_dim > &  ) const
inlineoverridevirtual

Get the number of MPI ranks in each dimension of the grid from the given MPI communicator.

Parameters
comm: The communicator to use for the partitioning

Implements Cabana::Grid::BlockPartitioner< 3 >.

Member Data Documentation

◆ cell_bits_per_tile_dim

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
unsigned long long Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::cell_bits_per_tile_dim
staticconstexpr
Initial value:
= bitCount( CellPerTileDim )

where bitCount() (defined in Cabana_Grid_SparseIndexSpace.hpp) computes the least number of bits needed to index the input integer.

Number of bits (per dimension) needed to index the cells inside a tile.

◆ cell_num_per_tile_dim

template<typename MemorySpace, unsigned long long CellPerTileDim = 4, std::size_t NumSpaceDim = 3>
unsigned long long Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::cell_num_per_tile_dim
staticconstexpr
Initial value:
= 1 << cell_bits_per_tile_dim

Number of cells inside each tile (per dimension); the tile size is reset to a power of 2.
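
For the default CellPerTileDim = 4, these definitions give 2 index bits and a tile width of 4 cells per dimension. A compile-time sketch follows; the Kokkos::HostSpace instantiation is purely illustrative, and the asserted values follow from the descriptions above rather than from the Cabana test suite.

#include <Cabana_Grid_SparseDimPartitioner.hpp>
#include <Kokkos_Core.hpp>

using host_partitioner =
    Cabana::Grid::SparseDimPartitioner<Kokkos::HostSpace, 4>;

// Four cells per tile dimension are indexed by two bits, and the tile width
// is the matching power of two.
static_assert( host_partitioner::cell_bits_per_tile_dim == 2, "" );
static_assert( host_partitioner::cell_num_per_tile_dim == 4, "" );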


The documentation for this class was generated from the following file:
Cabana_Grid_SparseDimPartitioner.hpp