Cabana 0.8.0-dev
#include <Cabana_Grid_SparseDimPartitioner.hpp>


Classes | |
| struct | SubWorkloadFunctor |
| functor to compute the sub workload in a given region (from the prefix sum) | |
Public Types | |
| using | memory_space = MemorySpace |
| Kokkos memory space. | |
| using | execution_space = typename memory_space::execution_space |
| Default execution space. | |
| using | workload_view = Kokkos::View<int***, memory_space> |
| Workload device view. | |
| using | partition_view = Kokkos::View<int* [num_space_dim], memory_space> |
| Partition device view. | |
| using | workload_view_host |
| Workload host view. | |
| using | partition_view_host |
| Partition host view. | |
Public Member Functions | |
| SparseDimPartitioner (MPI_Comm comm, float max_workload_coeff, int workload_num, int num_step_rebalance, const std::array< int, num_space_dim > &global_cells_per_dim, int max_optimize_iteration=10) | |
| Constructor - automatically compute ranks_per_dim from MPI communicator. | |
| SparseDimPartitioner (MPI_Comm comm, float max_workload_coeff, int workload_num, int num_step_rebalance, const std::array< int, num_space_dim > &ranks_per_dim, const std::array< int, num_space_dim > &global_cells_per_dim, int max_optimize_iteration=10) | |
| Constructor - user-defined ranks_per_dim. | |
| std::array< int, num_space_dim > | ranksPerDimension (MPI_Comm comm) |
| Compute the number of MPI ranks in each dimension of the grid from the given MPI communicator. | |
| std::array< int, num_space_dim > | ranksPerDimension (MPI_Comm comm, const std::array< int, num_space_dim > &) const override |
| Get the number of MPI ranks in each dimension of the grid from the given MPI communicator. | |
| std::array< int, num_space_dim > | ownedTilesPerDimension (MPI_Comm cart_comm) const |
| Get the tile number in each dimension owned by the current MPI rank. | |
| std::array< int, num_space_dim > | ownedCellsPerDimension (MPI_Comm cart_comm) const |
| Get the cell number in each dimension owned by the current MPI rank. | |
| void | ownedTileInfo (MPI_Comm cart_comm, std::array< int, num_space_dim > &owned_num_tile, std::array< int, num_space_dim > &global_tile_offset) const |
| Get the owned number of tiles and the global tile offset of the current MPI rank. | |
| void | ownedCellInfo (MPI_Comm cart_comm, const std::array< int, num_space_dim > &, std::array< int, num_space_dim > &owned_num_cell, std::array< int, num_space_dim > &global_cell_offset) const override |
| Get the owned number of cells and the global cell offset of the current MPI rank. | |
| void | initializeRecPartition (std::vector< int > &rec_partition_i, std::vector< int > &rec_partition_j, std::vector< int > &rec_partition_k) |
| Initialize the tile partition; the partition in each dimension has the form [0, p_1, ..., p_n, total_tile_num], so the resulting ranges are [0, p_1), [p_1, p_2), ..., [p_n, total_tile_num). | |
| std::array< std::vector< int >, num_space_dim > | getCurrentPartition () |
| Get the current partition. Copy the partition from the device view to a host std::array of std::vector. | |
| void | resetWorkload () |
| set all elements in the _workload_per_tile and _workload_prefix_sum matrices to 0 | |
| template<class ParticlePosViewType, typename ArrayType, typename CellUnit> | |
| void | computeLocalWorkLoad (const ParticlePosViewType &view, int particle_num, const ArrayType &global_lower_corner, const CellUnit dx) |
| compute the workload in the current MPI rank from particle positions (each particle counts as 1 workload value) | |
| template<class SparseMapType> | |
| void | computeLocalWorkLoad (const SparseMapType &sparseMap) |
| compute the workload in the current MPI rank from sparseMap (the workload of a tile is 1 if the tile is occupied, 0 otherwise) | |
| void | computeFullPrefixSum (MPI_Comm comm) |
| Reduce the local workload over all MPI ranks and compute the full workload prefix sum matrix. | |
| template<class ParticlePosViewType, typename ArrayType, typename CellUnit> | |
| int | optimizePartition (const ParticlePosViewType &view, int particle_num, const ArrayType &global_lower_corner, const CellUnit dx, MPI_Comm comm) |
| iteratively optimize the partition | |
| template<class SparseMapType> | |
| int | optimizePartition (const SparseMapType &sparseMap, MPI_Comm comm) |
| iteratively optimize the partition | |
| void | optimizePartition (bool &is_changed, int iter_seed) |
| optimize the partition in three dimensions separately | |
| int | currentRankWorkload (MPI_Comm cart_comm) |
| compute the total workload on the current MPI rank | |
| template<typename PartitionViewHost, typename WorkloadViewHost> | |
| int | currentRankWorkload (MPI_Comm cart_comm, PartitionViewHost &rec_view, WorkloadViewHost &prefix_sum_view) |
| compute the total workload on the current MPI rank | |
| int | averageRankWorkload () |
| compute the average workload on each MPI rank | |
| template<typename WorkloadViewHost> | |
| int | averageRankWorkload (WorkloadViewHost &prefix_sum_view) |
| compute the average workload on each MPI rank | |
| float | computeImbalanceFactor (MPI_Comm cart_comm) |
| compute the imbalance factor for the current partition | |
Static Public Attributes | |
| static constexpr std::size_t | num_space_dim = NumSpaceDim |
| Spatial dimension. | |
| static constexpr unsigned long long | cell_bits_per_tile_dim |
| Number of bits (per dimension) needed to index the cells inside a tile. | |
| static constexpr unsigned long long | cell_num_per_tile_dim |
| Number of cells inside each tile (per dimension). The tile size is rounded to a power of 2. | |
| Static Public Attributes inherited from Cabana::Grid::BlockPartitioner< 3 > | |
| static constexpr std::size_t | num_space_dim |
| Spatial dimension. | |
Sparse mesh block partitioner. (The current version supports 3D only.)
Template Parameters
| MemorySpace | Kokkos memory space. |
| CellPerTileDim | Cells per tile per dimension. |
| NumSpaceDim | Dimension (the current version supports 3D only). |
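A minimal sketch of instantiating the class template is shown below. The choice of memory space, the 4-cells-per-tile value, and the explicit 3D dimension are illustrative assumptions, not requirements.

#include <Cabana_Grid_SparseDimPartitioner.hpp>
#include <Kokkos_Core.hpp>

// Assumption: use the default device memory space; any Kokkos memory space works.
using exec_space = Kokkos::DefaultExecutionSpace;
using mem_space = Kokkos::DefaultExecutionSpace::memory_space;

// 3D partitioner over tiles of 4x4x4 cells. The values 4 and 3 for
// CellPerTileDim and NumSpaceDim are example choices.
using Partitioner = Cabana::Grid::SparseDimPartitioner<mem_space, 4, 3>;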
| using Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::partition_view_host |
Partition host view.
| using Cabana::Grid::SparseDimPartitioner< MemorySpace, CellPerTileDim, NumSpaceDim >::workload_view_host |
Workload host view.
|
inline |
Constructor - automatically compute ranks_per_dim from MPI communicator.
| comm | MPI communicator used to determine the number of ranks in each dimension |
| max_workload_coeff | threshold factor for re-partition |
| workload_num | total workload (particle/tile) number, used to compute the workload threshold |
| num_step_rebalance | the simulation step number after which one should check if repartition is needed |
| global_cells_per_dim | 3D array, global cells in each dimension |
| max_optimize_iteration | max iteration number to run the optimization |
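A hedged usage sketch of this constructor follows. All numeric values (threshold coefficient, workload count, rebalance interval, mesh size) are placeholders, and the Partitioner alias is the one introduced in the sketch above.

#include <Cabana_Grid_SparseDimPartitioner.hpp>
#include <Kokkos_Core.hpp>
#include <mpi.h>

#include <array>

void makePartitionerAuto( MPI_Comm comm )
{
    using mem_space = Kokkos::DefaultExecutionSpace::memory_space;
    using Partitioner = Cabana::Grid::SparseDimPartitioner<mem_space, 4, 3>;

    float max_workload_coeff = 1.5f;      // placeholder re-partition threshold factor
    int workload_num = 100000;            // placeholder expected particle (or tile) count
    int num_step_rebalance = 100;         // check whether to re-partition every 100 steps
    std::array<int, 3> global_cells_per_dim = { 256, 256, 256 }; // placeholder mesh size

    // ranks_per_dim is derived automatically from the communicator.
    Partitioner partitioner( comm, max_workload_coeff, workload_num,
                             num_step_rebalance, global_cells_per_dim );
}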
|
inline |
Constructor - user-defined ranks_per_dim.
| comm | MPI communicator used to determine the number of ranks in each dimension |
| max_workload_coeff | threshold factor for re-partition |
| workload_num | total workload (particle/tile) number, used to compute the workload threshold |
| num_step_rebalance | the simulation step number after which one should check if repartition is needed |
| ranks_per_dim | 3D array, user-defined MPI rank constraints in each dimension |
| global_cells_per_dim | 3D array, global cells in each dimension |
| max_optimize_iteration | max iteration number to run the optimization |
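A similar sketch for the user-defined rank layout, with the same includes and placeholder values as in the previous sketch; the 2x2x1 layout is an assumption that must match the communicator size.

void makePartitionerExplicit( MPI_Comm comm )
{
    using mem_space = Kokkos::DefaultExecutionSpace::memory_space;
    using Partitioner = Cabana::Grid::SparseDimPartitioner<mem_space, 4, 3>;

    // Placeholder layout: 4 ranks arranged as 2x2x1; the product must equal
    // the size of the communicator.
    std::array<int, 3> ranks_per_dim = { 2, 2, 1 };
    std::array<int, 3> global_cells_per_dim = { 256, 256, 256 };

    Partitioner partitioner( comm, 1.5f, 100000, 100,
                             ranks_per_dim, global_cells_per_dim );
}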
|
inline |
compute the average workload on each MPI rank
|
inline |
compute the average workload on each MPI rank
| prefix_sum_view | Host mirror of _workload_prefix_sum |
|
inline |
Reduce the local workload over all MPI ranks and compute the full workload prefix sum matrix.
| comm | MPI communicator used for workload reduction |
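A brief sketch of where this call typically sits, assuming a partitioner built as in the constructor sketches above and particle data (positions, particle_num, lower_corner, dx) as in the computeLocalWorkLoad sketch further below.

// Sketch: accumulate the local workload, then build the global prefix sum
// consumed by the optimization step.
partitioner.computeLocalWorkLoad( positions, particle_num, lower_corner, dx );
partitioner.computeFullPrefixSum( MPI_COMM_WORLD );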
|
inline |
compute the imbalance factor for the current partition
| cart_comm | MPI cartesian communicator |
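A sketch of measuring the load balance. Building the Cartesian communicator with MPI_Cart_create from ranksPerDimension is an assumption of this example, and a partitioner from the earlier sketches is assumed to exist.

// Build a Cartesian communicator matching the partitioner's rank layout
// (non-periodic, no reordering; these are example choices).
std::array<int, 3> ranks_per_dim = partitioner.ranksPerDimension( MPI_COMM_WORLD );
int periods[3] = { 0, 0, 0 };
MPI_Comm cart_comm;
MPI_Cart_create( MPI_COMM_WORLD, 3, ranks_per_dim.data(), periods, 0, &cart_comm );

// An imbalance factor near 1 indicates a well-balanced partition; the
// rank/average workload pair gives the same picture.
float imbalance = partitioner.computeImbalanceFactor( cart_comm );
int local_work = partitioner.currentRankWorkload( cart_comm );
int average_work = partitioner.averageRankWorkload();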
|
inline |
compute the workload in the current MPI rank from particle positions (each particle counts as 1 workload value)
| view | particle positions view |
| particle_num | total particle number |
| global_lower_corner | the coordinate of the domain global lower corner |
| dx | cell dx size |
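A sketch of accumulating the workload from particles, assuming a (particle, dimension) layout for the position view and placeholder values for the particle count, domain corner, and cell size; `partitioner` is the object from the constructor sketches.

using mem_space = Kokkos::DefaultExecutionSpace::memory_space;

int particle_num = 10000;                                 // placeholder particle count
Kokkos::View<double* [3], mem_space> positions( "positions", particle_num );
// ... fill positions(p, d) with particle coordinates ...

std::array<double, 3> lower_corner = { 0.0, 0.0, 0.0 };   // global domain lower corner
double dx = 0.1;                                          // uniform cell size

partitioner.computeLocalWorkLoad( positions, particle_num, lower_corner, dx );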
|
inline |
compute the workload in the current MPI rank from sparseMap (the workload of a tile is 1 if the tile is occupied, 0 otherwise)
| sparseMap | sparseMap in the current rank |
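A short sketch for the sparse-map overload. The `sparse_map` object is assumed to have been created and populated with the occupied tiles on this rank elsewhere (e.g., by the application's sparse-grid setup), which is outside this example.

partitioner.computeLocalWorkLoad( sparse_map );
partitioner.computeFullPrefixSum( MPI_COMM_WORLD );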
|
inline |
compute the total workload on the current MPI rank
| cart_comm | MPI cartesian communicator |
|
inline |
compute the total workload on the current MPI rank
| cart_comm | MPI cartesian communicator |
| rec_view | Host mirror of _rec_partition_dev |
| prefix_sum_view | Host mirror of _workload_prefix_sum |
|
inline |
Initialize the tile partition; the partition in each dimension has the form [0, p_1, ..., p_n, total_tile_num], so the resulting ranges are [0, p_1), [p_1, p_2), ..., [p_n, total_tile_num).
| rec_partition_i | partition array in dimension i |
| rec_partition_j | partition array in dimension j |
| rec_partition_k | partition array in dimension k |
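A sketch of building a uniform initial partition. The global tile count per dimension (256 cells / 4 cells per tile) is an assumption tied to the placeholder mesh used in the earlier sketches; the fragment also requires <array> and <vector>.

std::array<int, 3> ranks_per_dim = partitioner.ranksPerDimension( MPI_COMM_WORLD );
int tiles_per_dim = 256 / 4; // assumption: 256 global cells, 4 cells per tile per dim

// One boundary entry per rank plus the closing total, evenly spaced.
std::array<std::vector<int>, 3> rec;
for ( int d = 0; d < 3; ++d )
{
    rec[d].resize( ranks_per_dim[d] + 1 );
    for ( int r = 0; r <= ranks_per_dim[d]; ++r )
        rec[d][r] = r * tiles_per_dim / ranks_per_dim[d];
}
partitioner.initializeRecPartition( rec[0], rec[1], rec[2] );

// The partition can be copied back to the host at any time.
auto current = partitioner.getCurrentPartition();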
|
inline |
optimize the partition in three dimensions separately
| is_changed | (Return) whether the partition changed during the optimization |
| iter_seed | seed number to choose the starting dimension of the optimization |
|
inline |
iteratively optimize the partition
| view | particle positions view |
| particle_num | total particle number |
| global_lower_corner | the coordinate of the domain global lower corner |
| dx | cell dx size |
| comm | MPI communicator used for workload reduction |
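A sketch of one particle-driven rebalance step, reusing the assumed `positions`, `particle_num`, `lower_corner`, and `dx` from the computeLocalWorkLoad sketch above.

// Run the iterative optimization; the int return value is documented above.
int result = partitioner.optimizePartition( positions, particle_num,
                                            lower_corner, dx, MPI_COMM_WORLD );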
|
inline |
iteratively optimize the partition
| sparseMap | sparseMap in the current rank |
| comm | MPI communicator used for workload reduction |
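The sparse-map-driven counterpart, again assuming `sparse_map` exists as described earlier.

int result = partitioner.optimizePartition( sparse_map, MPI_COMM_WORLD );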
|
inline override virtual
Get the owned number of cells and the global cell offset of the current MPI rank.
| cart_comm | The MPI Cartesian communicator for the partitioning. |
| owned_num_cell | (Return) The owned number of cells of the current MPI rank in each dimension. |
| global_cell_offset | (Return) The global cell offset of the current MPI rank in each dimension |
Implements Cabana::Grid::BlockPartitioner< 3 >.
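A sketch of querying the owned cell range, assuming the Cartesian communicator from the imbalance-factor sketch and the `global_cells_per_dim` array passed to the constructor.

std::array<int, 3> owned_num_cell, global_cell_offset;
partitioner.ownedCellInfo( cart_comm, global_cells_per_dim,
                           owned_num_cell, global_cell_offset );

// Per-dimension owned cell counts are also available directly (see below).
std::array<int, 3> owned_cells = partitioner.ownedCellsPerDimension( cart_comm );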
|
inline |
Get the cell number in each dimension owned by the current MPI rank.
| cart_comm | MPI cartesian communicator |
|
inline |
Get the owned number of tiles and the global tile offset of the current MPI rank.
| cart_comm | The MPI Cartesian communicator for the partitioning. |
| owned_num_tile | (Return) The owned number of tiles of the current MPI rank in each dimension. |
| global_tile_offset | (Return) The global tile offset of the current MPI rank in each dimension |
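The tile-level analogue, with the same assumed Cartesian communicator.

std::array<int, 3> owned_num_tile, global_tile_offset;
partitioner.ownedTileInfo( cart_comm, owned_num_tile, global_tile_offset );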
|
inline |
Get the tile number in each dimension owned by the current MPI rank.
| cart_comm | MPI cartesian communicator |
|
inline |
Compute the number of MPI ranks in each dimension of the grid from the given MPI communicator.
| comm | The communicator to use for the partitioning |
|
inline override virtual
Get the number of MPI ranks in each dimension of the grid from the given MPI communicator.
| comm | The communicator to use for the partitioning |
Implements Cabana::Grid::BlockPartitioner< 3 >.
|
static constexpr
Number of bits (per dimension) needed to index the cells inside a tile.
|
static constexpr
Number of cells inside each tile (per dimension). The tile size is rounded to a power of 2.