Mesh Oriented datABase  (version 5.5.0)
An array-based unstructured mesh library
moab::ParCommGraph Class Reference

#include <ParCommGraph.hpp>


Public Types

enum  TypeGraph { INITIAL_MIGRATE , COVERAGE , DOF_BASED }
 

Public Member Functions

virtual ~ParCommGraph ()
 
 ParCommGraph (MPI_Comm joincomm, MPI_Group group1, MPI_Group group2, int coid1, int coid2)
 collective constructor; will be called on all sender tasks and receiver tasks More...
 
 ParCommGraph (const ParCommGraph &)
 copy constructor will copy only the senders, receivers, compid1, etc More...
 
ErrorCode compute_trivial_partition (std::vector< int > &numElemsPerTaskInGroup1)
 Based on the number of elements on each task in group 1, partition for group 2, trivially. More...
 
ErrorCode pack_receivers_graph (std::vector< int > &packed_recv_array)
 pack information about the receivers' view of the graph, for future sending to the receiver root More...
 
bool is_root_sender ()
 
bool is_root_receiver ()
 
int sender (int index)
 
int receiver (int index)
 
int get_component_id1 ()
 
int get_component_id2 ()
 
int get_context_id ()
 
void set_context_id (int other_id)
 
EntityHandle get_cover_set ()
 
void set_cover_set (EntityHandle cover)
 
ErrorCode split_owned_range (int sender_rank, Range &owned)
 
ErrorCode split_owned_range (Range &owned)
 
ErrorCode send_graph (MPI_Comm jcomm)
 
ErrorCode send_graph_partition (ParallelComm *pco, MPI_Comm jcomm)
 
ErrorCode send_mesh_parts (MPI_Comm jcomm, ParallelComm *pco, Range &owned)
 
ErrorCode receive_comm_graph (MPI_Comm jcomm, ParallelComm *pco, std::vector< int > &pack_array)
 
ErrorCode receive_mesh (MPI_Comm jcomm, ParallelComm *pco, EntityHandle local_set, std::vector< int > &senders_local)
 
ErrorCode release_send_buffers ()
 
ErrorCode send_tag_values (MPI_Comm jcomm, ParallelComm *pco, Range &owned, std::vector< Tag > &tag_handles)
 
ErrorCode receive_tag_values (MPI_Comm jcomm, ParallelComm *pco, Range &owned, std::vector< Tag > &tag_handles)
 
const std::vector< int > & senders ()
 
const std::vector< int > & receivers ()
 
ErrorCode settle_send_graph (TupleList &TLcovIDs)
 
void SetReceivingAfterCoverage (std::map< int, std::set< int > > &idsFromProcs)
 
void settle_comm_by_ids (int comp, TupleList &TLBackToComp, std::vector< int > &valuesComp)
 
ErrorCode set_split_ranges (int comp, TupleList &TLBackToComp1, std::vector< int > &valuesComp1, int lenTag, Range &ents_of_interest, int type)
 
ErrorCode form_tuples_to_migrate_mesh (Interface *mb, TupleList &TLv, TupleList &TLc, int type, int lenTagType1)
 
ErrorCode form_mesh_from_tuples (Interface *mb, TupleList &TLv, TupleList &TLc, int type, int lenTagType1, EntityHandle fset, Range &primary_ents, std::vector< int > &values_entities)
 
ErrorCode compute_partition (ParallelComm *pco, Range &owned, int met)
 
ErrorCode dump_comm_information (std::string prefix, int is_send)
 

Private Member Functions

void find_group_ranks (MPI_Group group, MPI_Comm join, std::vector< int > &ranks)
 find ranks of a group with respect to an encompassing communicator More...
 

Private Attributes

MPI_Comm comm
 
std::vector< int > senderTasks
 
std::vector< int > receiverTasks
 
bool rootSender
 
bool rootReceiver
 
int rankInGroup1
 
int rankInGroup2
 
int rankInJoin
 
int joinSize
 
int compid1
 
int compid2
 
int context_id
 
EntityHandle cover_set
 
std::map< int, std::vector< int > > recv_graph
 
std::map< int, std::vector< int > > recv_sizes
 
std::map< int, std::vector< int > > sender_graph
 
std::map< int, std::vector< int > > sender_sizes
 
std::vector< ParallelComm::Buffer * > localSendBuffs
 
int * comm_graph
 
std::map< int, Range > split_ranges
 
std::vector< MPI_Request > sendReqs
 
std::vector< int > corr_tasks
 
std::vector< int > corr_sizes
 
TypeGraph graph_type
 
std::map< int, std::vector< int > > involved_IDs_map
 
std::map< int, std::vector< int > > map_index
 
std::map< int, std::vector< int > > map_ptr
 

Detailed Description

Definition at line 55 of file ParCommGraph.hpp.

Member Enumeration Documentation

◆ TypeGraph

Enumerator
INITIAL_MIGRATE 
COVERAGE 
DOF_BASED 

Definition at line 58 of file ParCommGraph.hpp.

59  {
60  INITIAL_MIGRATE,
61  COVERAGE,
62  DOF_BASED
63  };

Constructor & Destructor Documentation

◆ ~ParCommGraph()

moab::ParCommGraph::~ParCommGraph ( )
virtual

Definition at line 73 of file ParCommGraph.cpp.

74 {
75  // TODO Auto-generated destructor stub
76 }

◆ ParCommGraph() [1/2]

moab::ParCommGraph::ParCommGraph ( MPI_Comm  joincomm,
MPI_Group  group1,
MPI_Group  group2,
int  coid1,
int  coid2 
)

collective constructor; will be called on all sender tasks and receiver tasks

Parameters
[in]  joincomm   joint MPI communicator that covers both the sender and receiver MPI groups
[in]  group1     MPI group formed by the sender tasks (the sender usually loads the mesh, in a migrate scenario)
[in]  group2     MPI group formed by the receiver tasks (the receiver usually receives the mesh, in a migrate scenario)
[in]  coid1      sender component unique identifier in a coupled application (in climate simulations this could be the Atmosphere component id, 5, or the Ocean component id, 17)
[in]  coid2      receiver component unique identifier in a coupled application (usually the coupler, id 2 in E3SM)

This graph will be formed on both sender and receiver tasks and, in principle, will hold info about how the local entities are distributed on the other side.

Its major role is to help migrate data from a component to the coupler and vice versa; each local entity has a corresponding task (or tasks) on the other side, to which its data needs to be sent.

Important data stored in ParCommGraph as soon as it is created:

  • all sender and receiver task ids, with respect to the joint communicator
  • local rank in the sender and receiver groups (-1 if not part of the respective group)
  • rank in the joint communicator (starting from 0)

Definition at line 20 of file ParCommGraph.cpp.

21  : comm( joincomm ), compid1( coid1 ), compid2( coid2 )
22 {
23  // find out the tasks from each group, in the joint communicator
24  find_group_ranks( group1, comm, senderTasks );
25  find_group_ranks( group2, comm, receiverTasks );
26 
27  rootSender = rootReceiver = false;
28  rankInGroup1 = rankInGroup2 = rankInJoin = -1; // not initialized, or not part of the group
29 
30  int mpierr = MPI_Group_rank( group1, &rankInGroup1 );
31  if( MPI_SUCCESS != mpierr || rankInGroup1 == MPI_UNDEFINED ) rankInGroup1 = -1;
32 
33  mpierr = MPI_Group_rank( group2, &rankInGroup2 );
34  if( MPI_SUCCESS != mpierr || rankInGroup2 == MPI_UNDEFINED ) rankInGroup2 = -1;
35 
36  mpierr = MPI_Comm_rank( comm, &rankInJoin );
37  if( MPI_SUCCESS != mpierr ) // it should be a fatal error
38  rankInJoin = -1;
39 
40  mpierr = MPI_Comm_size( comm, &joinSize );
41  if( MPI_SUCCESS != mpierr ) // it should be a fatal error
42  joinSize = -1;
43 
44  if( 0 == rankInGroup1 ) rootSender = true;
45  if( 0 == rankInGroup2 ) rootReceiver = true;
46  graph_type = INITIAL_MIGRATE;
47  comm_graph = NULL;
48  context_id = -1;
49  cover_set = 0; // refers to nothing yet
50 }

References comm, comm_graph, context_id, cover_set, find_group_ranks(), graph_type, INITIAL_MIGRATE, joinSize, rankInGroup1, rankInGroup2, rankInJoin, receiverTasks, rootReceiver, rootSender, and senderTasks.
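A minimal usage sketch (not taken from the library): splitting MPI_COMM_WORLD into sender and receiver groups, the counts nSend/nRecv, and the component ids 5 and 2 are illustrative assumptions; only the ParCommGraph calls themselves come from this class.

    #include <ParCommGraph.hpp>
    #include <mpi.h>
    #include <vector>

    // Sketch: build a ParCommGraph over MPI_COMM_WORLD, with the first nSend ranks
    // acting as senders and the following nRecv ranks as receivers.
    void example_create_graph( int nSend, int nRecv )
    {
        MPI_Comm joincomm = MPI_COMM_WORLD;
        MPI_Group joinGroup, senderGroup, receiverGroup;
        MPI_Comm_group( joincomm, &joinGroup );

        std::vector< int > sendRanks( nSend ), recvRanks( nRecv );
        for( int i = 0; i < nSend; i++ )
            sendRanks[i] = i;
        for( int i = 0; i < nRecv; i++ )
            recvRanks[i] = nSend + i;
        MPI_Group_incl( joinGroup, nSend, &sendRanks[0], &senderGroup );
        MPI_Group_incl( joinGroup, nRecv, &recvRanks[0], &receiverGroup );

        // collective over all tasks in joincomm; 5 and 2 are example component ids
        moab::ParCommGraph cgraph( joincomm, senderGroup, receiverGroup, 5, 2 );
        if( cgraph.is_root_sender() )
        {
            // rank 0 of the sender group, where the communication graph will be packed and sent
        }

        MPI_Group_free( &senderGroup );
        MPI_Group_free( &receiverGroup );
        MPI_Group_free( &joinGroup );
    }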

◆ ParCommGraph() [2/2]

moab::ParCommGraph::ParCommGraph ( const ParCommGraph &  src)

copy constructor will copy only the senders, receivers, compid1, etc

Definition at line 53 of file ParCommGraph.cpp.

54 {
55  comm = src.comm;
56  senderTasks = src.senderTasks; // these are the sender tasks in joint comm
57  receiverTasks = src.receiverTasks; // these are all the receiver tasks in joint comm
58  rootSender = src.rootSender;
59  rootReceiver = src.rootReceiver;
60  rankInGroup1 = src.rankInGroup1;
61  rankInGroup2 = src.rankInGroup2; // group 1 is sender, 2 is receiver
62  rankInJoin = src.rankInJoin;
63  joinSize = src.joinSize;
64  compid1 = src.compid1;
65  compid2 = src.compid2;
66  comm_graph = NULL;
67  graph_type = src.graph_type;
68  context_id = src.context_id;
69  cover_set = src.cover_set;
70  return;
71 }

References comm, comm_graph, compid1, compid2, context_id, cover_set, graph_type, joinSize, rankInGroup1, rankInGroup2, rankInJoin, receiverTasks, rootReceiver, rootSender, and senderTasks.

Member Function Documentation

◆ compute_partition()

ErrorCode moab::ParCommGraph::compute_partition ( ParallelComm *  pco,
Range &  owned,
int  met 
)

Definition at line 1129 of file ParCommGraph.cpp.

1130 {
1131  // we are on a task on sender, and need to compute a new partition;
1132  // primary cells need to be distributed to nb receivers tasks
1133  // first, we will use graph partitioner, with zoltan;
1134  // in the graph that we need to build, the first layer of ghosts is needed;
1135  // can we avoid that ? For example, we can find out from each boundary edge/face what is the
1136  // other cell (on the other side), then form the global graph, and call zoltan in parallel met 1
1137  // would be a geometric partitioner, and met 2 would be a graph partitioner for method 1 we do
1138  // not need any ghost exchange
1139 
1140  // find first edges that are shared
1141  if( owned.empty() )
1142  return MB_SUCCESS; // nothing to do? empty partition is not allowed, maybe we should return
1143  // error?
1144  Core* mb = (Core*)pco->get_moab();
1145 
1146  double t1, t2, t3;
1147  t1 = MPI_Wtime();
1148  int primaryDim = mb->dimension_from_handle( *owned.rbegin() );
1149  int interfaceDim = primaryDim - 1; // should be 1 or 2
1150  Range sharedEdges;
1151  ErrorCode rval;
1152 
1153  std::vector< int > shprocs( MAX_SHARING_PROCS );
1154  std::vector< EntityHandle > shhandles( MAX_SHARING_PROCS );
1155 
1156  Tag gidTag = mb->globalId_tag();
1157  int np;
1158  unsigned char pstatus;
1159 
1160  std::multimap< int, int > extraGraphEdges;
1161  // std::map<int, int> adjCellsId;
1162  std::map< int, int > extraCellsProc;
1163  // if method is 2, no need to do the exchange for adjacent cells across partition boundary
1164  // these maps above will be empty for method 2 (geometry)
1165  if( 1 == met )
1166  {
1167  rval = pco->get_shared_entities( /*int other_proc*/ -1, sharedEdges, interfaceDim,
1168  /*const bool iface*/ true );MB_CHK_ERR( rval );
1169 
1170 #ifdef VERBOSE
1171  std::cout << " on sender task " << pco->rank() << " number of shared interface cells " << sharedEdges.size()
1172  << "\n";
1173 #endif
1174  // find to what processors we need to send the ghost info about the edge
1175  // first determine the local graph; what elements are adjacent to each cell in owned range
1176  // cells that are sharing a partition interface edge, are identified first, and form a map
1177  TupleList TLe; // tuple list for cells
1178  TLe.initialize( 2, 0, 1, 0, sharedEdges.size() ); // send to, id of adj cell, remote edge
1179  TLe.enableWriteAccess();
1180 
1181  std::map< EntityHandle, int > edgeToCell; // from local boundary edge to adjacent cell id
1182  // will be changed after
1183  for( Range::iterator eit = sharedEdges.begin(); eit != sharedEdges.end(); eit++ )
1184  {
1185  EntityHandle edge = *eit;
1186  // get the adjacent cell
1187  Range adjEnts;
1188  rval = mb->get_adjacencies( &edge, 1, primaryDim, false, adjEnts );MB_CHK_ERR( rval );
1189  if( adjEnts.size() > 0 )
1190  {
1191  EntityHandle adjCell = adjEnts[0];
1192  int gid;
1193  rval = mb->tag_get_data( gidTag, &adjCell, 1, &gid );MB_CHK_ERR( rval );
1194  rval = pco->get_sharing_data( edge, &shprocs[0], &shhandles[0], pstatus, np );MB_CHK_ERR( rval );
1195  int n = TLe.get_n();
1196  TLe.vi_wr[2 * n] = shprocs[0];
1197  TLe.vi_wr[2 * n + 1] = gid;
1198  TLe.vul_wr[n] = shhandles[0]; // the remote edge corresponding to shared edge
1199  edgeToCell[edge] = gid; // store the map between edge and local id of adj cell
1200  TLe.inc_n();
1201  }
1202  }
1203 
1204 #ifdef VERBOSE
1205  std::stringstream ff2;
1206  ff2 << "TLe_" << pco->rank() << ".txt";
1207  TLe.print_to_file( ff2.str().c_str() );
1208 #endif
1209  // send the data to the other processors:
1210  ( pco->proc_config().crystal_router() )->gs_transfer( 1, TLe, 0 );
1211  // on receiver side, each local edge will have the remote cell adjacent to it!
1212 
1213  int ne = TLe.get_n();
1214  for( int i = 0; i < ne; i++ )
1215  {
1216  int sharedProc = TLe.vi_rd[2 * i]; // this info is coming from here, originally
1217  int remoteCellID = TLe.vi_rd[2 * i + 1]; // this is the id of the remote cell, on sharedProc
1218  EntityHandle localCell = TLe.vul_rd[i]; // this is now local edge/face on this proc
1219  int localCellId = edgeToCell[localCell]; // this is the local cell adjacent to edge/face
1220  // now, we will need to add to the graph the pair <localCellId, remoteCellID>
1221  std::pair< int, int > extraAdj = std::make_pair( localCellId, remoteCellID );
1222  extraGraphEdges.insert( extraAdj );
1223  // adjCellsId [edgeToCell[localCell]] = remoteCellID;
1224  extraCellsProc[remoteCellID] = sharedProc;
1225 #ifdef VERBOSE
1226  std::cout << "local ID " << edgeToCell[localCell] << " remote cell ID: " << remoteCellID << "\n";
1227 #endif
1228  }
1229  }
1230  t2 = MPI_Wtime();
1231  if( rootSender ) std::cout << " time preparing the input for Zoltan:" << t2 - t1 << " seconds. \n";
1232  // so adj cells ids; need to call zoltan for parallel partition
1233 #ifdef MOAB_HAVE_ZOLTAN
1234  ZoltanPartitioner* mbZTool = new ZoltanPartitioner( mb, pco );
1235  if( 1 <= met ) // partition in zoltan, either graph or geometric partitioner
1236  {
1237  std::map< int, Range > distribution; // how to distribute owned elements by processors in receiving groups
1238  // in how many tasks do we want to be distributed?
1239  int numNewPartitions = (int)receiverTasks.size();
1240  Range primaryCells = owned.subset_by_dimension( primaryDim );
1241  rval = mbZTool->partition_owned_cells( primaryCells, extraGraphEdges, extraCellsProc, numNewPartitions,
1242  distribution, met );MB_CHK_ERR( rval );
1243  for( std::map< int, Range >::iterator mit = distribution.begin(); mit != distribution.end(); mit++ )
1244  {
1245  int part_index = mit->first;
1246  assert( part_index < numNewPartitions );
1247  split_ranges[receiverTasks[part_index]] = mit->second;
1248  }
1249  }
1250  // delete now the partitioner
1251  delete mbZTool;
1252 #endif
1253  t3 = MPI_Wtime();
1254  if( rootSender ) std::cout << " time spent by Zoltan " << t3 - t2 << " seconds. \n";
1255  return MB_SUCCESS;
1256 }

References moab::Range::begin(), moab::ProcConfig::crystal_router(), moab::Range::empty(), moab::TupleList::enableWriteAccess(), moab::Range::end(), ErrorCode, moab::ParallelComm::get_moab(), moab::TupleList::get_n(), moab::ParallelComm::get_shared_entities(), moab::ParallelComm::get_sharing_data(), moab::TupleList::inc_n(), moab::TupleList::initialize(), MAX_SHARING_PROCS, mb, MB_CHK_ERR, MB_SUCCESS, ZoltanPartitioner::partition_owned_cells(), moab::TupleList::print_to_file(), moab::ParallelComm::proc_config(), moab::ParallelComm::rank(), moab::Range::rbegin(), receiverTasks, rootSender, moab::Range::size(), split_ranges, moab::Range::subset_by_dimension(), moab::TupleList::vi_rd, moab::TupleList::vi_wr, moab::TupleList::vul_rd, and moab::TupleList::vul_wr.
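A possible sender-side call, as a hedged sketch: the helper name, the file set handle, and the choice of dimension 2 for the primary cells are assumptions for illustration. Only met == 1 triggers the cross-task adjacency exchange shown above, and the actual partitioning requires MOAB to be built with Zoltan (MOAB_HAVE_ZOLTAN).

    #include <ParCommGraph.hpp>
    #include <ParallelComm.hpp>

    // Sketch: compute a new partition of the locally owned cells toward the receiver tasks.
    moab::ErrorCode example_compute_partition( moab::ParCommGraph& cgraph,
                                               moab::ParallelComm* pco,
                                               moab::EntityHandle fileSet )
    {
        moab::Range owned;
        moab::ErrorCode rval = pco->get_moab()->get_entities_by_dimension( fileSet, 2, owned );
        if( moab::MB_SUCCESS != rval ) return rval;

        int met = 1;  // 1: gather cross-task adjacencies first; 2: skip them (geometry only)
        rval = cgraph.compute_partition( pco, owned, met );
        // on success, split_ranges maps each receiver task to the primary cells assigned to it
        return rval;
    }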

◆ compute_trivial_partition()

ErrorCode moab::ParCommGraph::compute_trivial_partition ( std::vector< int > &  numElemsPerTaskInGroup1)

Based on the number of elements on each task in group 1, partition for group 2, trivially.

Operations: it is called on every receiver task; it decides how all the elements are distributed.

Note: establishes how many elements are sent from each task in group 1 to tasks in group 2. This call is usually made on a root / master process, and it constructs local maps (member data) that contain the communication graph in both directions, as well as the number of elements migrated/exchanged between each sender/receiver pair.

Parameters
[in]  numElemsPerTaskInGroup1  (std::vector<int> &) number of elements on each sender task

Definition at line 100 of file ParCommGraph.cpp.

101 {
102 
103  recv_graph.clear();
104  recv_sizes.clear();
105  sender_graph.clear();
106  sender_sizes.clear();
107 
108  if( numElemsPerTaskInGroup1.size() != senderTasks.size() )
109  return MB_FAILURE; // each sender has a number of elements that it owns
110 
111  // first find out total number of elements to be sent from all senders
112  int total_elems = 0;
113  std::vector< int > accum;
114  accum.push_back( 0 );
115 
116  int num_senders = (int)senderTasks.size();
117 
118  for( size_t k = 0; k < numElemsPerTaskInGroup1.size(); k++ )
119  {
120  total_elems += numElemsPerTaskInGroup1[k];
121  accum.push_back( total_elems );
122  }
123 
124  int num_recv = ( (int)receiverTasks.size() );
125  // in trivial partition, every receiver should get about total_elems/num_receivers elements
126  int num_per_receiver = (int)( total_elems / num_recv );
127  int leftover = total_elems - num_per_receiver * num_recv;
128 
129  // so receiver k will receive [starts[k], starts[k+1] ) interval
130  std::vector< int > starts;
131  starts.resize( num_recv + 1 );
132  starts[0] = 0;
133  for( int k = 0; k < num_recv; k++ )
134  {
135  starts[k + 1] = starts[k] + num_per_receiver;
136  if( k < leftover ) starts[k + 1]++;
137  }
138 
139  // each sender will send to a number of receivers, based on how the
140  // arrays starts[0:num_recv] and accum[0:sendr] overlap
141  int lastUsedReceiverRank = 0; // first receiver was not treated yet
142  for( int j = 0; j < num_senders; j++ )
143  {
144  // we could start the receiver loop with the latest receiver that received from previous
145  // sender
146  for( int k = lastUsedReceiverRank; k < num_recv; k++ )
147  {
148  // if overlap:
149  if( starts[k] < accum[j + 1] && starts[k + 1] > accum[j] )
150  {
151  recv_graph[receiverTasks[k]].push_back( senderTasks[j] );
152  sender_graph[senderTasks[j]].push_back( receiverTasks[k] );
153 
154  // we still need to decide what is the overlap
155  int sizeOverlap = 1; // at least 1, for sure
156  // 1
157  if( starts[k] >= accum[j] ) // one end is starts[k]
158  {
159  if( starts[k + 1] >= accum[j + 1] ) // the other end is accum[j+1]
160  sizeOverlap = accum[j + 1] - starts[k];
161  else //
162  sizeOverlap = starts[k + 1] - starts[k];
163  }
164  else // one end is accum[j]
165  {
166  if( starts[k + 1] >= accum[j + 1] ) // the other end is accum[j+1]
167  sizeOverlap = accum[j + 1] - accum[j];
168  else
169  sizeOverlap = starts[k + 1] - accum[j];
170  }
171  recv_sizes[receiverTasks[k]].push_back( sizeOverlap ); // basically, task k will receive from
172  // sender j, sizeOverlap elems
173  sender_sizes[senderTasks[j]].push_back( sizeOverlap );
174  if( starts[k] > accum[j + 1] )
175  {
176  lastUsedReceiverRank = k - 1; // so next k loop will start a little higher, we
177  // probably finished with first few receivers (up
178  // to receiver lastUsedReceiverRank)
179  break; // break the k loop, we distributed all elements from sender j to some
180  // receivers
181  }
182  }
183  }
184  }
185 
186  return MB_SUCCESS;
187 }

References MB_SUCCESS, receiverTasks, recv_graph, recv_sizes, sender_graph, sender_sizes, and senderTasks.
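As a small worked example of the interval arithmetic above (the element counts are made up): with two senders owning 5 and 7 elements and three receivers, total_elems = 12, num_per_receiver = 4 and leftover = 0, so accum = {0, 5, 12} and starts = {0, 4, 8, 12}. Overlapping the two sets of intervals gives: receiver 0 gets 4 elements from sender 0; receiver 1 gets 1 element from sender 0 and 3 from sender 1; receiver 2 gets 4 elements from sender 1.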

◆ dump_comm_information()

ErrorCode moab::ParCommGraph::dump_comm_information ( std::string  prefix,
int  is_send 
)

Definition at line 1601 of file ParCommGraph.cpp.

1602 {
1603  //
1604  if( -1 != rankInGroup1 && 1 == is_send ) // it is a sender task
1605  {
1606  std::ofstream dbfile;
1607  std::stringstream outf;
1608  outf << prefix << "_sender_" << rankInGroup1 << "_joint" << rankInJoin << "_type_" << (int)graph_type << ".txt";
1609  dbfile.open( outf.str().c_str() );
1610 
1611  if( graph_type == COVERAGE )
1612  {
1613  for( std::map< int, std::vector< int > >::iterator mit = involved_IDs_map.begin();
1614  mit != involved_IDs_map.end(); mit++ )
1615  {
1616  int receiver_proc = mit->first;
1617  std::vector< int >& eids = mit->second;
1618  dbfile << "receiver: " << receiver_proc << " size:" << eids.size() << "\n";
1619  }
1620  }
1621  else if( graph_type == INITIAL_MIGRATE ) // just after migration
1622  {
1623  for( std::map< int, Range >::iterator mit = split_ranges.begin(); mit != split_ranges.end(); mit++ )
1624  {
1625  int receiver_proc = mit->first;
1626  Range& eids = mit->second;
1627  dbfile << "receiver: " << receiver_proc << " size:" << eids.size() << "\n";
1628  }
1629  }
1630  else if( graph_type == DOF_BASED ) // just after migration, or from computeGraph
1631  {
1632  for( std::map< int, std::vector< int > >::iterator mit = involved_IDs_map.begin();
1633  mit != involved_IDs_map.end(); mit++ )
1634  {
1635  int receiver_proc = mit->first;
1636  dbfile << "receiver: " << receiver_proc << " size:" << mit->second.size() << "\n";
1637  }
1638  }
1639  dbfile.close();
1640  }
1641  if( -1 != rankInGroup2 && 0 == is_send ) // it is a receiver task
1642  {
1643  std::ofstream dbfile;
1644  std::stringstream outf;
1645  outf << prefix << "_receiver_" << rankInGroup2 << "_joint" << rankInJoin << "_type_" << (int)graph_type
1646  << ".txt";
1647  dbfile.open( outf.str().c_str() );
1648 
1649  if( graph_type == COVERAGE )
1650  {
1651  for( std::map< int, std::vector< int > >::iterator mit = involved_IDs_map.begin();
1652  mit != involved_IDs_map.end(); mit++ )
1653  {
1654  int sender_proc = mit->first;
1655  std::vector< int >& eids = mit->second;
1656  dbfile << "sender: " << sender_proc << " size:" << eids.size() << "\n";
1657  }
1658  }
1659  else if( graph_type == INITIAL_MIGRATE ) // just after migration
1660  {
1661  for( std::map< int, Range >::iterator mit = split_ranges.begin(); mit != split_ranges.end(); mit++ )
1662  {
1663  int sender_proc = mit->first;
1664  Range& eids = mit->second;
1665  dbfile << "sender: " << sender_proc << " size:" << eids.size() << "\n";
1666  }
1667  }
1668  else if( graph_type == DOF_BASED ) // just after migration
1669  {
1670  for( std::map< int, std::vector< int > >::iterator mit = involved_IDs_map.begin();
1671  mit != involved_IDs_map.end(); mit++ )
1672  {
1673  int sender_proc = mit->first;
1674  dbfile << "receiver: " << sender_proc << " size:" << mit->second.size() << "\n";
1675  }
1676  }
1677  dbfile.close();
1678  }
1679  return MB_SUCCESS;
1680 }

References COVERAGE, DOF_BASED, graph_type, INITIAL_MIGRATE, involved_IDs_map, MB_SUCCESS, rankInGroup1, rankInGroup2, rankInJoin, moab::Range::size(), and split_ranges.

◆ find_group_ranks()

void moab::ParCommGraph::find_group_ranks ( MPI_Group  group,
MPI_Comm  join,
std::vector< int > &  ranks 
)
private

find ranks of a group with respect to an encompassing communicator

Operations: Local, usually called on root process of the group

Parameters
[in]   joincomm   (MPI_Comm)
[in]   group      (MPI_Group)
[out]  ranks      (std::vector<int>) ranks with respect to the joint communicator

Definition at line 81 of file ParCommGraph.cpp.

82 {
83  MPI_Group global_grp;
84  MPI_Comm_group( joincomm, &global_grp );
85 
86  int grp_size;
87 
88  MPI_Group_size( group, &grp_size );
89  std::vector< int > rks( grp_size );
90  ranks.resize( grp_size );
91 
92  for( int i = 0; i < grp_size; i++ )
93  rks[i] = i;
94 
95  MPI_Group_translate_ranks( group, grp_size, &rks[0], global_grp, &ranks[0] );
96  MPI_Group_free( &global_grp );
97  return;
98 }

Referenced by ParCommGraph().

◆ form_mesh_from_tuples()

ErrorCode moab::ParCommGraph::form_mesh_from_tuples ( Interface *  mb,
TupleList &  TLv,
TupleList &  TLc,
int  type,
int  lenTagType1,
EntityHandle  fset,
Range &  primary_ents,
std::vector< int > &  values_entities 
)

Definition at line 1391 of file ParCommGraph.cpp.

1399 {
1400  // might need to fill also the split_range things
1401  // we will always need GlobalID tag
1402  Tag gidtag = mb->globalId_tag();
1403  // for Type1, we need GLOBAL_DOFS tag;
1404  Tag gds;
1405  ErrorCode rval;
1406  std::vector< int > def_val( lenTagType1, 0 );
1407  if( type == 1 )
1408  {
1409  // we may need to create this tag
1410  rval = mb->tag_get_handle( "GLOBAL_DOFS", lenTagType1, MB_TYPE_INTEGER, gds, MB_TAG_CREAT | MB_TAG_DENSE,
1411  &def_val[0] );MB_CHK_ERR( rval );
1412  }
1413 
1414  std::map< int, EntityHandle > vertexMap; //
1415  Range verts;
1416  // always form vertices and add them to the fset;
1417  int n = TLv.get_n();
1418  EntityHandle vertex;
1419  for( int i = 0; i < n; i++ )
1420  {
1421  int gid = TLv.vi_rd[2 * i + 1];
1422  if( vertexMap.find( gid ) == vertexMap.end() )
1423  {
1424  // need to form this vertex
1425  rval = mb->create_vertex( &( TLv.vr_rd[3 * i] ), vertex );MB_CHK_ERR( rval );
1426  vertexMap[gid] = vertex;
1427  verts.insert( vertex );
1428  rval = mb->tag_set_data( gidtag, &vertex, 1, &gid );MB_CHK_ERR( rval );
1429  // if point cloud,
1430  }
1431  if( 2 == type ) // point cloud, add it to the split_ranges ?
1432  {
1433  split_ranges[TLv.vi_rd[2 * i]].insert( vertexMap[gid] );
1434  }
1435  // todo : when to add the values_entities ?
1436  }
1437  rval = mb->add_entities( fset, verts );MB_CHK_ERR( rval );
1438  if( 2 == type )
1439  {
1440  primary_ents = verts;
1441  values_entities.resize( verts.size() ); // just get the ids of vertices
1442  rval = mb->tag_get_data( gidtag, verts, &values_entities[0] );MB_CHK_ERR( rval );
1443  return MB_SUCCESS;
1444  }
1445 
1446  n = TLc.get_n();
1447  int size_tuple = 2 + ( ( type != 1 ) ? 0 : lenTagType1 ) + 1 + 10; // 10 is the max number of vertices in cell
1448 
1449  EntityHandle new_element;
1450  Range cells;
1451  std::map< int, EntityHandle > cellMap; // do no tcreate one if it already exists, maybe from other processes
1452  for( int i = 0; i < n; i++ )
1453  {
1454  int from_proc = TLc.vi_rd[size_tuple * i];
1455  int globalIdEl = TLc.vi_rd[size_tuple * i + 1];
1456  if( cellMap.find( globalIdEl ) == cellMap.end() ) // need to create the cell
1457  {
1458  int current_index = 2;
1459  if( 1 == type ) current_index += lenTagType1;
1460  int nnodes = TLc.vi_rd[size_tuple * i + current_index];
1461  std::vector< EntityHandle > conn;
1462  conn.resize( nnodes );
1463  for( int j = 0; j < nnodes; j++ )
1464  {
1465  conn[j] = vertexMap[TLc.vi_rd[size_tuple * i + current_index + j + 1]];
1466  }
1467  //
1468  EntityType entType = MBQUAD;
1469  if( nnodes > 4 ) entType = MBPOLYGON;
1470  if( nnodes < 4 ) entType = MBTRI;
1471  rval = mb->create_element( entType, &conn[0], nnodes, new_element );MB_CHK_SET_ERR( rval, "can't create new element " );
1472  cells.insert( new_element );
1473  cellMap[globalIdEl] = new_element;
1474  rval = mb->tag_set_data( gidtag, &new_element, 1, &globalIdEl );MB_CHK_SET_ERR( rval, "can't set global id tag on cell " );
1475  if( 1 == type )
1476  {
1477  // set the gds tag
1478  rval = mb->tag_set_data( gds, &new_element, 1, &( TLc.vi_rd[size_tuple * i + 2] ) );MB_CHK_SET_ERR( rval, "can't set gds tag on cell " );
1479  }
1480  }
1481  split_ranges[from_proc].insert( cellMap[globalIdEl] );
1482  }
1483  rval = mb->add_entities( fset, cells );MB_CHK_ERR( rval );
1484  primary_ents = cells;
1485  if( 1 == type )
1486  {
1487  values_entities.resize( lenTagType1 * primary_ents.size() );
1488  rval = mb->tag_get_data( gds, primary_ents, &values_entities[0] );MB_CHK_ERR( rval );
1489  }
1490  else // type == 3
1491  {
1492  values_entities.resize( primary_ents.size() ); // just get the ids !
1493  rval = mb->tag_get_data( gidtag, primary_ents, &values_entities[0] );MB_CHK_ERR( rval );
1494  }
1495  return MB_SUCCESS;
1496 }

References ErrorCode, moab::TupleList::get_n(), moab::Range::insert(), mb, MB_CHK_ERR, MB_CHK_SET_ERR, MB_SUCCESS, MB_TAG_CREAT, MB_TAG_DENSE, MB_TYPE_INTEGER, MBPOLYGON, MBQUAD, MBTRI, moab::Range::size(), split_ranges, moab::TupleList::vi_rd, and moab::TupleList::vr_rd.

◆ form_tuples_to_migrate_mesh()

ErrorCode moab::ParCommGraph::form_tuples_to_migrate_mesh ( Interface *  mb,
TupleList &  TLv,
TupleList &  TLc,
int  type,
int  lenTagType1 
)

Definition at line 1304 of file ParCommGraph.cpp.

1309 {
1310  // we will always need GlobalID tag
1311  Tag gidtag = mb->globalId_tag();
1312  // for Type1, we need GLOBAL_DOFS tag;
1313  Tag gds;
1314  ErrorCode rval;
1315  if( type == 1 )
1316  {
1317  rval = mb->tag_get_handle( "GLOBAL_DOFS", gds );
1318  }
1319  // find vertices to be sent, for each of the split_ranges procs
1320  std::map< int, Range > verts_to_proc;
1321  int numv = 0, numc = 0;
1322  for( auto it = split_ranges.begin(); it != split_ranges.end(); it++ )
1323  {
1324  int to_proc = it->first;
1325  Range verts;
1326  if( type != 2 )
1327  {
1328  rval = mb->get_connectivity( it->second, verts );MB_CHK_ERR( rval );
1329  numc += (int)it->second.size();
1330  }
1331  else
1332  verts = it->second;
1333  verts_to_proc[to_proc] = verts;
1334  numv += (int)verts.size();
1335  }
1336  // first vertices:
1337  TLv.initialize( 2, 0, 0, 3, numv ); // to proc, GLOBAL ID, 3 real coordinates
1338  TLv.enableWriteAccess();
1339  // use the global id of vertices for connectivity
1340  for( auto it = verts_to_proc.begin(); it != verts_to_proc.end(); it++ )
1341  {
1342  int to_proc = it->first;
1343  Range& verts = it->second;
1344  for( Range::iterator vit = verts.begin(); vit != verts.end(); ++vit )
1345  {
1346  EntityHandle v = *vit;
1347  int n = TLv.get_n(); // current size of tuple list
1348  TLv.vi_wr[2 * n] = to_proc; // send to processor
1349 
1350  rval = mb->tag_get_data( gidtag, &v, 1, &( TLv.vi_wr[2 * n + 1] ) );MB_CHK_ERR( rval );
1351  rval = mb->get_coords( &v, 1, &( TLv.vr_wr[3 * n] ) );MB_CHK_ERR( rval );
1352  TLv.inc_n(); // increment tuple list size
1353  }
1354  }
1355  if( type == 2 ) return MB_SUCCESS; // no need to fill TLc
1356  // to proc, ID cell, gdstag, nbv, id conn,
1357  int size_tuple = 2 + ( ( type != 1 ) ? 0 : lenTagType1 ) + 1 + 10; // 10 is the max number of vertices in cell
1358 
1359  std::vector< int > gdvals;
1360 
1361  TLc.initialize( size_tuple, 0, 0, 0, numc ); // to proc, GLOBAL ID, 3 real coordinates
1362  TLc.enableWriteAccess();
1363  for( auto it = split_ranges.begin(); it != split_ranges.end(); it++ )
1364  {
1365  int to_proc = it->first;
1366  Range& cells = it->second;
1367  for( Range::iterator cit = cells.begin(); cit != cells.end(); ++cit )
1368  {
1369  EntityHandle cell = *cit;
1370  int n = TLc.get_n(); // current size of tuple list
1371  TLc.vi_wr[size_tuple * n] = to_proc;
1372  int current_index = 2;
1373  rval = mb->tag_get_data( gidtag, &cell, 1, &( TLc.vi_wr[size_tuple * n + 1] ) );MB_CHK_ERR( rval );
1374  if( 1 == type )
1375  {
1376  rval = mb->tag_get_data( gds, &cell, 1, &( TLc.vi_wr[size_tuple * n + current_index] ) );MB_CHK_ERR( rval );
1377  current_index += lenTagType1;
1378  }
1379  // now get connectivity
1380  const EntityHandle* conn = NULL;
1381  int nnodes = 0;
1382  rval = mb->get_connectivity( cell, conn, nnodes );MB_CHK_ERR( rval );
1383  // fill nnodes:
1384  TLc.vi_wr[size_tuple * n + current_index] = nnodes;
1385  rval = mb->tag_get_data( gidtag, conn, nnodes, &( TLc.vi_wr[size_tuple * n + current_index + 1] ) );MB_CHK_ERR( rval );
1386  TLc.inc_n(); // increment tuple list size
1387  }
1388  }
1389  return MB_SUCCESS;
1390 }

References moab::Range::begin(), moab::TupleList::enableWriteAccess(), moab::Range::end(), ErrorCode, moab::TupleList::get_n(), moab::TupleList::inc_n(), moab::TupleList::initialize(), mb, MB_CHK_ERR, MB_SUCCESS, moab::Range::size(), split_ranges, moab::TupleList::vi_wr, and moab::TupleList::vr_wr.
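For illustration (the sizes are hypothetical): with type == 1 and lenTagType1 == 4, each vertex tuple in TLv holds 2 integers (destination task, vertex global id) and 3 doubles (coordinates), while each cell tuple in TLc holds size_tuple = 2 + 4 + 1 + 10 = 17 integers: destination task, cell global id, the 4 GLOBAL_DOFS values, the vertex count nnodes, and up to 10 vertex global ids (trailing slots stay unused when nnodes < 10).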

◆ get_component_id1()

int moab::ParCommGraph::get_component_id1 ( )
inline

Definition at line 153 of file ParCommGraph.hpp.

154  {
155  return compid1;
156  }

References compid1.

◆ get_component_id2()

int moab::ParCommGraph::get_component_id2 ( )
inline

Definition at line 157 of file ParCommGraph.hpp.

158  {
159  return compid2;
160  }

References compid2.

◆ get_context_id()

int moab::ParCommGraph::get_context_id ( )
inline

Definition at line 162 of file ParCommGraph.hpp.

163  {
164  return context_id;
165  }

References context_id.

◆ get_cover_set()

EntityHandle moab::ParCommGraph::get_cover_set ( )
inline

Definition at line 171 of file ParCommGraph.hpp.

172  {
173  return cover_set;
174  }

References cover_set.

◆ is_root_receiver()

bool moab::ParCommGraph::is_root_receiver ( )
inline

Definition at line 138 of file ParCommGraph.hpp.

139  {
140  return rootReceiver;
141  }

References rootReceiver.

◆ is_root_sender()

bool moab::ParCommGraph::is_root_sender ( )
inline

Definition at line 133 of file ParCommGraph.hpp.

134  {
135  return rootSender;
136  }

References rootSender.

Referenced by send_graph(), send_graph_partition(), and send_mesh_parts().

◆ pack_receivers_graph()

ErrorCode moab::ParCommGraph::pack_receivers_graph ( std::vector< int > &  packed_recv_array)

pack information about the receivers' view of the graph, for future sending to the receiver root

Operations: Local, called on root process of the senders group

Parameters
[out]  packed_recv_array  packed data that will be sent to the root of the receivers and distributed from there. It contains, for each receiver, concatenated: the receiver 1 task, the number of senders for receiver 1, then the sender tasks for receiver 1; the receiver 2 task, the number of senders for receiver 2, the sender tasks for receiver 2; etc. Note: only the root of the senders computes this and sends it over to the receiver root, which distributes it to each receiver. We do not pack the sizes of the data to be sent, only the senders for each of the receivers (this could be of size O(n^2), where n is the number of tasks, but in general it should be more like O(n): each sender sends to a "finite" number of receivers, and each receiver receives from a finite number of senders). We need this info to decide how to set up the send/receive waiting game for non-blocking communication.

Definition at line 189 of file ParCommGraph.cpp.

190 {
191  // it will basically look at local data, to pack communication graph, each receiver task will
192  // have to post receives for each sender task that will send data to it; the array will be
193  // communicated to root receiver, and eventually distributed to receiver tasks
194 
195  /*
196  * packed_array will have receiver, number of senders, then senders, etc
197  */
198  if( recv_graph.size() < receiverTasks.size() )
199  {
200  // big problem, we have empty partitions in receive
201  std::cout << " WARNING: empty partitions, some receiver tasks will receive nothing.\n";
202  }
203  for( std::map< int, std::vector< int > >::iterator it = recv_graph.begin(); it != recv_graph.end(); it++ )
204  {
205  int recv = it->first;
206  std::vector< int >& senders = it->second;
207  packed_recv_array.push_back( recv );
208  packed_recv_array.push_back( (int)senders.size() );
209 
210  for( int k = 0; k < (int)senders.size(); k++ )
211  packed_recv_array.push_back( senders[k] );
212  }
213 
214  return MB_SUCCESS;
215 }

References MB_SUCCESS, receiverTasks, recv_graph, and senders().

Referenced by send_graph().
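A small sketch of the resulting layout (the task numbers are made up): if receiver task 4 gets data from senders {0, 1} and receiver task 5 from sender {2}, the packed array is {4, 2, 0, 1, 5, 1, 2}, and it can be walked like this:

    #include <iostream>
    #include <vector>

    int main()
    {
        // hypothetical packed array: receiver 4 <- senders {0, 1}; receiver 5 <- sender {2}
        std::vector< int > packed_recv_array = { 4, 2, 0, 1, 5, 1, 2 };
        size_t i = 0;
        while( i < packed_recv_array.size() )
        {
            int recvTask   = packed_recv_array[i++];
            int numSenders = packed_recv_array[i++];
            std::cout << "receiver " << recvTask << " posts receives for senders:";
            for( int k = 0; k < numSenders; k++ )
                std::cout << " " << packed_recv_array[i++];
            std::cout << "\n";
        }
        return 0;
    }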

◆ receive_comm_graph()

ErrorCode moab::ParCommGraph::receive_comm_graph ( MPI_Comm  jcomm,
ParallelComm *  pco,
std::vector< int > &  pack_array 
)

Definition at line 357 of file ParCommGraph.cpp.

358 {
359  // first, receive from sender_rank 0, the communication graph (matrix), so each receiver
360  // knows what data to expect
361  MPI_Comm receive = pco->comm();
362  int size_pack_array, ierr;
363  MPI_Status status;
364  if( rootReceiver )
365  {
366  /*
367  * MPI_Probe(
368  int source,
369  int tag,
370  MPI_Comm comm,
371  MPI_Status* status)
372  *
373  */
374  ierr = MPI_Probe( sender( 0 ), 20, jcomm, &status );
375  if( 0 != ierr )
376  {
377  std::cout << " MPI_Probe failure: " << ierr << "\n";
378  return MB_FAILURE;
379  }
380  // get the count of data received from the MPI_Status structure
381  ierr = MPI_Get_count( &status, MPI_INT, &size_pack_array );
382  if( 0 != ierr )
383  {
384  std::cout << " MPI_Get_count failure: " << ierr << "\n";
385  return MB_FAILURE;
386  }
387 #ifdef VERBOSE
388  std::cout << " receive comm graph size: " << size_pack_array << "\n";
389 #endif
390  pack_array.resize( size_pack_array );
391  ierr = MPI_Recv( &pack_array[0], size_pack_array, MPI_INT, sender( 0 ), 20, jcomm, &status );
392  if( 0 != ierr ) return MB_FAILURE;
393 #ifdef VERBOSE
394  std::cout << " receive comm graph ";
395  for( int k = 0; k < (int)pack_array.size(); k++ )
396  std::cout << " " << pack_array[k];
397  std::cout << "\n";
398 #endif
399  }
400 
401  // now broadcast this whole array to all receivers, so they know what to expect
402  ierr = MPI_Bcast( &size_pack_array, 1, MPI_INT, 0, receive );
403  if( 0 != ierr ) return MB_FAILURE;
404  pack_array.resize( size_pack_array );
405  ierr = MPI_Bcast( &pack_array[0], size_pack_array, MPI_INT, 0, receive );
406  if( 0 != ierr ) return MB_FAILURE;
407  return MB_SUCCESS;
408 }

References moab::ParallelComm::comm(), MB_SUCCESS, rootReceiver, and sender().
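A hedged receiver-side sketch of how this pairs with receive_mesh(): the helper name is made up, and the unpacking of pack_array simply follows the layout produced by pack_receivers_graph() (receiver task, number of senders, sender tasks, repeated).

    #include <ParCommGraph.hpp>
    #include <ParallelComm.hpp>
    #include <vector>

    moab::ErrorCode example_receive_side( moab::ParCommGraph& cgraph,
                                          moab::ParallelComm* pco,
                                          MPI_Comm jcomm,
                                          moab::EntityHandle local_set )
    {
        std::vector< int > pack_array;
        moab::ErrorCode rval = cgraph.receive_comm_graph( jcomm, pco, pack_array );
        if( moab::MB_SUCCESS != rval ) return rval;

        // collect the senders (joint-communicator ranks) that will send to this task
        int myJointRank;
        MPI_Comm_rank( jcomm, &myJointRank );
        std::vector< int > senders_local;
        for( size_t i = 0; i < pack_array.size(); )
        {
            int recvTask   = pack_array[i++];
            int numSenders = pack_array[i++];
            for( int k = 0; k < numSenders; k++, i++ )
                if( recvTask == myJointRank ) senders_local.push_back( pack_array[i] );
        }
        return cgraph.receive_mesh( jcomm, pco, local_set, senders_local );
    }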

◆ receive_mesh()

ErrorCode moab::ParCommGraph::receive_mesh ( MPI_Comm  jcomm,
ParallelComm *  pco,
EntityHandle  local_set,
std::vector< int > &  senders_local 
)

Definition at line 410 of file ParCommGraph.cpp.

414 {
415  ErrorCode rval;
416  int ierr;
417  MPI_Status status;
418  // we also need to fill corresponding mesh info on the other side
419  corr_tasks = senders_local;
420  Range newEnts;
421 
422  Tag orgSendProcTag; // this will be a tag set on the received mesh, with info about from what
423  // task / PE the
424  // primary element came from, in the joint communicator ; this will be forwarded by coverage
425  // mesh
426  int defaultInt = -1; // no processor, so it was not migrated from somewhere else
427  rval = pco->get_moab()->tag_get_handle( "orig_sending_processor", 1, MB_TYPE_INTEGER, orgSendProcTag,
428  MB_TAG_DENSE | MB_TAG_CREAT, &defaultInt );MB_CHK_SET_ERR( rval, "can't create original sending processor tag" );
429  if( !senders_local.empty() )
430  {
431  for( size_t k = 0; k < senders_local.size(); k++ )
432  {
433  int sender1 = senders_local[k];
434  // first receive the size of the buffer using probe
435  /*
436  * MPI_Probe(
437  int source,
438  int tag,
439  MPI_Comm comm,
440  MPI_Status* status)
441  *
442  */
443  ierr = MPI_Probe( sender1, 2, jcomm, &status );
444  if( 0 != ierr )
445  {
446  std::cout << " MPI_Probe failure in ParCommGraph::receive_mesh " << ierr << "\n";
447  return MB_FAILURE;
448  }
449  // get the count of data received from the MPI_Status structure
450  int size_pack;
451  ierr = MPI_Get_count( &status, MPI_CHAR, &size_pack );
452  if( 0 != ierr )
453  {
454  std::cout << " MPI_Get_count failure in ParCommGraph::receive_mesh " << ierr << "\n";
455  return MB_FAILURE;
456  }
457 
458  /* ierr = MPI_Recv (&size_pack, 1, MPI_INT, sender1, 1, jcomm, &status);
459  if (0!=ierr) return MB_FAILURE;*/
460  // now resize the buffer, then receive it
461  ParallelComm::Buffer* buffer = new ParallelComm::Buffer( size_pack );
462  // buffer->reserve(size_pack);
463 
464  ierr = MPI_Recv( buffer->mem_ptr, size_pack, MPI_CHAR, sender1, 2, jcomm, &status );
465  if( 0 != ierr )
466  {
467  std::cout << " MPI_Recv failure in ParCommGraph::receive_mesh " << ierr << "\n";
468  return MB_FAILURE;
469  }
470  // now unpack the buffer we just received
471  Range entities;
472  std::vector< std::vector< EntityHandle > > L1hloc, L1hrem;
473  std::vector< std::vector< int > > L1p;
474  std::vector< EntityHandle > L2hloc, L2hrem;
475  std::vector< unsigned int > L2p;
476 
477  buffer->reset_ptr( sizeof( int ) );
478  std::vector< EntityHandle > entities_vec( entities.size() );
479  std::copy( entities.begin(), entities.end(), entities_vec.begin() );
480  rval = pco->unpack_buffer( buffer->buff_ptr, false, -1, -1, L1hloc, L1hrem, L1p, L2hloc, L2hrem, L2p,
481  entities_vec );
482  delete buffer;
483  if( MB_SUCCESS != rval ) return rval;
484 
485  std::copy( entities_vec.begin(), entities_vec.end(), range_inserter( entities ) );
486  // we have to add them to the local set
487  rval = pco->get_moab()->add_entities( local_set, entities );
488  if( MB_SUCCESS != rval ) return rval;
489  // corr_sizes is the size of primary entities received
490  Range verts = entities.subset_by_dimension( 0 );
491  Range local_primary_ents = subtract( entities, verts );
492  if( local_primary_ents.empty() )
493  {
494  // it is possible that all ents sent were vertices (point cloud)
495  // then consider primary entities the vertices
496  local_primary_ents = verts;
497  }
498  else
499  {
500  // set a tag with the original sender for the primary entity
501  // will be used later for coverage mesh
502  std::vector< int > orig_senders( local_primary_ents.size(), sender1 );
503  rval = pco->get_moab()->tag_set_data( orgSendProcTag, local_primary_ents, &orig_senders[0] );
504  }
505  corr_sizes.push_back( (int)local_primary_ents.size() );
506 
507  newEnts.merge( entities );
508  // make these in split ranges
509  split_ranges[sender1] = local_primary_ents;
510 
511 #ifdef VERBOSE
512  std::ostringstream partial_outFile;
513 
514  partial_outFile << "part_send_" << sender1 << "."
515  << "recv" << rankInJoin << ".vtk";
516 
517  // the mesh contains ghosts too, but they are not part of mat/neumann set
518  // write in serial the file, to see what tags are missing
519  std::cout << " writing from receiver " << rankInJoin << " from sender " << sender1
520  << " entities: " << entities.size() << std::endl;
521  rval = pco->get_moab()->write_file( partial_outFile.str().c_str(), 0, 0, &local_set,
522  1 ); // everything on local set received
523  if( MB_SUCCESS != rval ) return rval;
524 #endif
525  }
526  }
527  // make sure adjacencies are updated on the new elements
528 
529  if( newEnts.empty() )
530  {
531  std::cout << " WARNING: this task did not receive any entities \n";
532  }
533  // in order for the merging to work, we need to be sure that the adjacencies are updated
534  // (created)
535  Range local_verts = newEnts.subset_by_type( MBVERTEX );
536  newEnts = subtract( newEnts, local_verts );
537  Core* mb = (Core*)pco->get_moab();
538  AEntityFactory* adj_fact = mb->a_entity_factory();
539  if( !adj_fact->vert_elem_adjacencies() )
540  adj_fact->create_vert_elem_adjacencies();
541  else
542  {
543  for( Range::iterator it = newEnts.begin(); it != newEnts.end(); ++it )
544  {
545  EntityHandle eh = *it;
546  const EntityHandle* conn = NULL;
547  int num_nodes = 0;
548  rval = mb->get_connectivity( eh, conn, num_nodes );MB_CHK_ERR( rval );
549  adj_fact->notify_create_entity( eh, conn, num_nodes );
550  }
551  }
552 
553  return MB_SUCCESS;
554 }

References moab::Interface::add_entities(), moab::Range::begin(), buffer, corr_sizes, corr_tasks, moab::AEntityFactory::create_vert_elem_adjacencies(), moab::Range::empty(), moab::Range::end(), entities, ErrorCode, moab::ParallelComm::get_moab(), mb, MB_CHK_ERR, MB_CHK_SET_ERR, MB_SUCCESS, MB_TAG_CREAT, MB_TAG_DENSE, MB_TYPE_INTEGER, MBVERTEX, moab::Range::merge(), moab::AEntityFactory::notify_create_entity(), rankInJoin, moab::Range::size(), split_ranges, moab::Range::subset_by_type(), moab::subtract(), moab::Interface::tag_get_handle(), moab::Interface::tag_set_data(), moab::ParallelComm::unpack_buffer(), moab::AEntityFactory::vert_elem_adjacencies(), and moab::Interface::write_file().

◆ receive_tag_values()

ErrorCode moab::ParCommGraph::receive_tag_values ( MPI_Comm  jcomm,
ParallelComm *  pco,
Range &  owned,
std::vector< Tag > &  tag_handles 
)

Receive tag values for the owned entities from the corresponding sender tasks, using blocking receives.

Definition at line 782 of file ParCommGraph.cpp.

786 {
787  // opposite to sending, we will use blocking receives
788  int ierr;
789  MPI_Status status;
790  // basically, owned.size() needs to be equal to sum(corr_sizes)
791  // get info about the tag size, type, etc
792  Core* mb = (Core*)pco->get_moab();
793  // get info about the tag
794  //! Get the size of the specified tag in bytes
795  ErrorCode rval;
796  int total_bytes_per_entity = 0;
797  std::vector< int > vect_bytes_per_tag;
798 #ifdef VERBOSE
799  std::vector< int > tag_sizes;
800 #endif
801  for( size_t i = 0; i < tag_handles.size(); i++ )
802  {
803  int bytes_per_tag;
804  rval = mb->tag_get_bytes( tag_handles[i], bytes_per_tag );MB_CHK_ERR( rval );
805  total_bytes_per_entity += bytes_per_tag;
806  vect_bytes_per_tag.push_back( bytes_per_tag );
807 #ifdef VERBOSE
808  int tag_size;
809  rval = mb->tag_get_length( tag_handles[i], tag_size );MB_CHK_ERR( rval );
810  tag_sizes.push_back( tag_size );
811 #endif
812  }
813 
814  if( graph_type == INITIAL_MIGRATE )
815  {
816  // std::map<int, Range> split_ranges;
817  // rval = split_owned_range ( owned);MB_CHK_ERR ( rval );
818 
819  // use the buffers data structure to allocate memory for receiving the tags
820  for( std::map< int, Range >::iterator it = split_ranges.begin(); it != split_ranges.end(); it++ )
821  {
822  int sender_proc = it->first;
823  Range ents = it->second; // primary entities, with the tag data, we will receive
824  int size_buffer = 4 + total_bytes_per_entity *
825  (int)ents.size(); // hopefully, below 2B; if more, we have a big problem ...
826  ParallelComm::Buffer* buffer = new ParallelComm::Buffer( size_buffer );
827 
828  buffer->reset_ptr( sizeof( int ) );
829 
830  *( (int*)buffer->mem_ptr ) = size_buffer;
831  // int size_pack = buffer->get_current_size(); // debug check
832 
833  ierr = MPI_Recv( buffer->mem_ptr, size_buffer, MPI_CHAR, sender_proc, 222, jcomm, &status );
834  if( ierr != 0 ) return MB_FAILURE;
835  // now set the tag
836  // copy to tag
837 
838  for( size_t i = 0; i < tag_handles.size(); i++ )
839  {
840  rval = mb->tag_set_data( tag_handles[i], ents, (void*)( buffer->buff_ptr ) );
841  buffer->buff_ptr += vect_bytes_per_tag[i] * ents.size();
842  }
843  delete buffer; // no need for it afterwards
844  MB_CHK_ERR( rval );
845  }
846  }
847  else if( graph_type == COVERAGE ) // receive buffer, then extract tag data, in a loop
848  {
849  // we know that we will need to receive some tag data in a specific order (by ids stored)
850  // first, get the ids of the local elements, from owned Range; unpack the buffer in order
851  Tag gidTag = mb->globalId_tag();
852  std::vector< int > gids;
853  gids.resize( owned.size() );
854  rval = mb->tag_get_data( gidTag, owned, &gids[0] );MB_CHK_ERR( rval );
855  std::map< int, EntityHandle > gidToHandle;
856  size_t i = 0;
857  for( Range::iterator it = owned.begin(); it != owned.end(); it++ )
858  {
859  EntityHandle eh = *it;
860  gidToHandle[gids[i++]] = eh;
861  }
862  //
863  // now, unpack the data and set it to the tag
864  for( std::map< int, std::vector< int > >::iterator mit = involved_IDs_map.begin();
865  mit != involved_IDs_map.end(); mit++ )
866  {
867  int sender_proc = mit->first;
868  std::vector< int >& eids = mit->second;
869  int size_buffer = 4 + total_bytes_per_entity *
870  (int)eids.size(); // hopefully, below 2B; if more, we have a big problem ...
871  ParallelComm::Buffer* buffer = new ParallelComm::Buffer( size_buffer );
872  buffer->reset_ptr( sizeof( int ) );
873  *( (int*)buffer->mem_ptr ) = size_buffer; // this is really not necessary, it should receive this too
874 
875  // receive the buffer
876  ierr = MPI_Recv( buffer->mem_ptr, size_buffer, MPI_CHAR, sender_proc, 222, jcomm, &status );
877  if( ierr != 0 ) return MB_FAILURE;
878 // start copy
879 #ifdef VERBOSE
880  std::ofstream dbfile;
881  std::stringstream outf;
882  outf << "recvFrom_" << sender_proc << "_on_proc_" << rankInJoin << ".txt";
883  dbfile.open( outf.str().c_str() );
884  dbfile << "recvFrom_" << sender_proc << " on proc " << rankInJoin << "\n";
885 #endif
886 
887  // copy tag data from buffer->buff_ptr
888  // data is arranged by tag , and repeat the loop for each entity ()
889  // maybe it should be arranged by entity now, not by tag (so one loop for entities,
890  // outside)
891 
892  for( std::vector< int >::iterator it = eids.begin(); it != eids.end(); it++ )
893  {
894  int eID = *it;
895  std::map< int, EntityHandle >::iterator mit2 = gidToHandle.find( eID );
896  if( mit2 == gidToHandle.end() )
897  {
898  std::cout << " on rank: " << rankInJoin << " cannot find entity handle with global ID " << eID
899  << "\n";
900  return MB_FAILURE;
901  }
902  EntityHandle eh = mit2->second;
903  for( i = 0; i < tag_handles.size(); i++ )
904  {
905  rval = mb->tag_set_data( tag_handles[i], &eh, 1, (void*)( buffer->buff_ptr ) );MB_CHK_ERR( rval );
906 #ifdef VERBOSE
907  dbfile << "global ID " << eID << " local handle " << mb->id_from_handle( eh ) << " vals: ";
908  double* vals = (double*)( buffer->buff_ptr );
909  for( int kk = 0; kk < tag_sizes[i]; kk++ )
910  {
911  dbfile << " " << *vals;
912  vals++;
913  }
914  dbfile << "\n";
915 #endif
916  buffer->buff_ptr += vect_bytes_per_tag[i];
917  }
918  }
919 
920  // delete receive buffer
921  delete buffer;
922 #ifdef VERBOSE
923  dbfile.close();
924 #endif
925  }
926  }
927  else if( graph_type == DOF_BASED )
928  {
929  // need to fill up the values for each tag, in the order desired, from the buffer received
930  //
931  // get all the tags, for all owned entities, and pack the buffers accordingly
932  // we do not want to get the tags by entity, it may be too expensive
933  std::vector< std::vector< double > > valuesTags;
934  valuesTags.resize( tag_handles.size() );
935  for( size_t i = 0; i < tag_handles.size(); i++ )
936  {
937  int bytes_per_tag;
938  rval = mb->tag_get_bytes( tag_handles[i], bytes_per_tag );MB_CHK_ERR( rval );
939  valuesTags[i].resize( owned.size() * bytes_per_tag / sizeof( double ) );
940  // fill the whole array, we will pick up from here
941  // we will fill this array, using data from received buffer
942  // rval = mb->tag_get_data(owned, (void*)( &(valuesTags[i][0])) );MB_CHK_ERR ( rval );
943  }
944  // now, unpack the data and set the tags
945  sendReqs.resize( involved_IDs_map.size() );
946  for( std::map< int, std::vector< int > >::iterator mit = involved_IDs_map.begin();
947  mit != involved_IDs_map.end(); mit++ )
948  {
949  int sender_proc = mit->first;
950  std::vector< int >& eids = mit->second;
951  std::vector< int >& index_in_values = map_index[sender_proc];
952  std::vector< int >& index_ptr = map_ptr[sender_proc]; // this is eids.size()+1;
953  int size_buffer = 4 + total_bytes_per_entity *
954  (int)eids.size(); // hopefully, below 2B; if more, we have a big problem ...
955  ParallelComm::Buffer* buffer = new ParallelComm::Buffer( size_buffer );
956  buffer->reset_ptr( sizeof( int ) );
957 
958  // receive the buffer
959  ierr = MPI_Recv( buffer->mem_ptr, size_buffer, MPI_CHAR, sender_proc, 222, jcomm, &status );
960  if( ierr != 0 ) return MB_FAILURE;
961  // use the values in buffer to populate valuesTag arrays, fill it up!
962  int j = 0;
963  for( std::vector< int >::iterator it = eids.begin(); it != eids.end(); it++, j++ )
964  {
965  for( size_t i = 0; i < tag_handles.size(); i++ )
966  {
967  // right now, move just doubles; but it could be any type of tag
968  double val = *( (double*)( buffer->buff_ptr ) );
969  buffer->buff_ptr += 8; // we know we are working with doubles only !!!
970  for( int k = index_ptr[j]; k < index_ptr[j + 1]; k++ )
971  valuesTags[i][index_in_values[k]] = val;
972  }
973  }
974  // we are done with the buffer in which we received tags, release / delete it
975  delete buffer;
976  }
977  // now we populated the values for all tags; set now the tags!
978  for( size_t i = 0; i < tag_handles.size(); i++ )
979  {
980  // we will fill this array, using data from received buffer
981  rval = mb->tag_set_data( tag_handles[i], owned, (void*)( &( valuesTags[i][0] ) ) );MB_CHK_ERR( rval );
982  }
983  }
984  return MB_SUCCESS;
985 }

References moab::Range::begin(), buffer, COVERAGE, DOF_BASED, moab::Range::end(), ErrorCode, moab::ParallelComm::get_moab(), graph_type, INITIAL_MIGRATE, involved_IDs_map, map_index, map_ptr, mb, MB_CHK_ERR, MB_SUCCESS, rankInJoin, sendReqs, moab::Range::size(), and split_ranges.
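A possible receiver-side call, as a hedged sketch: the helper and the tag name "a2oTbot" are illustrative only; the tag must already exist locally, and the owned range has to correspond, entity for entity, to what the matching send_tag_values() call packs on the sender side. Each sender contributes one buffer of 4 + total_bytes_per_entity * (number of entities from that sender) bytes.

    #include <ParCommGraph.hpp>
    #include <ParallelComm.hpp>
    #include <vector>

    // Sketch: receive one double tag for the owned cells from the corresponding senders.
    moab::ErrorCode example_receive_tag( moab::ParCommGraph& cgraph,
                                         moab::ParallelComm* pco,
                                         MPI_Comm jcomm,
                                         moab::Range& owned )
    {
        std::vector< moab::Tag > tag_handles( 1 );
        // "a2oTbot" is only an example tag name; it must already exist with its final length
        moab::ErrorCode rval =
            pco->get_moab()->tag_get_handle( "a2oTbot", 1, moab::MB_TYPE_DOUBLE, tag_handles[0] );
        if( moab::MB_SUCCESS != rval ) return rval;
        return cgraph.receive_tag_values( jcomm, pco, owned, tag_handles );
    }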

◆ receiver()

int moab::ParCommGraph::receiver ( int  index)
inline

Definition at line 148 of file ParCommGraph.hpp.

149  {
150  return receiverTasks[index];
151  }

References receiverTasks.

Referenced by send_graph().

◆ receivers()

const std::vector< int >& moab::ParCommGraph::receivers ( )
inline

Definition at line 210 of file ParCommGraph.hpp.

211  {
212  return receiverTasks;
213  }

References receiverTasks.

Referenced by split_owned_range().

◆ release_send_buffers()

ErrorCode moab::ParCommGraph::release_send_buffers ( )

Definition at line 557 of file ParCommGraph.cpp.

558 {
559  int ierr, nsize = (int)sendReqs.size();
560  std::vector< MPI_Status > mult_status;
561  mult_status.resize( sendReqs.size() );
562  ierr = MPI_Waitall( nsize, &sendReqs[0], &mult_status[0] );
563 
564  if( ierr != 0 ) return MB_FAILURE;
565  // now we can free all buffers
566  delete[] comm_graph;
567  comm_graph = NULL;
568  std::vector< ParallelComm::Buffer* >::iterator vit;
569  for( vit = localSendBuffs.begin(); vit != localSendBuffs.end(); ++vit )
570  delete( *vit );
571  localSendBuffs.clear();
572  return MB_SUCCESS;
573 }

References comm_graph, localSendBuffs, MB_SUCCESS, and sendReqs.
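The non-blocking sends posted by send_graph() and send_mesh_parts() are completed only here; a typical sender-side sequence might look like the sketch below (helper name illustrative, and the receiver tasks are expected to run receive_comm_graph() / receive_mesh() in the meantime).

    #include <ParCommGraph.hpp>
    #include <ParallelComm.hpp>

    moab::ErrorCode example_send_side( moab::ParCommGraph& cgraph,
                                       moab::ParallelComm* pco,
                                       MPI_Comm jcomm,
                                       moab::Range& owned )
    {
        // root sender posts an MPI_Isend of the packed communication graph
        moab::ErrorCode rval = cgraph.send_graph( jcomm );
        if( moab::MB_SUCCESS != rval ) return rval;
        // every sender packs its split ranges and posts one MPI_Isend per receiver
        rval = cgraph.send_mesh_parts( jcomm, pco, owned );
        if( moab::MB_SUCCESS != rval ) return rval;
        // ... the receiver tasks obtain the graph and the mesh here ...
        // MPI_Waitall on all pending sends, then free comm_graph and the local buffers
        return cgraph.release_send_buffers();
    }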

◆ send_graph()

ErrorCode moab::ParCommGraph::send_graph ( MPI_Comm  jcomm)

use tag 20 to send the packed array; the size is no longer sent in advance with tag 10, since the receiver obtains it with MPI_Probe / MPI_Get_count

Definition at line 265 of file ParCommGraph.cpp.

266 {
267  if( is_root_sender() )
268  {
269  int ierr;
270  // will need to build a communication graph, because each sender knows now to which receiver
271  // to send data the receivers need to post receives for each sender that will send data to
272  // them will need to gather on rank 0 on the sender comm, global ranks of sender with
273  // receivers to send build communication matrix, each receiver will receive from what sender
274 
275  std::vector< int > packed_recv_array;
276  ErrorCode rval = pack_receivers_graph( packed_recv_array );
277  if( MB_SUCCESS != rval ) return rval;
278 
279  int size_pack_array = (int)packed_recv_array.size();
280  comm_graph = new int[size_pack_array + 1];
281  comm_graph[0] = size_pack_array;
282  for( int k = 0; k < size_pack_array; k++ )
283  comm_graph[k + 1] = packed_recv_array[k];
284  // will add 2 requests
285  /// use tag 10 to send size and tag 20 to send the packed array
286  sendReqs.resize( 1 );
287  // do not send the size in advance, because we use probe now
288  /*ierr = MPI_Isend(&comm_graph[0], 1, MPI_INT, receiver(0), 10, jcomm, &sendReqs[0]); // we
289  have to use global communicator if (ierr!=0) return MB_FAILURE;*/
290  ierr = MPI_Isend( &comm_graph[1], size_pack_array, MPI_INT, receiver( 0 ), 20, jcomm,
291  &sendReqs[0] ); // we have to use global communicator
292  if( ierr != 0 ) return MB_FAILURE;
293  }
294  return MB_SUCCESS;
295 }

References comm_graph, ErrorCode, is_root_sender(), MB_SUCCESS, pack_receivers_graph(), receiver(), and sendReqs.

Referenced by send_graph_partition().

◆ send_graph_partition()

ErrorCode moab::ParCommGraph::send_graph_partition ( ParallelComm *  pco,
MPI_Comm  jcomm 
)

Definition at line 1500 of file ParCommGraph.cpp.

1501 {
1502  // first, accumulate the info to root of sender; use gatherv
1503  // first, accumulate number of receivers from each sender, to the root receiver
1504  int numberReceivers =
1505  (int)split_ranges.size(); // these are ranges of elements to be sent to each receiver, from this task
1506  int nSenders = (int)senderTasks.size(); //
1507  // this sender will have to send to this many receivers
1508  std::vector< int > displs( 1 ); // displacements for gatherv
1509  std::vector< int > counts( 1 );
1510  if( is_root_sender() )
1511  {
1512  displs.resize( nSenders + 1 );
1513  counts.resize( nSenders );
1514  }
1515 
1516  int ierr = MPI_Gather( &numberReceivers, 1, MPI_INT, &counts[0], 1, MPI_INT, 0, pco->comm() );
1517  if( ierr != MPI_SUCCESS ) return MB_FAILURE;
1518  // compute now displacements
1519  if( is_root_sender() )
1520  {
1521  displs[0] = 0;
1522  for( int k = 0; k < nSenders; k++ )
1523  {
1524  displs[k + 1] = displs[k] + counts[k];
1525  }
1526  }
1527  std::vector< int > buffer;
1528  if( is_root_sender() ) buffer.resize( displs[nSenders] ); // the last one will have the total count now
1529 
1530  std::vector< int > recvs;
1531  for( std::map< int, Range >::iterator mit = split_ranges.begin(); mit != split_ranges.end(); mit++ )
1532  {
1533  recvs.push_back( mit->first );
1534  }
1535  ierr =
1536  MPI_Gatherv( &recvs[0], numberReceivers, MPI_INT, &buffer[0], &counts[0], &displs[0], MPI_INT, 0, pco->comm() );
1537  if( ierr != MPI_SUCCESS ) return MB_FAILURE;
1538 
1539  // now form recv_graph map; points from the
1540  // now form the graph to be sent to the other side; only on root
1541  if( is_root_sender() )
1542  {
1543 #ifdef GRAPH_INFO
1544  std::ofstream dbfileSender;
1545  std::stringstream outf;
1546  outf << "S_" << compid1 << "_R_" << compid2 << "_SenderGraph.txt";
1547  dbfileSender.open( outf.str().c_str() );
1548  dbfileSender << " number senders: " << nSenders << "\n";
1549  dbfileSender << " senderRank \treceivers \n";
1550  for( int k = 0; k < nSenders; k++ )
1551  {
1552  int indexInBuff = displs[k];
1553  int senderTask = senderTasks[k];
1554  dbfileSender << senderTask << "\t\t";
1555  for( int j = 0; j < counts[k]; j++ )
1556  {
1557  int recvTask = buffer[indexInBuff + j];
1558  dbfileSender << recvTask << " ";
1559  }
1560  dbfileSender << "\n";
1561  }
1562  dbfileSender.close();
1563 #endif
1564  for( int k = 0; k < nSenders; k++ )
1565  {
1566  int indexInBuff = displs[k];
1567  int senderTask = senderTasks[k];
1568  for( int j = 0; j < counts[k]; j++ )
1569  {
1570  int recvTask = buffer[indexInBuff + j];
1571  recv_graph[recvTask].push_back( senderTask ); // this will be packed and sent to root receiver, with
1572  // nonblocking send
1573  }
1574  }
1575 #ifdef GRAPH_INFO
1576  std::ofstream dbfile;
1577  std::stringstream outf2;
1578  outf2 << "S_" << compid1 << "_R_" << compid2 << "_RecvGraph.txt";
1579  dbfile.open( outf2.str().c_str() );
1580  dbfile << " number receivers: " << recv_graph.size() << "\n";
1581  dbfile << " receiverRank \tsenders \n";
1582  for( std::map< int, std::vector< int > >::iterator mit = recv_graph.begin(); mit != recv_graph.end(); mit++ )
1583  {
1584  int recvTask = mit->first;
1585  std::vector< int >& senders = mit->second;
1586  dbfile << recvTask << "\t\t";
1587  for( std::vector< int >::iterator vit = senders.begin(); vit != senders.end(); vit++ )
1588  dbfile << *vit << " ";
1589  dbfile << "\n";
1590  }
1591  dbfile.close();
1592 #endif
1593  // this is the same as trivial partition
1594  ErrorCode rval = send_graph( jcomm );MB_CHK_ERR( rval );
1595  }
1596 
1597  return MB_SUCCESS;
1598 }

References buffer, moab::ParallelComm::comm(), compid1, compid2, ErrorCode, is_root_sender(), MB_CHK_ERR, MB_SUCCESS, recv_graph, send_graph(), senders(), senderTasks, and split_ranges.
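
As a usage illustration only (the variable names, group setup and the partition method value below are assumptions, not code taken from MOAB), a sender component would typically compute a partition of its owned cells and then ship the resulting graph to the receiver side:

    // hypothetical sender-side driver; jcomm, group1, group2, compid1, compid2,
    // pco and owned_cells are assumed to be set up by the application
    moab::ParCommGraph cgraph( jcomm, group1, group2, compid1, compid2 );
    moab::Range owned_cells;  // owned primary entities, filled elsewhere
    moab::ErrorCode rval = cgraph.compute_partition( pco, owned_cells, 2 /* partition method, assumed */ );MB_CHK_ERR( rval );
    rval = cgraph.send_graph_partition( pco, jcomm );MB_CHK_ERR( rval );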

◆ send_mesh_parts()

ErrorCode moab::ParCommGraph::send_mesh_parts ( MPI_Comm  jcomm,
ParallelComm *  pco,
Range &  owned 
)

Pack the owned entities (plus their adjacent vertices) into buffers and post nonblocking sends to the receiver tasks; the buffer size is no longer sent in advance, the receiver probes for it.

Definition at line 299 of file ParCommGraph.cpp.

300 {
301 
302  ErrorCode rval;
303  if( split_ranges.empty() ) // in trivial partition
304  {
305  rval = split_owned_range( rankInGroup1, owned );
306  if( rval != MB_SUCCESS ) return rval;
307  // we know this on the sender side:
308  corr_tasks = sender_graph[senderTasks[rankInGroup1]]; // copy
309  corr_sizes = sender_sizes[senderTasks[rankInGroup1]]; // another copy
310  }
311 
312  int indexReq = 0;
313  int ierr; // MPI error
314  if( is_root_sender() ) indexReq = 1; // for sendReqs
315  sendReqs.resize( indexReq + split_ranges.size() );
316  for( std::map< int, Range >::iterator it = split_ranges.begin(); it != split_ranges.end(); it++ )
317  {
318  int receiver_proc = it->first;
319  Range ents = it->second;
320 
321  // add necessary vertices too
322  Range verts;
323  rval = pco->get_moab()->get_adjacencies( ents, 0, false, verts, Interface::UNION );
324  if( rval != MB_SUCCESS )
325  {
326  std::cout << " can't get adjacencies for entities to send\n";
327  return rval;
328  }
329  ents.merge( verts );
330  ParallelComm::Buffer* buffer = new ParallelComm::Buffer( ParallelComm::INITIAL_BUFF_SIZE );
331  buffer->reset_ptr( sizeof( int ) );
332  rval = pco->pack_buffer( ents, false, true, false, -1, buffer );
333  if( rval != MB_SUCCESS )
334  {
335  std::cout << " can't pack buffer for entities to send\n";
336  return rval;
337  }
338  int size_pack = buffer->get_current_size();
339 
340  // TODO: there could be an endianness issue here; check
341  // we used to send the size of the buffer first, as an int
342  /// not anymore: the receiver now probes for the message size
343  /* ierr = MPI_Isend(buffer->mem_ptr, 1, MPI_INT, receiver_proc, 1, jcomm,
344  &sendReqs[indexReq]); // we have to use global communicator if (ierr!=0) return MB_FAILURE;
345  indexReq++;*/
346 
347  ierr = MPI_Isend( buffer->mem_ptr, size_pack, MPI_CHAR, receiver_proc, 2, jcomm,
348  &sendReqs[indexReq] ); // we have to use global communicator
349  if( ierr != 0 ) return MB_FAILURE;
350  indexReq++;
351  localSendBuffs.push_back( buffer );
352  }
353  return MB_SUCCESS;
354 }

References buffer, corr_sizes, corr_tasks, ErrorCode, moab::Interface::get_adjacencies(), moab::ParallelComm::get_moab(), moab::ParallelComm::INITIAL_BUFF_SIZE, is_root_sender(), localSendBuffs, MB_SUCCESS, moab::Range::merge(), moab::ParallelComm::pack_buffer(), rankInGroup1, sender_graph, sender_sizes, senderTasks, sendReqs, split_owned_range(), split_ranges, and moab::Interface::UNION.
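
A hedged sketch of how the two sides of a trivial migration might pair up across the joint communicator; the is_sender_task flag, local_set and senders_local are illustrative application variables, and the receiver is assumed to have already learned its senders (for example through receive_comm_graph()):

    if( is_sender_task )
    {
        rval = cgraph.send_mesh_parts( jcomm, pco, owned );MB_CHK_ERR( rval );
        // ... later, after the receivers have posted their receives ...
        rval = cgraph.release_send_buffers();MB_CHK_ERR( rval );  // complete nonblocking sends, free local buffers
    }
    else
    {
        std::vector< int > senders_local;  // filled from the received communication graph
        rval = cgraph.receive_mesh( jcomm, pco, local_set, senders_local );MB_CHK_ERR( rval );
    }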

◆ send_tag_values()

ErrorCode moab::ParCommGraph::send_tag_values ( MPI_Comm  jcomm,
ParallelComm *  pco,
Range &  owned,
std::vector< Tag > &  tag_handles 
)

Pack the values of the given tags for the owned entities and send them to the corresponding receiver tasks with nonblocking sends.

Definition at line 577 of file ParCommGraph.cpp.

581 {
582  // basically, owned.size() needs to be equal to sum(corr_sizes)
583  // get info about the tag size, type, etc
584  int ierr;
585  Core* mb = (Core*)pco->get_moab();
586  // get info about the tag
587  //! Get the size of the specified tag in bytes
588  int total_bytes_per_entity = 0; // we need to know, to allocate buffers
589  ErrorCode rval;
590  std::vector< int > vect_bytes_per_tag;
591 #ifdef VERBOSE
592  std::vector< int > tag_sizes;
593 #endif
594  for( size_t i = 0; i < tag_handles.size(); i++ )
595  {
596  int bytes_per_tag;
597  rval = mb->tag_get_bytes( tag_handles[i], bytes_per_tag );MB_CHK_ERR( rval );
598  int tag_size1; // length
599  rval = mb->tag_get_length( tag_handles[i], tag_size1 );MB_CHK_ERR( rval );
600  if( graph_type == DOF_BASED )
601  bytes_per_tag = bytes_per_tag / tag_size1; // we know we have one double per tag , per ID sent;
602  // could be 8 for double, 4 for int, etc
603  total_bytes_per_entity += bytes_per_tag;
604  vect_bytes_per_tag.push_back( bytes_per_tag );
605 #ifdef VERBOSE
606  int tag_size;
607  rval = mb->tag_get_length( tag_handles[i], tag_size );MB_CHK_ERR( rval );
608  tag_sizes.push_back( tag_size );
609 #endif
610  }
611 
612  int indexReq = 0;
613  if( graph_type == INITIAL_MIGRATE ) // original send
614  {
615  // use the buffers data structure to allocate memory for sending the tags
616  sendReqs.resize( split_ranges.size() );
617 
618  for( std::map< int, Range >::iterator it = split_ranges.begin(); it != split_ranges.end(); it++ )
619  {
620  int receiver_proc = it->first;
621  Range ents = it->second; // primary entities, with the tag data
622  int size_buffer = 4 + total_bytes_per_entity *
623  (int)ents.size(); // hopefully, below 2B; if more, we have a big problem ...
624  ParallelComm::Buffer* buffer = new ParallelComm::Buffer( size_buffer );
625 
626  buffer->reset_ptr( sizeof( int ) );
627  for( size_t i = 0; i < tag_handles.size(); i++ )
628  {
629  // copy tag data to buffer->buff_ptr, and send the buffer (we could have used
630  // regular char arrays)
631  rval = mb->tag_get_data( tag_handles[i], ents, (void*)( buffer->buff_ptr ) );MB_CHK_ERR( rval );
632  // advance the buffer pointer
633  buffer->buff_ptr += vect_bytes_per_tag[i] * ents.size();
634  }
635  *( (int*)buffer->mem_ptr ) = size_buffer;
636  // int size_pack = buffer->get_current_size(); // debug check
637  ierr = MPI_Isend( buffer->mem_ptr, size_buffer, MPI_CHAR, receiver_proc, 222, jcomm,
638  &sendReqs[indexReq] ); // we have to use global communicator
639  if( ierr != 0 ) return MB_FAILURE;
640  indexReq++;
641  localSendBuffs.push_back( buffer ); // we will release them after nonblocking sends are completed
642  }
643  }
644  else if( graph_type == COVERAGE )
645  {
646  // we know that we will need to send some tag data in a specific order
647  // first, get the ids of the local elements, from owned Range; arrange the buffer in order
648  // of increasing global id
649  Tag gidTag = mb->globalId_tag();
650  std::vector< int > gids;
651  gids.resize( owned.size() );
652  rval = mb->tag_get_data( gidTag, owned, &gids[0] );MB_CHK_ERR( rval );
653  std::map< int, EntityHandle > gidToHandle;
654  size_t i = 0;
655  for( Range::iterator it = owned.begin(); it != owned.end(); it++ )
656  {
657  EntityHandle eh = *it;
658  gidToHandle[gids[i++]] = eh;
659  }
660  // now, pack the data and send it
661  sendReqs.resize( involved_IDs_map.size() );
662  for( std::map< int, std::vector< int > >::iterator mit = involved_IDs_map.begin();
663  mit != involved_IDs_map.end(); mit++ )
664  {
665  int receiver_proc = mit->first;
666  std::vector< int >& eids = mit->second;
667  int size_buffer = 4 + total_bytes_per_entity *
668  (int)eids.size(); // hopefully, below 2B; if more, we have a big problem ...
669  ParallelComm::Buffer* buffer = new ParallelComm::Buffer( size_buffer );
670  buffer->reset_ptr( sizeof( int ) );
671 #ifdef VERBOSE
672  std::ofstream dbfile;
673  std::stringstream outf;
674  outf << "from_" << rankInJoin << "_send_to_" << receiver_proc << ".txt";
675  dbfile.open( outf.str().c_str() );
676  dbfile << "from " << rankInJoin << " send to " << receiver_proc << "\n";
677 #endif
678  // copy tag data to buffer->buff_ptr, and send the buffer (we could have used regular
679  // char arrays) pack data by tag, to be consistent with above, even though we loop
680  // through the entities for each tag
681 
682  for( std::vector< int >::iterator it = eids.begin(); it != eids.end(); it++ )
683  {
684  int eID = *it;
685  EntityHandle eh = gidToHandle[eID];
686  for( i = 0; i < tag_handles.size(); i++ )
687  {
688  rval = mb->tag_get_data( tag_handles[i], &eh, 1, (void*)( buffer->buff_ptr ) );
689  if( rval != MB_SUCCESS )
690  {
691  delete buffer; // free parallel comm buffer first
692 
693  MB_SET_ERR( rval, "Tag get data failed" );
694  }
695 #ifdef VERBOSE
696  dbfile << "global ID " << eID << " local handle " << mb->id_from_handle( eh ) << " vals: ";
697  double* vals = (double*)( buffer->buff_ptr );
698  for( int kk = 0; kk < tag_sizes[i]; kk++ )
699  {
700  dbfile << " " << *vals;
701  vals++;
702  }
703  dbfile << "\n";
704 #endif
705  buffer->buff_ptr += vect_bytes_per_tag[i];
706  }
707  }
708 
709 #ifdef VERBOSE
710  dbfile.close();
711 #endif
712  *( (int*)buffer->mem_ptr ) = size_buffer;
713  // int size_pack = buffer->get_current_size(); // debug check
714  ierr = MPI_Isend( buffer->mem_ptr, size_buffer, MPI_CHAR, receiver_proc, 222, jcomm,
715  &sendReqs[indexReq] ); // we have to use global communicator
716  if( ierr != 0 ) return MB_FAILURE;
717  indexReq++;
718  localSendBuffs.push_back( buffer ); // we will release them after nonblocking sends are completed
719  }
720  }
721  else if( graph_type == DOF_BASED )
722  {
723  // need to fill up the buffer, in the order desired, send it
724  // get all the tags, for all owned entities, and pack the buffers accordingly
725  // we do not want to get the tags by entity, it may be too expensive
726  std::vector< std::vector< double > > valuesTags;
727  valuesTags.resize( tag_handles.size() );
728  for( size_t i = 0; i < tag_handles.size(); i++ )
729  {
730 
731  int bytes_per_tag;
732  rval = mb->tag_get_bytes( tag_handles[i], bytes_per_tag );MB_CHK_ERR( rval );
733  valuesTags[i].resize( owned.size() * bytes_per_tag / sizeof( double ) );
734  // fill the whole array, we will pick up from here
735  rval = mb->tag_get_data( tag_handles[i], owned, (void*)( &( valuesTags[i][0] ) ) );MB_CHK_ERR( rval );
736  }
737  // now, pack the data and send it
738  sendReqs.resize( involved_IDs_map.size() );
739  for( std::map< int, std::vector< int > >::iterator mit = involved_IDs_map.begin();
740  mit != involved_IDs_map.end(); mit++ )
741  {
742  int receiver_proc = mit->first;
743  std::vector< int >& eids = mit->second;
744  std::vector< int >& index_in_values = map_index[receiver_proc];
745  std::vector< int >& index_ptr = map_ptr[receiver_proc]; // this is eids.size()+1;
746  int size_buffer = 4 + total_bytes_per_entity *
747  (int)eids.size(); // hopefully, below 2B; if more, we have a big problem ...
748  ParallelComm::Buffer* buffer = new ParallelComm::Buffer( size_buffer );
749  buffer->reset_ptr( sizeof( int ) );
750 #ifdef VERBOSE
751  std::ofstream dbfile;
752  std::stringstream outf;
753  outf << "from_" << rankInJoin << "_send_to_" << receiver_proc << ".txt";
754  dbfile.open( outf.str().c_str() );
755  dbfile << "from " << rankInJoin << " send to " << receiver_proc << "\n";
756 #endif
757  // copy tag data to buffer->buff_ptr, and send the buffer
758  // pack data by tag, to be consistent with above
759  int j = 0;
760  for( std::vector< int >::iterator it = eids.begin(); it != eids.end(); it++, j++ )
761  {
762  int index_in_v = index_in_values[index_ptr[j]];
763  for( size_t i = 0; i < tag_handles.size(); i++ )
764  {
765  // right now, move just doubles; but it could be any type of tag
766  *( (double*)( buffer->buff_ptr ) ) = valuesTags[i][index_in_v];
767  buffer->buff_ptr += 8; // we know we are working with doubles only !!!
768  }
769  }
770  *( (int*)buffer->mem_ptr ) = size_buffer;
771  // int size_pack = buffer->get_current_size(); // debug check
772  ierr = MPI_Isend( buffer->mem_ptr, size_buffer, MPI_CHAR, receiver_proc, 222, jcomm,
773  &sendReqs[indexReq] ); // we have to use global communicator
774  if( ierr != 0 ) return MB_FAILURE;
775  indexReq++;
776  localSendBuffs.push_back( buffer ); // we will release them after nonblocking sends are completed
777  }
778  }
779  return MB_SUCCESS;
780 }

References moab::Range::begin(), buffer, COVERAGE, DOF_BASED, moab::Range::end(), ErrorCode, moab::ParallelComm::get_moab(), graph_type, INITIAL_MIGRATE, involved_IDs_map, localSendBuffs, map_index, map_ptr, mb, MB_CHK_ERR, MB_SET_ERR, MB_SUCCESS, rankInJoin, sendReqs, moab::Range::size(), and split_ranges.
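
For illustration only (the tag name and its creation are assumptions, not part of ParCommGraph), sending one double-valued tag defined on the owned cells looks roughly like this; the mirror call on the other component would be receive_tag_values() with the same tag handles:

    // hypothetical sketch: send one double tag over the joint communicator
    moab::Interface* mb = pco->get_moab();
    std::vector< moab::Tag > tag_handles( 1 );
    rval = mb->tag_get_handle( "field_dof", 1, moab::MB_TYPE_DOUBLE, tag_handles[0] );MB_CHK_ERR( rval );
    rval = cgraph.send_tag_values( jcomm, pco, owned, tag_handles );MB_CHK_ERR( rval );
    // symmetric call on the receiving component (sketch):
    // rval = cgraph.receive_tag_values( jcomm, pco, owned_on_receiver, tag_handles );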

◆ sender()

int moab::ParCommGraph::sender ( int  index)
inline

Definition at line 143 of file ParCommGraph.hpp.

144  {
145  return senderTasks[index];
146  }

References senderTasks.

Referenced by receive_comm_graph().

◆ senders()

const std::vector< int >& moab::ParCommGraph::senders ( )
inline

Definition at line 206 of file ParCommGraph.hpp.

207  {
208  return senderTasks;
209  } // reference copy; refers to sender tasks in joint comm

References senderTasks.

Referenced by pack_receivers_graph(), and send_graph_partition().

◆ set_context_id()

void moab::ParCommGraph::set_context_id ( int  other_id)
inline

Definition at line 166 of file ParCommGraph.hpp.

167  {
168  context_id = other_id;
169  }

References context_id.

◆ set_cover_set()

void moab::ParCommGraph::set_cover_set ( EntityHandle  cover)
inline

Definition at line 175 of file ParCommGraph.hpp.

176  {
177  cover_set = cover;
178  }

References cover_set.

◆ set_split_ranges()

ErrorCode moab::ParCommGraph::set_split_ranges ( int  comp,
TupleList &  TLBackToComp1,
std::vector< int > &  valuesComp1,
int  lenTag,
Range &  ents_of_interest,
int  type 
)

Definition at line 1259 of file ParCommGraph.cpp.

1265 {
1266  // settle split_ranges // same role as partitioning
1267  if( rootSender ) std::cout << " find split_ranges on component " << comp << " according to read map \n";
1268  // for each destination process, collect the set of global ids
1269  // that must end up on it
1270  int n = TLBackToComp1.get_n();
1271  // third_method = true; // do not rely only on involved_IDs_map.size(); this can be 0 in some
1272  // cases
1273  std::map< int, std::set< int > > uniqueIDs;
1274 
1275  for( int i = 0; i < n; i++ )
1276  {
1277  int to_proc = TLBackToComp1.vi_wr[3 * i + 2];
1278  int globalId = TLBackToComp1.vi_wr[3 * i + 1];
1279  uniqueIDs[to_proc].insert( globalId );
1280  }
1281 
1282  for( int i = 0; i < (int)ents_of_interest.size(); i++ )
1283  {
1284  EntityHandle ent = ents_of_interest[i];
1285  for( int j = 0; j < lenTag; j++ )
1286  {
1287  int marker = valuesComp1[i * lenTag + j];
1288  for( auto mit = uniqueIDs.begin(); mit != uniqueIDs.end(); mit++ )
1289  {
1290  int proc = mit->first;
1291  std::set< int >& setIds = mit->second;
1292  if( setIds.find( marker ) != setIds.end() )
1293  {
1294  split_ranges[proc].insert( ent );
1295  }
1296  }
1297  }
1298  }
1299 
1300  return MB_SUCCESS;
1301 }

References moab::TupleList::get_n(), MB_SUCCESS, rootSender, moab::Range::size(), split_ranges, and moab::TupleList::vi_wr.

◆ SetReceivingAfterCoverage()

void moab::ParCommGraph::SetReceivingAfterCoverage ( std::map< int, std::set< int > > &  idsFromProcs)

Definition at line 1017 of file ParCommGraph.cpp.

1019 {
1020  for( std::map< int, std::set< int > >::iterator mt = idsFromProcs.begin(); mt != idsFromProcs.end(); mt++ )
1021  {
1022  int fromProc = mt->first;
1023  std::set< int >& setIds = mt->second;
1024  involved_IDs_map[fromProc].resize( setIds.size() );
1025  std::vector< int >& listIDs = involved_IDs_map[fromProc];
1026  size_t indx = 0;
1027  for( std::set< int >::iterator st = setIds.begin(); st != setIds.end(); st++ )
1028  {
1029  int valueID = *st;
1030  listIDs[indx++] = valueID;
1031  }
1032  }
1033  graph_type = COVERAGE;
1034  return;
1035 }

References COVERAGE, graph_type, and involved_IDs_map.

◆ settle_comm_by_ids()

void moab::ParCommGraph::settle_comm_by_ids ( int  comp,
TupleList &  TLBackToComp,
std::vector< int > &  valuesComp 
)

Definition at line 1037 of file ParCommGraph.cpp.

1038 {
1039  // settle comm graph on comp
1040  if( rootSender || rootReceiver ) std::cout << " settle comm graph by id on component " << comp << "\n";
1041  int n = TLBackToComp.get_n();
1042  // third_method = true; // do not rely only on involved_IDs_map.size(); this can be 0 in some
1043  // cases
1044  std::map< int, std::set< int > > uniqueIDs;
1045 
1046  for( int i = 0; i < n; i++ )
1047  {
1048  int to_proc = TLBackToComp.vi_wr[3 * i + 2];
1049  int globalId = TLBackToComp.vi_wr[3 * i + 1];
1050  uniqueIDs[to_proc].insert( globalId );
1051  }
1052 
1053  // Vector to store element
1054  // with respective present index
1055  std::vector< std::pair< int, int > > vp;
1056  vp.reserve( valuesComp.size() );
1057 
1058  // Inserting element in pair vector
1059  // to keep track of previous indexes in valuesComp
1060  for( int i = 0; i < (int)valuesComp.size(); ++i )
1061  {
1062  vp.push_back( std::make_pair( valuesComp[i], i ) );
1063  }
1064  // Sorting pair vector
1065  sort( vp.begin(), vp.end() );
1066 
1067  // vp[i].first is the value, vp[i].second is its original index in valuesComp
1068 
1069  // now count how many times each value appears in the sorted valuesComp
1070  for( std::map< int, std::set< int > >::iterator it = uniqueIDs.begin(); it != uniqueIDs.end(); it++ )
1071  {
1072  int procId = it->first;
1073  std::set< int >& nums = it->second;
1074  std::vector< int >& indx = map_ptr[procId];
1075  std::vector< int >& indices = map_index[procId];
1076  indx.resize( nums.size() + 1 );
1077  int indexInVp = 0;
1078  int indexVal = 0;
1079  indx[0] = 0; // start from 0
1080  for( std::set< int >::iterator sst = nums.begin(); sst != nums.end(); sst++, indexVal++ )
1081  {
1082  int val = *sst;
1083  involved_IDs_map[procId].push_back( val );
1084  indx[indexVal + 1] = indx[indexVal];
1085  while( ( indexInVp < (int)valuesComp.size() ) && ( vp[indexInVp].first <= val ) ) // should be equal !
1086  {
1087  if( vp[indexInVp].first == val )
1088  {
1089  indx[indexVal + 1]++;
1090  indices.push_back( vp[indexInVp].second );
1091  }
1092  indexInVp++;
1093  }
1094  }
1095  }
1096 #ifdef VERBOSE
1097  std::stringstream f1;
1098  std::ofstream dbfile;
1099  f1 << "Involve_" << comp << "_" << rankInJoin << ".txt";
1100  dbfile.open( f1.str().c_str() );
1101  for( std::map< int, std::vector< int > >::iterator mit = involved_IDs_map.begin(); mit != involved_IDs_map.end();
1102  mit++ )
1103  {
1104  int corrTask = mit->first;
1105  std::vector< int >& corrIds = mit->second;
1106  std::vector< int >& indx = map_ptr[corrTask];
1107  std::vector< int >& indices = map_index[corrTask];
1108 
1109  dbfile << " towards proc " << corrTask << " \n";
1110  for( int i = 0; i < (int)corrIds.size(); i++ )
1111  {
1112  dbfile << corrIds[i] << " [" << indx[i] << "," << indx[i + 1] << ") : ";
1113  for( int j = indx[i]; j < indx[i + 1]; j++ )
1114  dbfile << indices[j] << " ";
1115  dbfile << "\n";
1116  }
1117  dbfile << " \n";
1118  }
1119  dbfile.close();
1120 #endif
1121 
1123  // now we need to fill back and forth information, needed to fill the arrays
1124  // for example, from spectral to involved_IDs_map, in case we want to send data from
1125  // spectral to phys
1126 }

References DOF_BASED, moab::GeomUtil::first(), moab::TupleList::get_n(), graph_type, involved_IDs_map, map_index, map_ptr, rankInJoin, rootReceiver, rootSender, and moab::TupleList::vi_wr.
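
The map_ptr / map_index pair built here acts as a CSR-style index into valuesComp. The standalone sketch below uses made-up values and plain C++ (no MOAB types) to show how those arrays are meant to be read for one destination process:

    #include <iostream>
    #include <vector>

    int main()
    {
        // made-up data for a single destination process
        std::vector< double > valuesComp = { 10., 11., 12., 13., 14. };  // local DOF values
        std::vector< int > ids       = { 5, 9 };     // involved_IDs_map[proc]: sorted global ids
        std::vector< int > map_ptr   = { 0, 2, 3 };  // id 5 owns slots [0,2), id 9 owns [2,3)
        std::vector< int > map_index = { 4, 2, 1 };  // positions of those slots in valuesComp
        for( size_t i = 0; i < ids.size(); i++ )
            for( int j = map_ptr[i]; j < map_ptr[i + 1]; j++ )
                std::cout << "global id " << ids[i] << " -> valuesComp[" << map_index[j]
                          << "] = " << valuesComp[map_index[j]] << "\n";
        return 0;
    }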

◆ settle_send_graph()

ErrorCode moab::ParCommGraph::settle_send_graph ( TupleList &  TLcovIDs)

Definition at line 989 of file ParCommGraph.cpp.

990 {
991  // fill involved_IDs_map with data
992  // will have "receiving proc" and global id of element
993  int n = TLcovIDs.get_n();
994  graph_type = COVERAGE; // do not rely only on involved_IDs_map.size(); this can be 0 in some cases
995  for( int i = 0; i < n; i++ )
996  {
997  int to_proc = TLcovIDs.vi_wr[2 * i];
998  int globalIdElem = TLcovIDs.vi_wr[2 * i + 1];
999  involved_IDs_map[to_proc].push_back( globalIdElem );
1000  }
1001 #ifdef VERBOSE
1002  for( std::map< int, std::vector< int > >::iterator mit = involved_IDs_map.begin(); mit != involved_IDs_map.end();
1003  mit++ )
1004  {
1005  std::cout << " towards task " << mit->first << " send: " << mit->second.size() << " cells " << std::endl;
1006  for( size_t i = 0; i < mit->second.size(); i++ )
1007  {
1008  std::cout << " " << mit->second[i];
1009  }
1010  std::cout << std::endl;
1011  }
1012 #endif
1013  return MB_SUCCESS;
1014 }

References COVERAGE, moab::TupleList::get_n(), graph_type, involved_IDs_map, MB_SUCCESS, and moab::TupleList::vi_wr.
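
TLcovIDs is expected to carry two integers per tuple, in the order read above: the destination task first, then the global id of the element. A minimal sketch of filling such a list (the task rank, the id and the capacity are made up; moab::TupleList is the real class):

    moab::TupleList TLcovIDs;
    TLcovIDs.initialize( 2, 0, 0, 0, 10 );  // 2 ints per tuple, no longs/handles/reals, capacity 10
    TLcovIDs.enableWriteAccess();
    int n = TLcovIDs.get_n();
    TLcovIDs.vi_wr[2 * n]     = 3;   // receiving task (made up)
    TLcovIDs.vi_wr[2 * n + 1] = 42;  // global id of the element (made up)
    TLcovIDs.inc_n();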

◆ split_owned_range() [1/2]

ErrorCode moab::ParCommGraph::split_owned_range ( int  sender_rank,
Range &  owned 
)

Definition at line 217 of file ParCommGraph.cpp.

218 {
219  int senderTask = senderTasks[sender_rank];
220  std::vector< int >& distribution = sender_sizes[senderTask];
221  std::vector< int >& receivers = sender_graph[senderTask];
222  if( distribution.size() != receivers.size() ) //
223  return MB_FAILURE;
224 
225  Range current = owned; // get the full range first, then we will subtract stuff, for
226  // the following ranges
227 
228  Range rleftover = current;
229  for( size_t k = 0; k < receivers.size(); k++ )
230  {
231  Range newr;
232  newr.insert( current.begin(), current.begin() + distribution[k] );
233  split_ranges[receivers[k]] = newr;
234 
235  rleftover = subtract( current, newr );
236  current = rleftover;
237  }
238 
239  return MB_SUCCESS;
240 }

References moab::Range::begin(), moab::Range::insert(), MB_SUCCESS, receivers(), sender_graph, sender_sizes, senderTasks, split_ranges, and moab::subtract().

Referenced by send_mesh_parts().
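
The splitting pattern is simply: peel off the first distribution[k] handles for receiver k, subtract them, and continue with what is left. A standalone sketch with artificial handles (a real application would take them from a loaded mesh, and the receiver ranks below are assumed):

    #include <map>
    #include "moab/Range.hpp"

    void split_sketch()
    {
        moab::Range owned;
        for( moab::EntityHandle h = 1; h <= 10; h++ )
            owned.insert( h );               // artificial handles 1..10
        int distribution[2] = { 4, 6 };      // elements per receiver (assumed)
        int receivers[2]    = { 7, 8 };      // receiver ranks in the joint comm (assumed)
        std::map< int, moab::Range > split_ranges;
        moab::Range current = owned;
        for( int k = 0; k < 2; k++ )
        {
            moab::Range newr;
            newr.insert( current.begin(), current.begin() + distribution[k] );
            split_ranges[receivers[k]] = newr;
            current = moab::subtract( current, newr );  // leftover for the next receiver
        }
    }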

◆ split_owned_range() [2/2]

ErrorCode moab::ParCommGraph::split_owned_range ( Range &  owned)

Definition at line 243 of file ParCommGraph.cpp.

244 {
245  if( corr_tasks.size() != corr_sizes.size() ) //
246  return MB_FAILURE;
247 
248  Range current = owned; // get the full range first, then we will subtract stuff, for
249  // the following ranges
250 
251  Range rleftover = current;
252  for( size_t k = 0; k < corr_tasks.size(); k++ )
253  {
254  Range newr;
255  newr.insert( current.begin(), current.begin() + corr_sizes[k] );
256  split_ranges[corr_tasks[k]] = newr;
257 
258  rleftover = subtract( current, newr );
259  current = rleftover;
260  }
261 
262  return MB_SUCCESS;
263 }

References moab::Range::begin(), corr_sizes, corr_tasks, moab::Range::insert(), MB_SUCCESS, split_ranges, and moab::subtract().

Member Data Documentation

◆ comm

MPI_Comm moab::ParCommGraph::comm
private

Definition at line 263 of file ParCommGraph.hpp.

Referenced by ParCommGraph().

◆ comm_graph

int* moab::ParCommGraph::comm_graph
private

Definition at line 288 of file ParCommGraph.hpp.

Referenced by ParCommGraph(), release_send_buffers(), and send_graph().

◆ compid1

int moab::ParCommGraph::compid1
private

Definition at line 270 of file ParCommGraph.hpp.

Referenced by get_component_id1(), ParCommGraph(), and send_graph_partition().

◆ compid2

int moab::ParCommGraph::compid2
private

Definition at line 270 of file ParCommGraph.hpp.

Referenced by get_component_id2(), ParCommGraph(), and send_graph_partition().

◆ context_id

int moab::ParCommGraph::context_id
private

Definition at line 271 of file ParCommGraph.hpp.

Referenced by get_context_id(), ParCommGraph(), and set_context_id().

◆ corr_sizes

std::vector< int > moab::ParCommGraph::corr_sizes
private

Definition at line 302 of file ParCommGraph.hpp.

Referenced by receive_mesh(), send_mesh_parts(), and split_owned_range().

◆ corr_tasks

std::vector< int > moab::ParCommGraph::corr_tasks
private

Definition at line 300 of file ParCommGraph.hpp.

Referenced by receive_mesh(), send_mesh_parts(), and split_owned_range().

◆ cover_set

EntityHandle moab::ParCommGraph::cover_set
private

Definition at line 272 of file ParCommGraph.hpp.

Referenced by get_cover_set(), ParCommGraph(), and set_cover_set().

◆ graph_type
 
TypeGraph moab::ParCommGraph::graph_type
private
 
◆ involved_IDs_map

std::map< int, std::vector< int > > moab::ParCommGraph::involved_IDs_map
private

◆ joinSize

int moab::ParCommGraph::joinSize
private

Definition at line 269 of file ParCommGraph.hpp.

Referenced by ParCommGraph().

◆ localSendBuffs

std::vector< ParallelComm::Buffer* > moab::ParCommGraph::localSendBuffs
private

Definition at line 285 of file ParCommGraph.hpp.

Referenced by release_send_buffers(), send_mesh_parts(), and send_tag_values().

◆ map_index

std::map< int, std::vector< int > > moab::ParCommGraph::map_index
private

Definition at line 312 of file ParCommGraph.hpp.

Referenced by receive_tag_values(), send_tag_values(), and settle_comm_by_ids().

◆ map_ptr

std::map< int, std::vector< int > > moab::ParCommGraph::map_ptr
private

Definition at line 313 of file ParCommGraph.hpp.

Referenced by receive_tag_values(), send_tag_values(), and settle_comm_by_ids().

◆ rankInGroup1

int moab::ParCommGraph::rankInGroup1
private

Definition at line 268 of file ParCommGraph.hpp.

Referenced by dump_comm_information(), ParCommGraph(), and send_mesh_parts().

◆ rankInGroup2

int moab::ParCommGraph::rankInGroup2
private

Definition at line 268 of file ParCommGraph.hpp.

Referenced by dump_comm_information(), and ParCommGraph().

◆ rankInJoin

int moab::ParCommGraph::rankInJoin
private

◆ receiverTasks

std::vector< int > moab::ParCommGraph::receiverTasks
private

◆ recv_graph

std::map< int, std::vector< int > > moab::ParCommGraph::recv_graph
private

◆ recv_sizes

std::map< int, std::vector< int > > moab::ParCommGraph::recv_sizes
private

Definition at line 279 of file ParCommGraph.hpp.

Referenced by compute_trivial_partition().

◆ rootReceiver

bool moab::ParCommGraph::rootReceiver
private

◆ rootSender

bool moab::ParCommGraph::rootSender
private

◆ sender_graph

std::map< int, std::vector< int > > moab::ParCommGraph::sender_graph
private

Definition at line 281 of file ParCommGraph.hpp.

Referenced by compute_trivial_partition(), send_mesh_parts(), and split_owned_range().

◆ sender_sizes

std::map< int, std::vector< int > > moab::ParCommGraph::sender_sizes
private

Definition at line 283 of file ParCommGraph.hpp.

Referenced by compute_trivial_partition(), send_mesh_parts(), and split_owned_range().

◆ senderTasks

std::vector< int > moab::ParCommGraph::senderTasks
private

◆ sendReqs

std::vector< MPI_Request > moab::ParCommGraph::sendReqs
private

◆ split_ranges
 
std::map< int, Range > moab::ParCommGraph::split_ranges
private
 
The documentation for this class was generated from the following files:

ParCommGraph.hpp
ParCommGraph.cpp