Mesh Oriented datABase  (version 5.5.1)
An array-based unstructured mesh library
moab::ParallelComm Class Reference

Parallel communications in MOAB. More...

#include <ParallelComm.hpp>


Classes

class  Buffer
 
struct  SharedEntityData
 

Public Member Functions

 ParallelComm (Interface *impl, MPI_Comm comm, int *pcomm_id_out=0)
 constructor More...
 
 ParallelComm (Interface *impl, std::vector< unsigned char > &tmp_buff, MPI_Comm comm, int *pcomm_id_out=0)
 constructor taking packed buffer, for testing More...
 
int get_id () const
 Get ID used to reference this PCOMM instance. More...
 
 ~ParallelComm ()
 destructor More...
 
ErrorCode assign_global_ids (EntityHandle this_set, const int dimension, const int start_id=1, const bool largest_dim_only=true, const bool parallel=true, const bool owned_only=false)
 assign a global id space, for largest-dimension or all entities (and in either case for vertices too) More...
 
ErrorCode assign_global_ids (Range entities[], const int dimension, const int start_id, const bool parallel, const bool owned_only)
 assign a global id space, for largest-dimension or all entities (and in either case for vertices too) More...
 
ErrorCode check_global_ids (EntityHandle this_set, const int dimension, const int start_id=1, const bool largest_dim_only=true, const bool parallel=true, const bool owned_only=false)
 check for global ids, based only on whether the global id tag handle exists; if it does not, create global ids for the specified dimensions More...
 
ErrorCode send_entities (const int to_proc, Range &orig_ents, const bool adjs, const bool tags, const bool store_remote_handles, const bool is_iface, Range &final_ents, int &incoming1, int &incoming2, TupleList &entprocs, std::vector< MPI_Request > &recv_remoteh_reqs, bool wait_all=true)
 send entities to another processor, optionally waiting until it's done More...
 
ErrorCode send_entities (std::vector< unsigned int > &send_procs, std::vector< Range * > &send_ents, int &incoming1, int &incoming2, const bool store_remote_handles)
 
ErrorCode recv_entities (const int from_proc, const bool store_remote_handles, const bool is_iface, Range &final_ents, int &incoming1, int &incoming2, std::vector< std::vector< EntityHandle > > &L1hloc, std::vector< std::vector< EntityHandle > > &L1hrem, std::vector< std::vector< int > > &L1p, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, std::vector< MPI_Request > &recv_remoteh_reqs, bool wait_all=true)
 Receive entities from another processor, optionally waiting until it's done. More...
 
ErrorCode recv_entities (std::set< unsigned int > &recv_procs, int incoming1, int incoming2, const bool store_remote_handles, const bool migrate=false)
 
ErrorCode recv_messages (const int from_proc, const bool store_remote_handles, const bool is_iface, Range &final_ents, int &incoming1, int &incoming2, std::vector< std::vector< EntityHandle > > &L1hloc, std::vector< std::vector< EntityHandle > > &L1hrem, std::vector< std::vector< int > > &L1p, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, std::vector< MPI_Request > &recv_remoteh_reqs)
 Receive messages from another processor in while loop. More...
 
ErrorCode recv_remote_handle_messages (const int from_proc, int &incoming2, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, std::vector< MPI_Request > &recv_remoteh_reqs)
 
ErrorCode exchange_ghost_cells (int ghost_dim, int bridge_dim, int num_layers, int addl_ents, bool store_remote_handles, bool wait_all=true, EntityHandle *file_set=NULL)
 Exchange ghost cells with neighboring procs Neighboring processors are those sharing an interface with this processor. All entities of dimension ghost_dim within num_layers of interface, measured going through bridge_dim, are exchanged. See MeshTopoUtil::get_bridge_adjacencies for description of bridge adjacencies. If wait_all is false and store_remote_handles is true, MPI_Request objects are available in the sendReqs[2*MAX_SHARING_PROCS] member array, with inactive requests marked as MPI_REQUEST_NULL. If store_remote_handles or wait_all is false, this function returns after all entities have been received and processed. More...
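
 For illustration, a minimal sketch of a typical call, assuming pc is a ParallelComm whose shared entities have already been resolved (the parameter choices below are just one common configuration, not the only one):

 #include "moab/ParallelComm.hpp"

 // Request one layer of 3-dimensional ghost elements, bridged through
 // vertices (dimension 0), no additional entities, storing remote handles.
 moab::ErrorCode ghost_one_layer( moab::ParallelComm& pc )
 {
     return pc.exchange_ghost_cells( /*ghost_dim*/ 3, /*bridge_dim*/ 0,
                                     /*num_layers*/ 1, /*addl_ents*/ 0,
                                     /*store_remote_handles*/ true );
 }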
 
ErrorCode post_irecv (std::vector< unsigned int > &exchange_procs)
 Post "MPI_Irecv" before meshing. More...
 
ErrorCode post_irecv (std::vector< unsigned int > &shared_procs, std::set< unsigned int > &recv_procs)
 
ErrorCode exchange_owned_meshs (std::vector< unsigned int > &exchange_procs, std::vector< Range * > &exchange_ents, std::vector< MPI_Request > &recv_ent_reqs, std::vector< MPI_Request > &recv_remoteh_reqs, bool store_remote_handles, bool wait_all=true, bool migrate=false, int dim=0)
 Exchange owned mesh for input mesh entities and sets This function should be called collectively over the communicator for this ParallelComm. If this version is called, all shared exchanged entities should have a value for this tag (or the tag should have a default value). More...
 
ErrorCode exchange_owned_mesh (std::vector< unsigned int > &exchange_procs, std::vector< Range * > &exchange_ents, std::vector< MPI_Request > &recv_ent_reqs, std::vector< MPI_Request > &recv_remoteh_reqs, const bool recv_posted, bool store_remote_handles, bool wait_all, bool migrate=false)
 Exchange owned mesh for input mesh entities and sets This function is called twice by exchange_owned_meshs to exchange entities before sets. More...
 
ErrorCode exchange_tags (const std::vector< Tag > &src_tags, const std::vector< Tag > &dst_tags, const Range &entities)
 Exchange tags for all shared and ghosted entities This function should be called collectively over the communicator for this ParallelComm. If this version is called, all ghosted/shared entities should have a value for this tag (or the tag should have a default value). If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities this function must still be called since it is collective. More...
 
ErrorCode exchange_tags (const char *tag_name, const Range &entities)
 Exchange tags for all shared and ghosted entities This function should be called collectively over the communicator for this ParallelComm. If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities this function must still be called since it is collective. More...
 
ErrorCode exchange_tags (Tag tagh, const Range &entities)
 Exchange tags for all shared and ghosted entities This function should be called collectively over the communicator for this ParallelComm. If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities this function must still be called since it is collective. More...
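
 As a usage sketch of the tag-name variant (the tag name "density" is hypothetical; an empty range means all shared entities participate, and the call is collective):

 #include "moab/ParallelComm.hpp"
 #include "moab/Range.hpp"

 // Push owned values of a tag to the ghost/shared copies on other processors.
 // Every rank must make this call, even ranks that own no entities.
 moab::ErrorCode sync_density_tag( moab::ParallelComm& pc )
 {
     moab::Range empty_range;  // empty: exchange over all shared entities
     return pc.exchange_tags( "density", empty_range );
 }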
 
ErrorCode reduce_tags (const std::vector< Tag > &src_tags, const std::vector< Tag > &dst_tags, const MPI_Op mpi_op, const Range &entities)
 Perform data reduction operation for all shared and ghosted entities This function should be called collectively over the communicator for this ParallelComm. If this version is called, all ghosted/shared entities should have a value for this tag (or the tag should have a default value). Operation is any MPI_Op, with result stored in destination tag. More...
 
ErrorCode reduce_tags (const char *tag_name, const MPI_Op mpi_op, const Range &entities)
 Perform data reduction operation for all shared and ghosted entities Same as std::vector variant except for one tag specified by name. More...
 
ErrorCode reduce_tags (Tag tag_handle, const MPI_Op mpi_op, const Range &entities)
 Perform data reduction operation for all shared and ghosted entities Same as std::vector variant except for one tag specified by handle. More...
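
 For example, a sketch of a sum reduction using the tag-name variant (the tag name "flux" is hypothetical); values from all sharing processors are combined with the given MPI_Op and the result is stored back in the tag:

 #include "moab/ParallelComm.hpp"
 #include "moab/Range.hpp"
 #include <mpi.h>

 // Accumulate contributions to a tag from all processors sharing each entity.
 moab::ErrorCode accumulate_flux( moab::ParallelComm& pc, const moab::Range& shared_ents )
 {
     return pc.reduce_tags( "flux", MPI_SUM, shared_ents );
 }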
 
ErrorCode broadcast_entities (const int from_proc, Range &entities, const bool adjacencies=false, const bool tags=true)
 Broadcast all entities resident on from_proc to other processors This function assumes remote handles are not being stored, since (usually) every processor will know about the whole mesh. More...
 
ErrorCode scatter_entities (const int from_proc, std::vector< Range > &entities, const bool adjacencies=false, const bool tags=true)
 Scatter entities on from_proc to other processors This function assumes remote handles are not being stored, since (usually) every processor will know about the whole mesh. More...
 
ErrorCode send_recv_entities (std::vector< int > &send_procs, std::vector< std::vector< int > > &msgsizes, std::vector< std::vector< EntityHandle > > &senddata, std::vector< std::vector< EntityHandle > > &recvdata)
 Send and receives data from a set of processors. More...
 
ErrorCode update_remote_data (EntityHandle entity, std::vector< int > &procs, std::vector< EntityHandle > &handles)
 
ErrorCode get_remote_handles (EntityHandle *local_vec, EntityHandle *rem_vec, int num_ents, int to_proc)
 
ErrorCode resolve_shared_ents (EntityHandle this_set, Range &proc_ents, int resolve_dim=-1, int shared_dim=-1, Range *skin_ents=NULL, const Tag *id_tag=0)
 Resolve shared entities between processors. More...
 
ErrorCode resolve_shared_ents (EntityHandle this_set, int resolve_dim=3, int shared_dim=-1, const Tag *id_tag=0)
 Resolve shared entities between processors. More...
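
 As a usage sketch (one common calling convention, not the only one): resolve sharing over the whole instance for a 3-dimensional mesh, letting the shared dimension default to everything below the resolve dimension:

 #include "moab/ParallelComm.hpp"

 // this_set = 0 (root set), resolve_dim = 3, shared_dim = -1 (all lower dimensions).
 // Collective over the ParallelComm's communicator.
 moab::ErrorCode resolve_all_shared( moab::ParallelComm& pc )
 {
     return pc.resolve_shared_ents( 0, 3, -1 );
 }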
 
ErrorCode resolve_shared_sets (EntityHandle this_set, const Tag *id_tag=0)
 
ErrorCode resolve_shared_sets (Range &candidate_sets, Tag id_tag)
 
ErrorCode augment_default_sets_with_ghosts (EntityHandle file_set)
 
ErrorCode get_pstatus (EntityHandle entity, unsigned char &pstatus_val)
 Get parallel status of an entity Returns the parallel status of an entity. More...
 
ErrorCode get_pstatus_entities (int dim, unsigned char pstatus_val, Range &pstatus_ents)
 Get entities with the given pstatus bit(s) set Returns any entities whose pstatus tag value v satisfies (v & pstatus_val) More...
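
 For example, a sketch that gathers all 3-dimensional entities owned by another rank (PSTATUS_NOT_OWNED comes from MBParallelConventions.h):

 #include "moab/ParallelComm.hpp"
 #include "moab/Range.hpp"
 #include "MBParallelConventions.h"

 // Collect 3D entities whose pstatus has the "not owned" bit set,
 // i.e. ghost or shared copies owned by some other processor.
 moab::ErrorCode get_unowned_cells( moab::ParallelComm& pc, moab::Range& unowned )
 {
     return pc.get_pstatus_entities( 3, PSTATUS_NOT_OWNED, unowned );
 }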
 
ErrorCode get_owner (EntityHandle entity, int &owner)
 Return the rank of the entity owner. More...
 
ErrorCode get_owner_handle (EntityHandle entity, int &owner, EntityHandle &handle)
 Return the owner processor and handle of a given entity. More...
 
ErrorCode get_sharing_data (const EntityHandle entity, int *ps, EntityHandle *hs, unsigned char &pstat, unsigned int &num_ps)
 Get the shared processors/handles for an entity Get the shared processors/handles for an entity. Arrays must be large enough to receive data for all sharing procs. Does not include this proc if only shared with one other proc. More...
 
ErrorCode get_sharing_data (const EntityHandle entity, int *ps, EntityHandle *hs, unsigned char &pstat, int &num_ps)
 Get the shared processors/handles for an entity Same as other version but with int num_ps. More...
 
ErrorCode get_sharing_data (const EntityHandle *entities, int num_entities, std::set< int > &procs, int op=Interface::INTERSECT)
 Get the intersection or union of all sharing processors Get the intersection or union of all sharing processors. Processor set is cleared as part of this function. More...
 
ErrorCode get_sharing_data (const Range &entities, std::set< int > &procs, int op=Interface::INTERSECT)
 Get the intersection or union of all sharing processors Same as previous variant but with range as input. More...
 
ErrorCode get_shared_entities (int other_proc, Range &shared_ents, int dim=-1, const bool iface=false, const bool owned_filter=false)
 Get shared entities of specified dimension If other_proc is -1, any shared entities are returned. If dim is -1, entities of all dimensions on interface are returned. More...
 
ErrorCode get_interface_procs (std::set< unsigned int > &iface_procs, const bool get_buffs=false)
 get processors with which this processor shares an interface More...
 
ErrorCode get_comm_procs (std::set< unsigned int > &procs)
 get processors with which this processor communicates More...
 
ErrorCode get_entityset_procs (EntityHandle entity_set, std::vector< unsigned > &ranks) const
 Get array of process IDs sharing a set. Returns zero and passes back NULL if set is not shared. More...
 
ErrorCode get_entityset_owner (EntityHandle entity_set, unsigned &owner_rank, EntityHandle *remote_handle=0) const
 Get rank of the owner of a shared set. Returns this proc if set is not shared. Optionally returns handle on owning process for shared set. More...
 
ErrorCode get_entityset_local_handle (unsigned owning_rank, EntityHandle remote_handle, EntityHandle &local_handle) const
 Given set owner and handle on owner, find local set handle. More...
 
ErrorCode get_shared_sets (Range &result) const
 Get all shared sets. More...
 
ErrorCode get_entityset_owners (std::vector< unsigned > &ranks) const
 Get ranks of all processes that own at least one set that is shared with this process. Will include the rank of this process if this process owns any shared set. More...
 
ErrorCode get_owned_sets (unsigned owning_rank, Range &sets_out) const
 Get shared sets owned by process with specified rank. More...
 
const ProcConfig & proc_config () const
 Get proc config for this communication object. More...
 
ProcConfig & proc_config ()
 Get proc config for this communication object. More...
 
unsigned rank () const
 
unsigned size () const
 
MPI_Comm comm () const
 
ErrorCode get_shared_proc_tags (Tag &sharedp_tag, Tag &sharedps_tag, Tag &sharedh_tag, Tag &sharedhs_tag, Tag &pstatus_tag)
 return the tags used to indicate shared procs and handles More...
 
Range & partition_sets ()
 return partition, interface set ranges More...
 
const Range & partition_sets () const
 
Range & interface_sets ()
 
const Range & interface_sets () const
 
Tag sharedp_tag ()
 return sharedp tag More...
 
Tag sharedps_tag ()
 return sharedps tag More...
 
Tag sharedh_tag ()
 return sharedh tag More...
 
Tag sharedhs_tag ()
 return sharedhs tag More...
 
Tag pstatus_tag ()
 return pstatus tag More...
 
Tag partition_tag ()
 return partitions set tag More...
 
Tag part_tag ()
 
void print_pstatus (unsigned char pstat, std::string &ostr)
 print contents of pstatus value in human-readable form More...
 
void print_pstatus (unsigned char pstat)
 print contents of pstatus value in human-readable form to std::cout More...
 
ErrorCode get_part_entities (Range &ents, int dim=-1)
 return all the entities in parts owned locally More...
 
EntityHandle get_partitioning () const
 
ErrorCode set_partitioning (EntityHandle h)
 
ErrorCode get_global_part_count (int &count_out) const
 
ErrorCode get_part_owner (int part_id, int &owner_out) const
 
ErrorCode get_part_id (EntityHandle part, int &id_out) const
 
ErrorCode get_part_handle (int id, EntityHandle &handle_out) const
 
ErrorCode create_part (EntityHandle &part_out)
 
ErrorCode destroy_part (EntityHandle part)
 
ErrorCode collective_sync_partition ()
 
ErrorCode get_part_neighbor_ids (EntityHandle part, int neighbors_out[MAX_SHARING_PROCS], int &num_neighbors_out)
 
ErrorCode get_interface_sets (EntityHandle part, Range &iface_sets_out, int *adj_part_id=0)
 
ErrorCode get_owning_part (EntityHandle entity, int &owning_part_id_out, EntityHandle *owning_handle=0)
 
ErrorCode get_sharing_parts (EntityHandle entity, int part_ids_out[MAX_SHARING_PROCS], int &num_part_ids_out, EntityHandle remote_handles[MAX_SHARING_PROCS]=0)
 
ErrorCode filter_pstatus (Range &ents, const unsigned char pstatus_val, const unsigned char op, int to_proc=-1, Range *returned_ents=NULL)
 
ErrorCode get_iface_entities (int other_proc, int dim, Range &iface_ents)
 Get entities on interfaces shared with another proc. More...
 
Interface * get_moab () const
 
ErrorCode clean_shared_tags (std::vector< Range * > &exchange_ents)
 
ErrorCode pack_buffer (Range &orig_ents, const bool adjacencies, const bool tags, const bool store_remote_handles, const int to_proc, Buffer *buff, TupleList *entprocs=NULL, Range *allsent=NULL)
 public because we want to unit test these externally More...
 
ErrorCode unpack_buffer (unsigned char *buff_ptr, const bool store_remote_handles, const int from_proc, const int ind, std::vector< std::vector< EntityHandle > > &L1hloc, std::vector< std::vector< EntityHandle > > &L1hrem, std::vector< std::vector< int > > &L1p, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, std::vector< EntityHandle > &new_ents, const bool created_iface=false)
 
ErrorCode pack_entities (Range &entities, Buffer *buff, const bool store_remote_handles, const int to_proc, const bool is_iface, TupleList *entprocs=NULL, Range *allsent=NULL)
 
ErrorCode unpack_entities (unsigned char *&buff_ptr, const bool store_remote_handles, const int from_ind, const bool is_iface, std::vector< std::vector< EntityHandle > > &L1hloc, std::vector< std::vector< EntityHandle > > &L1hrem, std::vector< std::vector< int > > &L1p, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, std::vector< EntityHandle > &new_ents, const bool created_iface=false)
 unpack entities in buff_ptr More...
 
ErrorCode check_all_shared_handles (bool print_em=false)
 Call exchange_all_shared_handles, then compare the results with tag data on local shared entities. More...
 
ErrorCode pack_shared_handles (std::vector< std::vector< SharedEntityData > > &send_data)
 
ErrorCode check_local_shared ()
 
ErrorCode check_my_shared_handles (std::vector< std::vector< SharedEntityData > > &shents, const char *prefix=NULL)
 
void set_rank (unsigned int r)
 set rank for this pcomm; USED FOR TESTING ONLY! More...
 
void set_size (unsigned int r)
 set size for this pcomm; USED FOR TESTING ONLY! More...
 
int get_buffers (int to_proc, bool *is_new=NULL)
 get (and possibly allocate) buffers for messages to/from to_proc; returns index of to_proc in buffProcs vector; if is_new is non-NULL, sets to whether new buffer was allocated PUBLIC ONLY FOR TESTING! More...
 
const std::vector< unsigned int > & buff_procs () const
 get buff processor vector More...
 
ErrorCode unpack_remote_handles (unsigned int from_proc, unsigned char *&buff_ptr, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p)
 
ErrorCode pack_remote_handles (std::vector< EntityHandle > &L1hloc, std::vector< EntityHandle > &L1hrem, std::vector< int > &procs, unsigned int to_proc, Buffer *buff)
 
ErrorCode create_interface_sets (std::map< std::vector< int >, std::vector< EntityHandle > > &proc_nvecs)
 
ErrorCode create_interface_sets (EntityHandle this_set, int resolve_dim, int shared_dim)
 
ErrorCode tag_shared_verts (TupleList &shared_ents, std::map< std::vector< int >, std::vector< EntityHandle > > &proc_nvecs, Range &proc_verts, unsigned int i_extra=1)
 
ErrorCode list_entities (const EntityHandle *ents, int num_ents)
 
ErrorCode list_entities (const Range &ents)
 
void set_send_request (int n_request)
 
void set_recv_request (int n_request)
 
void reset_all_buffers ()
 reset message buffers to their initial state More...
 
void set_debug_verbosity (int verb)
 set the verbosity level of output from this pcomm More...
 
int get_debug_verbosity ()
 get the verbosity level of output from this pcomm More...
 
ErrorCode gather_data (Range &gather_ents, Tag &tag_handle, Tag id_tag=0, EntityHandle gather_set=0, int root_proc_rank=0)
 
ErrorCode settle_intersection_points (Range &edges, Range &shared_edges_owned, std::vector< std::vector< EntityHandle > * > &extraNodesVec, double tolerance)
 
ErrorCode delete_entities (Range &to_delete)
 
ErrorCode correct_thin_ghost_layers ()
 

Static Public Member Functions

static ParallelComm * get_pcomm (Interface *impl, const int index)
 get the indexed pcomm object from the interface More...
 
static ParallelComm * get_pcomm (Interface *impl, EntityHandle partitioning, const MPI_Comm *comm=0)
 Get ParallelComm instance associated with partition handle Will create ParallelComm instance if a) one does not already exist and b) a valid value for MPI_Comm is passed. More...
 
static ErrorCode get_all_pcomm (Interface *impl, std::vector< ParallelComm * > &list)
 
static ErrorCode exchange_ghost_cells (ParallelComm **pc, unsigned int num_procs, int ghost_dim, int bridge_dim, int num_layers, int addl_ents, bool store_remote_handles, EntityHandle *file_sets=NULL)
 Static version of exchange_ghost_cells, exchanging info through buffers rather than messages. More...
 
static ErrorCode resolve_shared_ents (ParallelComm **pc, const unsigned int np, EntityHandle this_set, const int to_dim)
 
static Tag pcomm_tag (Interface *impl, bool create_if_missing=true)
 return pcomm tag; static because might not have a pcomm before going to look for one on the interface More...
 
static ErrorCode check_all_shared_handles (ParallelComm **pcs, int num_pcs)
 

Static Public Attributes

static unsigned char PROC_SHARED
 
static unsigned char PROC_OWNER
 
static const unsigned int INITIAL_BUFF_SIZE = 1024
 

Private Member Functions

ErrorCode reduce_void (int tag_data_type, const MPI_Op mpi_op, int num_ents, void *old_vals, void *new_vals)
 
template<class T >
ErrorCode reduce (const MPI_Op mpi_op, int num_ents, void *old_vals, void *new_vals)
 
void print_debug_isend (int from, int to, unsigned char *buff, int tag, int size)
 
void print_debug_irecv (int to, int from, unsigned char *buff, int size, int tag, int incoming)
 
void print_debug_recd (MPI_Status status)
 
void print_debug_waitany (std::vector< MPI_Request > &reqs, int tag, int proc)
 
void initialize ()
 
ErrorCode set_sharing_data (EntityHandle ent, unsigned char pstatus, int old_nump, int new_nump, int *ps, EntityHandle *hs)
 
ErrorCode check_clean_iface (Range &allsent)
 
void define_mpe ()
 
ErrorCode get_sent_ents (const bool is_iface, const int bridge_dim, const int ghost_dim, const int num_layers, const int addl_ents, Range *sent_ents, Range &allsent, TupleList &entprocs)
 
ErrorCode set_pstatus_entities (Range &pstatus_ents, unsigned char pstatus_val, bool lower_dim_ents=false, bool verts_too=true, int operation=Interface::UNION)
 Set pstatus values on entities. More...
 
ErrorCode set_pstatus_entities (EntityHandle *pstatus_ents, int num_ents, unsigned char pstatus_val, bool lower_dim_ents=false, bool verts_too=true, int operation=Interface::UNION)
 Set pstatus values on entities (vector-based function) More...
 
int estimate_ents_buffer_size (Range &entities, const bool store_remote_handles)
 estimate size required to pack entities More...
 
int estimate_sets_buffer_size (Range &entities, const bool store_remote_handles)
 estimate size required to pack sets More...
 
ErrorCode send_buffer (const unsigned int to_proc, Buffer *send_buff, const int msg_tag, MPI_Request &send_req, MPI_Request &ack_recv_req, int *ack_buff, int &this_incoming, int next_mesg_tag=-1, Buffer *next_recv_buff=NULL, MPI_Request *next_recv_req=NULL, int *next_incoming=NULL)
 send the indicated buffer, possibly sending size first More...
 
ErrorCode recv_buffer (int mesg_tag_expected, const MPI_Status &mpi_status, Buffer *recv_buff, MPI_Request &recv_2nd_req, MPI_Request &ack_req, int &this_incoming, Buffer *send_buff, MPI_Request &send_req, MPI_Request &sent_ack_req, bool &done, Buffer *next_buff=NULL, int next_tag=-1, MPI_Request *next_req=NULL, int *next_incoming=NULL)
 process incoming message; if longer than the initial size, post recv for next part then send ack; if ack, send second part; else indicate that we're done and buffer is ready for processing More...
 
ErrorCode pack_entity_seq (const int nodes_per_entity, const bool store_remote_handles, const int to_proc, Range &these_ents, std::vector< EntityHandle > &entities, Buffer *buff)
 pack a range of entities with equal # verts per entity, along with the range on the sending proc More...
 
ErrorCode print_buffer (unsigned char *buff_ptr, int mesg_type, int from_proc, bool sent)
 
ErrorCode unpack_iface_entities (unsigned char *&buff_ptr, const int from_proc, const int ind, std::vector< EntityHandle > &recd_ents)
 for all the entities in the received buffer; for each, save entities in this instance which match connectivity, or zero if none found More...
 
ErrorCode pack_sets (Range &entities, Buffer *buff, const bool store_handles, const int to_proc)
 
ErrorCode unpack_sets (unsigned char *&buff_ptr, std::vector< EntityHandle > &entities, const bool store_handles, const int to_proc)
 
ErrorCode pack_adjacencies (Range &entities, Range::const_iterator &start_rit, Range &whole_range, unsigned char *&buff_ptr, int &count, const bool just_count, const bool store_handles, const int to_proc)
 
ErrorCode unpack_adjacencies (unsigned char *&buff_ptr, Range &entities, const bool store_handles, const int from_proc)
 
ErrorCode unpack_remote_handles (unsigned int from_proc, const unsigned char *buff_ptr, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p)
 
ErrorCode find_existing_entity (const bool is_iface, const int owner_p, const EntityHandle owner_h, const int num_ents, const EntityHandle *connect, const int num_connect, const EntityType this_type, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, EntityHandle &new_h)
 given connectivity and type, find an existing entity, if there is one More...
 
ErrorCode build_sharedhps_list (const EntityHandle entity, const unsigned char pstatus, const int sharedp, const std::set< unsigned int > &procs, unsigned int &num_ents, int *tmp_procs, EntityHandle *tmp_handles)
 
ErrorCode get_tag_send_list (const Range &all_entities, std::vector< Tag > &all_tags, std::vector< Range > &tag_ranges)
 Get list of tags for which to exchange data. More...
 
ErrorCode pack_tags (Range &entities, const std::vector< Tag > &src_tags, const std::vector< Tag > &dst_tags, const std::vector< Range > &tag_ranges, Buffer *buff, const bool store_handles, const int to_proc)
 Serialize entity tag data. More...
 
ErrorCode packed_tag_size (Tag source_tag, const Range &entities, int &count_out)
 Calculate buffer size required to pack tag data. More...
 
ErrorCode pack_tag (Tag source_tag, Tag destination_tag, const Range &entities, const std::vector< EntityHandle > &whole_range, Buffer *buff, const bool store_remote_handles, const int to_proc)
 Serialize tag data. More...
 
ErrorCode unpack_tags (unsigned char *&buff_ptr, std::vector< EntityHandle > &entities, const bool store_handles, const int to_proc, const MPI_Op *const mpi_op=NULL)
 
ErrorCode tag_shared_verts (TupleList &shared_verts, Range *skin_ents, std::map< std::vector< int >, std::vector< EntityHandle > > &proc_nvecs, Range &proc_verts)
 
ErrorCode get_proc_nvecs (int resolve_dim, int shared_dim, Range *skin_ents, std::map< std::vector< int >, std::vector< EntityHandle > > &proc_nvecs)
 
ErrorCode create_iface_pc_links ()
 
ErrorCode pack_range_map (Range &this_range, EntityHandle actual_start, HandleMap &handle_map)
 pack a range map with keys in this_range and values a contiguous series of handles starting at actual_start More...
 
bool is_iface_proc (EntityHandle this_set, int to_proc)
 returns true if the set is an interface shared with to_proc More...
 
ErrorCode update_iface_sets (Range &sent_ents, std::vector< EntityHandle > &remote_handles, int from_proc)
 for any remote_handles set to zero, remove corresponding sent_ents from iface_sets corresponding to from_proc More...
 
ErrorCode get_ghosted_entities (int bridge_dim, int ghost_dim, int to_proc, int num_layers, int addl_ents, Range &ghosted_ents)
 for specified bridge/ghost dimension, to_proc, and number of layers, get the entities to be ghosted, and info on additional procs needing to communicate with to_proc More...
 
ErrorCode add_verts (Range &sent_ents)
 add vertices adjacent to entities in this list More...
 
ErrorCode exchange_all_shared_handles (std::vector< std::vector< SharedEntityData > > &send_data, std::vector< std::vector< SharedEntityData > > &result)
 Every processor sends shared entity handle data to every other processor that it shares entities with. Passed back map is all received data, indexed by processor ID. This function is intended to be used for debugging. More...
 
ErrorCode get_remote_handles (const bool store_remote_handles, EntityHandle *from_vec, EntityHandle *to_vec_tmp, int num_ents, int to_proc, const std::vector< EntityHandle > &new_ents)
 replace handles in from_vec with corresponding handles on to_proc (by checking shared[p/h]_tag and shared[p/h]s_tag); if there is no remote handle and new_ents is non-null, substitute CREATE_HANDLE(MBMAXTYPE, index), where index is the handle's position in new_ents More...
 
ErrorCode get_remote_handles (const bool store_remote_handles, const Range &from_range, Range &to_range, int to_proc, const std::vector< EntityHandle > &new_ents)
 same as other version, except from_range and to_range should be different here More...
 
ErrorCode get_remote_handles (const bool store_remote_handles, const Range &from_range, EntityHandle *to_vec, int to_proc, const std::vector< EntityHandle > &new_ents)
 same as other version, except packs range into vector More...
 
ErrorCode get_local_handles (EntityHandle *from_vec, int num_ents, const Range &new_ents)
 goes through from_vec, and for any with type MBMAXTYPE, replaces with new_ents value at index corresponding to id of entity in from_vec More...
 
ErrorCode get_local_handles (const Range &remote_handles, Range &local_handles, const std::vector< EntityHandle > &new_ents)
 same as above except puts results in range More...
 
ErrorCode get_local_handles (EntityHandle *from_vec, int num_ents, const std::vector< EntityHandle > &new_ents)
 same as above except gets new_ents from vector More...
 
ErrorCode update_remote_data (Range &local_range, Range &remote_range, int other_proc, const unsigned char add_pstat)
 
ErrorCode update_remote_data (const EntityHandle new_h, const int *ps, const EntityHandle *hs, const int num_ps, const unsigned char add_pstat)
 
ErrorCode update_remote_data_old (const EntityHandle new_h, const int *ps, const EntityHandle *hs, const int num_ps, const unsigned char add_pstat)
 
ErrorCode tag_iface_entities ()
 Set pstatus tag interface bit on entities in sets passed in. More...
 
int add_pcomm (ParallelComm *pc)
 add a pc to the iface instance tag PARALLEL_COMM More...
 
void remove_pcomm (ParallelComm *pc)
 remove a pc from the iface instance tag PARALLEL_COMM More...
 
ErrorCode check_sent_ents (Range &allsent)
 check entities to make sure there are no zero-valued remote handles where they shouldn't be More...
 
ErrorCode assign_entities_part (std::vector< EntityHandle > &entities, const int proc)
 assign entities to the input processor part More...
 
ErrorCode remove_entities_part (Range &entities, const int proc)
 remove entities from the input processor part More...
 
void delete_all_buffers ()
 delete all message buffers More...
 

Private Attributes

Interface * mbImpl
 MB interface associated with this ParallelComm instance. More...
 
ProcConfig procConfig
 Proc config object, keeps info on parallel stuff. More...
 
SequenceManager * sequenceManager
 Sequence manager, to get more efficient access to entities. More...
 
Error * errorHandler
 Error handler. More...
 
std::vector< Buffer * > localOwnedBuffs
 more data buffers, proc-specific More...
 
std::vector< Buffer * > remoteOwnedBuffs
 
std::vector< MPI_Request > sendReqs
 request objects, may be used if store_remote_handles is used More...
 
std::vector< MPI_Request > recvReqs
 receive request objects More...
 
std::vector< MPI_Request > recvRemotehReqs
 
std::vector< unsigned int > buffProcs
 processor rank for each buffer index More...
 
Range partitionSets
 the partition and interface sets for this communication instance More...
 
Range interfaceSets
 
std::set< EntityHandle > sharedEnts
 all local entities shared with others, whether ghost or ghosted More...
 
Tag sharedpTag
 tags used to save sharing procs and handles More...
 
Tag sharedpsTag
 
Tag sharedhTag
 
Tag sharedhsTag
 
Tag pstatusTag
 
Tag ifaceSetsTag
 
Tag partitionTag
 
int globalPartCount
 Cache of global part count. More...
 
EntityHandle partitioningSet
 entity set containing all parts More...
 
std::ofstream myFile
 
int pcommID
 
int ackbuff
 
DebugOutput * myDebug
 used to set verbosity level and to report output More...
 
SharedSetData * sharedSetData
 Data about shared sets. More...
 

Friends

class ParallelMergeMesh
 

Detailed Description

Parallel communications in MOAB.

Author
Tim Tautges

This class implements methods to communicate mesh between processors
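
A minimal end-to-end sketch, assuming an MPI-enabled MOAB build; the file name, read options, and ghosting parameters are illustrative choices only:

#include "moab/Core.hpp"
#include "moab/ParallelComm.hpp"
#include <mpi.h>
#include <iostream>

int main( int argc, char** argv )
{
    MPI_Init( &argc, &argv );
    {
        moab::Core mb;
        // Create the ParallelComm before reading so the parallel reader finds it (index 0)
        moab::ParallelComm pcomm( &mb, MPI_COMM_WORLD );

        // Read a pre-partitioned file in parallel (file name is hypothetical)
        moab::ErrorCode rval = mb.load_file(
            "mesh.h5m", 0,
            "PARALLEL=READ_PART;PARTITION=PARALLEL_PARTITION;PARALLEL_RESOLVE_SHARED_ENTS" );
        if( moab::MB_SUCCESS != rval ) std::cerr << "load failed on rank " << pcomm.rank() << "\n";

        // One layer of 3D ghost elements bridged through vertices
        rval = pcomm.exchange_ghost_cells( 3, 0, 1, 0, true );
        if( moab::MB_SUCCESS != rval ) std::cerr << "ghost exchange failed\n";
    }
    MPI_Finalize();
    return 0;
}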

Examples
ComputeTriDual.cpp, and LaplacianSmoother.cpp.

Definition at line 54 of file ParallelComm.hpp.

Constructor & Destructor Documentation

◆ ParallelComm() [1/2]

moab::ParallelComm::ParallelComm ( Interface *  impl,
MPI_Comm  comm,
int *  pcomm_id_out = 0 
)

constructor

Definition at line 313 of file ParallelComm.cpp.

314  : mbImpl( impl ), procConfig( cm ), sharedpTag( 0 ), sharedpsTag( 0 ), sharedhTag( 0 ), sharedhsTag( 0 ),
316  myDebug( NULL )
317 {
318  initialize();
319  sharedSetData = new SharedSetData( *impl, pcommID, procConfig.proc_rank() );
320  if( id ) *id = pcommID;
321 }

References initialize(), pcommID, moab::ProcConfig::proc_rank(), procConfig, and sharedSetData.

Referenced by get_pcomm().
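
A minimal construction sketch (it assumes MPI_Init has already been called); the new instance registers itself on the Interface, and the optional third argument returns the id under which it can later be retrieved with get_pcomm():

#include "moab/Core.hpp"
#include "moab/ParallelComm.hpp"
#include <mpi.h>

moab::ParallelComm* make_pcomm( moab::Core& mb )
{
    int pcomm_id = -1;
    moab::ParallelComm* pc = new moab::ParallelComm( &mb, MPI_COMM_WORLD, &pcomm_id );
    // pcomm_id now holds the index usable with ParallelComm::get_pcomm( &mb, pcomm_id )
    return pc;
}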

◆ ParallelComm() [2/2]

moab::ParallelComm::ParallelComm ( Interface *  impl,
std::vector< unsigned char > &  tmp_buff,
MPI_Comm  comm,
int *  pcomm_id_out = 0 
)

constructor taking packed buffer, for testing

Definition at line 323 of file ParallelComm.cpp.

324  : mbImpl( impl ), procConfig( cm ), sharedpTag( 0 ), sharedpsTag( 0 ), sharedhTag( 0 ), sharedhsTag( 0 ),
326  myDebug( NULL )
327 {
328  initialize();
329  sharedSetData = new SharedSetData( *impl, pcommID, procConfig.proc_rank() );
330  if( id ) *id = pcommID;
331 }

References initialize(), pcommID, moab::ProcConfig::proc_rank(), procConfig, and sharedSetData.

◆ ~ParallelComm()

moab::ParallelComm::~ParallelComm ( )

destructor

Definition at line 333 of file ParallelComm.cpp.

334 {
335  remove_pcomm( this );
336  delete_all_buffers();
337  delete myDebug;
338  delete sharedSetData;
339 }

References delete_all_buffers(), myDebug, remove_pcomm(), and sharedSetData.

Member Function Documentation

◆ add_pcomm()

int moab::ParallelComm::add_pcomm ( ParallelComm *  pc)
private

add a pc to the iface instance tag PARALLEL_COMM

Definition at line 374 of file ParallelComm.cpp.

375 {
376  // Add this pcomm to instance tag
377  std::vector< ParallelComm* > pc_array( MAX_SHARING_PROCS, (ParallelComm*)NULL );
378  Tag pc_tag = pcomm_tag( mbImpl, true );
379  assert( 0 != pc_tag );
380 
381  const EntityHandle root = 0;
382  ErrorCode result = mbImpl->tag_get_data( pc_tag, &root, 1, (void*)&pc_array[0] );
383  if( MB_SUCCESS != result && MB_TAG_NOT_FOUND != result ) return -1;
384  int index = 0;
385  while( index < MAX_SHARING_PROCS && pc_array[index] )
386  index++;
387  if( index == MAX_SHARING_PROCS )
388  {
389  index = -1;
390  assert( false );
391  }
392  else
393  {
394  pc_array[index] = pc;
395  mbImpl->tag_set_data( pc_tag, &root, 1, (void*)&pc_array[0] );
396  }
397  return index;
398 }

References ErrorCode, MAX_SHARING_PROCS, MB_SUCCESS, MB_TAG_NOT_FOUND, mbImpl, pcomm_tag(), moab::Interface::tag_get_data(), and moab::Interface::tag_set_data().

Referenced by initialize().

◆ add_verts()

ErrorCode moab::ParallelComm::add_verts ( Range &  sent_ents)
private

add vertices adjacent to entities in this list

Definition at line 7502 of file ParallelComm.cpp.

7503 {
7504  // Get the verts adj to these entities, since we'll have to send those too
7505 
7506  // First check sets
7507  std::pair< Range::const_iterator, Range::const_iterator > set_range = sent_ents.equal_range( MBENTITYSET );
7508  ErrorCode result = MB_SUCCESS, tmp_result;
7509  for( Range::const_iterator rit = set_range.first; rit != set_range.second; ++rit )
7510  {
7511  tmp_result = mbImpl->get_entities_by_type( *rit, MBVERTEX, sent_ents );MB_CHK_SET_ERR( tmp_result, "Failed to get contained verts" );
7512  }
7513 
7514  // Now non-sets
7515  Range tmp_ents;
7516  std::copy( sent_ents.begin(), set_range.first, range_inserter( tmp_ents ) );
7517  result = mbImpl->get_adjacencies( tmp_ents, 0, false, sent_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get vertices adj to ghosted ents" );
7518 
7519  // if polyhedra, need to add all faces from there
7520  Range polyhedra = sent_ents.subset_by_type( MBPOLYHEDRON );
7521  // get all faces adjacent to every polyhedra
7522  result = mbImpl->get_connectivity( polyhedra, sent_ents );MB_CHK_SET_ERR( result, "Failed to get polyhedra faces" );
7523  return result;
7524 }

References moab::Range::begin(), moab::Range::equal_range(), ErrorCode, moab::Interface::get_adjacencies(), moab::Interface::get_connectivity(), moab::Interface::get_entities_by_type(), MB_CHK_SET_ERR, MB_SUCCESS, MBENTITYSET, mbImpl, MBPOLYHEDRON, MBVERTEX, moab::Range::subset_by_type(), and moab::Interface::UNION.

Referenced by broadcast_entities(), exchange_owned_mesh(), get_ghosted_entities(), scatter_entities(), and send_entities().

◆ assign_entities_part()

ErrorCode moab::ParallelComm::assign_entities_part ( std::vector< EntityHandle > &  entities,
const int  proc 
)
private

assign entities to the input processor part

Definition at line 7297 of file ParallelComm.cpp.

7298 {
7299  EntityHandle part_set;
7300  ErrorCode result = get_part_handle( proc, part_set );MB_CHK_SET_ERR( result, "Failed to get part handle" );
7301 
7302  if( part_set > 0 )
7303  {
7304  result = mbImpl->add_entities( part_set, &entities[0], entities.size() );MB_CHK_SET_ERR( result, "Failed to add entities to part set" );
7305  }
7306 
7307  return MB_SUCCESS;
7308 }

References moab::Interface::add_entities(), entities, ErrorCode, get_part_handle(), MB_CHK_SET_ERR, MB_SUCCESS, and mbImpl.

Referenced by exchange_owned_mesh(), and recv_entities().

◆ assign_global_ids() [1/2]

ErrorCode moab::ParallelComm::assign_global_ids ( EntityHandle  this_set,
const int  dimension,
const int  start_id = 1,
const bool  largest_dim_only = true,
const bool  parallel = true,
const bool  owned_only = false 
)

assign a global id space, for largest-dimension or all entities (and in either case for vertices too)

Assign a global id space, for largest-dimension or all entities (and in either case for vertices too)

Parameters
owned_onlyIf true, do not get global IDs for non-owned entities from remote processors.
Examples
ComputeTriDual.cpp.

Definition at line 421 of file ParallelComm.cpp.

427 {
428  Range entities[4];
429  ErrorCode result;
430  std::vector< unsigned char > pstatus;
431  for( int dim = 0; dim <= dimension; dim++ )
432  {
433  if( dim == 0 || !largest_dim_only || dim == dimension )
434  {
435  result = mbImpl->get_entities_by_dimension( this_set, dim, entities[dim] );MB_CHK_SET_ERR( result, "Failed to get vertices in assign_global_ids" );
436  }
437 
438  // Need to filter out non-locally-owned entities!!!
439  pstatus.resize( entities[dim].size() );
440  result = mbImpl->tag_get_data( pstatus_tag(), entities[dim], &pstatus[0] );MB_CHK_SET_ERR( result, "Failed to get pstatus in assign_global_ids" );
441 
442  Range dum_range;
443  Range::iterator rit;
444  unsigned int i;
445  for( rit = entities[dim].begin(), i = 0; rit != entities[dim].end(); ++rit, i++ )
446  if( pstatus[i] & PSTATUS_NOT_OWNED ) dum_range.insert( *rit );
447  entities[dim] = subtract( entities[dim], dum_range );
448  }
449 
450  return assign_global_ids( entities, dimension, start_id, parallel, owned_only );
451 }

References dim, entities, ErrorCode, moab::Interface::get_entities_by_dimension(), moab::Range::insert(), MB_CHK_SET_ERR, mbImpl, PSTATUS_NOT_OWNED, pstatus_tag(), size(), moab::subtract(), and moab::Interface::tag_get_data().

Referenced by check_global_ids(), compute_dual_mesh(), moab::NCHelperDomain::create_mesh(), moab::NCHelperScrip::create_mesh(), main(), and resolve_shared_ents().
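
As a usage sketch (assuming shared entities have already been resolved, so ownership is known): number the highest-dimension entities and vertices over the root set, starting at 1; with owned_only = false the ids assigned on owners are then exchanged to shared/ghost copies:

#include "moab/ParallelComm.hpp"

moab::ErrorCode number_mesh( moab::ParallelComm& pc )
{
    // this_set = 0 (root set), dimension = 3, start_id = 1,
    // largest_dim_only = true, parallel = true, owned_only = false
    return pc.assign_global_ids( 0, 3, 1, true, true, false );
}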

◆ assign_global_ids() [2/2]

ErrorCode moab::ParallelComm::assign_global_ids ( Range  entities[],
const int  dimension,
const int  start_id,
const bool  parallel,
const bool  owned_only 
)

assign a global id space, for largest-dimension or all entities (and in either case for vertices too)

Assign a global id space, for largest-dimension or all entities (and in either case for vertices too)

Definition at line 455 of file ParallelComm.cpp.

460 {
461  int local_num_elements[4];
462  ErrorCode result;
463  for( int dim = 0; dim <= dimension; dim++ )
464  {
465  local_num_elements[dim] = entities[dim].size();
466  }
467 
468  // Communicate numbers
469  std::vector< int > num_elements( procConfig.proc_size() * 4 );
470 #ifdef MOAB_HAVE_MPI
471  if( procConfig.proc_size() > 1 && parallel )
472  {
473  int retval =
474  MPI_Allgather( local_num_elements, 4, MPI_INT, &num_elements[0], 4, MPI_INT, procConfig.proc_comm() );
475  if( 0 != retval ) return MB_FAILURE;
476  }
477  else
478 #endif
479  for( int dim = 0; dim < 4; dim++ )
480  num_elements[dim] = local_num_elements[dim];
481 
482  // My entities start at one greater than total_elems[d]
483  int total_elems[4] = { start_id, start_id, start_id, start_id };
484 
485  for( unsigned int proc = 0; proc < procConfig.proc_rank(); proc++ )
486  {
487  for( int dim = 0; dim < 4; dim++ )
488  total_elems[dim] += num_elements[4 * proc + dim];
489  }
490 
491  // Assign global ids now
492  Tag gid_tag = mbImpl->globalId_tag();
493 
494  for( int dim = 0; dim < 4; dim++ )
495  {
496  if( entities[dim].empty() ) continue;
497  num_elements.resize( entities[dim].size() );
498  int i = 0;
499  for( Range::iterator rit = entities[dim].begin(); rit != entities[dim].end(); ++rit )
500  num_elements[i++] = total_elems[dim]++;
501 
502  result = mbImpl->tag_set_data( gid_tag, entities[dim], &num_elements[0] );MB_CHK_SET_ERR( result, "Failed to set global id tag in assign_global_ids" );
503  }
504 
505  if( owned_only ) return MB_SUCCESS;
506 
507  // Exchange tags
508  for( int dim = 1; dim < 4; dim++ )
509  entities[0].merge( entities[dim] );
510 
511  return exchange_tags( gid_tag, entities[0] );
512 }

References dim, entities, ErrorCode, exchange_tags(), moab::Interface::globalId_tag(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), moab::ProcConfig::proc_size(), procConfig, size(), and moab::Interface::tag_set_data().

◆ augment_default_sets_with_ghosts()

ErrorCode moab::ParallelComm::augment_default_sets_with_ghosts ( EntityHandle  file_set)

Extend shared sets with ghost entities. After ghosting, ghost entities do not yet have information about the material set, partition set, or Neumann or Dirichlet set they may belong to. This method assigns ghosted entities to those special entity sets; in some cases those sets must even be created, if they do not yet exist on the local processor.

The special entity sets all have a unique identifier, in the form of an integer tag on the set. The shared sets data is not used, because geometry sets are skipped; they are not uniquely identified.

Parameters
file_set: file set used per application

Definition at line 4766 of file ParallelComm.cpp.

4767 {
4768  // gather all default sets we are interested in, material, neumann, etc
4769  // we will skip geometry sets, because they are not uniquely identified with their tag value
4770  // maybe we will add another tag, like category
4771 
4772  if( procConfig.proc_size() < 2 ) return MB_SUCCESS; // no reason to stop by
4773  const char* const shared_set_tag_names[] = { MATERIAL_SET_TAG_NAME, DIRICHLET_SET_TAG_NAME, NEUMANN_SET_TAG_NAME,
4774                                               PARALLEL_PARTITION_TAG_NAME };
4775 
4776  int num_tags = sizeof( shared_set_tag_names ) / sizeof( shared_set_tag_names[0] );
4777 
4778  Range* rangeSets = new Range[num_tags];
4779  Tag* tags = new Tag[num_tags + 1]; // one extra for global id tag, which is an int, so far
4780 
4781  int my_rank = rank();
4782  int** tagVals = new int*[num_tags];
4783  for( int i = 0; i < num_tags; i++ )
4784  tagVals[i] = NULL;
4785  ErrorCode rval;
4786 
4787  // for each tag, we keep a local map, from the value to the actual set with that value
4788  // we assume that the tag values are unique, for a given set, otherwise we
4789  // do not know to which set to add the entity
4790 
4791  typedef std::map< int, EntityHandle > MVal;
4792  typedef std::map< int, EntityHandle >::iterator itMVal;
4793  MVal* localMaps = new MVal[num_tags];
4794 
4795  for( int i = 0; i < num_tags; i++ )
4796  {
4797 
4798  rval = mbImpl->tag_get_handle( shared_set_tag_names[i], 1, MB_TYPE_INTEGER, tags[i], MB_TAG_ANY );
4799  if( MB_SUCCESS != rval ) continue;
4800  rval = mbImpl->get_entities_by_type_and_tag( file_set, MBENTITYSET, &( tags[i] ), 0, 1, rangeSets[i],
4801  Interface::UNION );MB_CHK_SET_ERR( rval, "can't get sets with a tag" );
4802 
4803  if( rangeSets[i].size() > 0 )
4804  {
4805  tagVals[i] = new int[rangeSets[i].size()];
4806  // fill up with the tag values
4807  rval = mbImpl->tag_get_data( tags[i], rangeSets[i], tagVals[i] );MB_CHK_SET_ERR( rval, "can't get set tag values" );
4808  // now for inverse mapping:
4809  for( int j = 0; j < (int)rangeSets[i].size(); j++ )
4810  {
4811  localMaps[i][tagVals[i][j]] = rangeSets[i][j];
4812  }
4813  }
4814  }
4815  // get the global id tag too
4816  tags[num_tags] = mbImpl->globalId_tag();
4817 
4818  TupleList remoteEnts;
4819  // processor to send to, type of tag (0-mat,) tag value, remote handle
4820  // 1-diri
4821  // 2-neum
4822  // 3-part
4823  //
4824  int initialSize = (int)sharedEnts.size(); // estimate that on average, each shared ent
4825  // will be sent to one processor, for one tag
4826  // we will actually send only entities that are owned locally, and from those
4827  // only those that do have a special tag (material, neumann, etc)
4828  // if we exceed the capacity, we resize the tuple
4829  remoteEnts.initialize( 3, 0, 1, 0, initialSize );
4830  remoteEnts.enableWriteAccess();
4831 
4832  // now, for each owned entity, get the remote handle(s) and Proc(s), and verify if it
4833  // belongs to one of the sets; if yes, create a tuple and append it
4834 
4835  std::set< EntityHandle > own_and_sha;
4836  int ir = 0, jr = 0;
4837  for( std::set< EntityHandle >::iterator vit = sharedEnts.begin(); vit != sharedEnts.end(); ++vit )
4838  {
4839  // ghosted eh
4840  EntityHandle geh = *vit;
4841  if( own_and_sha.find( geh ) != own_and_sha.end() ) // already encountered
4842  continue;
4843  int procs[MAX_SHARING_PROCS];
4844  EntityHandle handles[MAX_SHARING_PROCS];
4845  int nprocs;
4846  unsigned char pstat;
4847  rval = get_sharing_data( geh, procs, handles, pstat, nprocs );
4848  if( rval != MB_SUCCESS )
4849  {
4850  for( int i = 0; i < num_tags; i++ )
4851  delete[] tagVals[i];
4852  delete[] tagVals;
4853 
4854  MB_CHK_SET_ERR( rval, "Failed to get sharing data" );
4855  }
4856  if( pstat & PSTATUS_NOT_OWNED ) continue; // we will send info only for entities that we own
4857  own_and_sha.insert( geh );
4858  for( int i = 0; i < num_tags; i++ )
4859  {
4860  for( int j = 0; j < (int)rangeSets[i].size(); j++ )
4861  {
4862  EntityHandle specialSet = rangeSets[i][j]; // this set has tag i, value tagVals[i][j];
4863  if( mbImpl->contains_entities( specialSet, &geh, 1 ) )
4864  {
4865  // this ghosted entity is in a special set, so form the tuple
4866  // to send to the processors that do not own this
4867  for( int k = 0; k < nprocs; k++ )
4868  {
4869  if( procs[k] != my_rank )
4870  {
4871  if( remoteEnts.get_n() >= remoteEnts.get_max() - 1 )
4872  {
4873  // resize, so we do not overflow
4874  int oldSize = remoteEnts.get_max();
4875  // increase with 50% the capacity
4876  remoteEnts.resize( oldSize + oldSize / 2 + 1 );
4877  }
4878  remoteEnts.vi_wr[ir++] = procs[k]; // send to proc
4879  remoteEnts.vi_wr[ir++] = i; // for the tags [i] (0-3)
4880  remoteEnts.vi_wr[ir++] = tagVals[i][j]; // actual value of the tag
4881  remoteEnts.vul_wr[jr++] = handles[k];
4882  remoteEnts.inc_n();
4883  }
4884  }
4885  }
4886  }
4887  }
4888  // if the local entity has a global id, send it too, so we avoid
4889  // another "exchange_tags" for global id
4890  int gid;
4891  rval = mbImpl->tag_get_data( tags[num_tags], &geh, 1, &gid );MB_CHK_SET_ERR( rval, "Failed to get global id" );
4892  if( gid != 0 )
4893  {
4894  for( int k = 0; k < nprocs; k++ )
4895  {
4896  if( procs[k] != my_rank )
4897  {
4898  if( remoteEnts.get_n() >= remoteEnts.get_max() - 1 )
4899  {
4900  // resize, so we do not overflow
4901  int oldSize = remoteEnts.get_max();
4902  // increase with 50% the capacity
4903  remoteEnts.resize( oldSize + oldSize / 2 + 1 );
4904  }
4905  remoteEnts.vi_wr[ir++] = procs[k]; // send to proc
4906  remoteEnts.vi_wr[ir++] = num_tags; // for the tags [j] (4)
4907  remoteEnts.vi_wr[ir++] = gid; // actual value of the tag
4908  remoteEnts.vul_wr[jr++] = handles[k];
4909  remoteEnts.inc_n();
4910  }
4911  }
4912  }
4913  }
4914 
4915 #ifndef NDEBUG
4916  if( my_rank == 1 && 1 == get_debug_verbosity() ) remoteEnts.print( " on rank 1, before augment routing" );
4917  MPI_Barrier( procConfig.proc_comm() );
4918  int sentEnts = remoteEnts.get_n();
4919  assert( ( sentEnts == jr ) && ( 3 * sentEnts == ir ) );
4920 #endif
4921  // exchange the info now, and send to
4922  gs_data::crystal_data* cd = this->procConfig.crystal_router();
4923  // All communication happens here; no other mpi calls
4924  // Also, this is a collective call
4925  rval = cd->gs_transfer( 1, remoteEnts, 0 );MB_CHK_SET_ERR( rval, "Error in tuple transfer" );
4926 #ifndef NDEBUG
4927  if( my_rank == 0 && 1 == get_debug_verbosity() ) remoteEnts.print( " on rank 0, after augment routing" );
4928  MPI_Barrier( procConfig.proc_comm() );
4929 #endif
4930 
4931  // now process the data received from other processors
4932  int received = remoteEnts.get_n();
4933  for( int i = 0; i < received; i++ )
4934  {
4935  // int from = ents_to_delete.vi_rd[i];
4936  EntityHandle geh = (EntityHandle)remoteEnts.vul_rd[i];
4937  int from_proc = remoteEnts.vi_rd[3 * i];
4938  if( my_rank == from_proc )
4939  std::cout << " unexpected receive from my rank " << my_rank << " during augmenting with ghosts\n ";
4940  int tag_type = remoteEnts.vi_rd[3 * i + 1];
4941  assert( ( 0 <= tag_type ) && ( tag_type <= num_tags ) );
4942  int value = remoteEnts.vi_rd[3 * i + 2];
4943  if( tag_type == num_tags )
4944  {
4945  // it is global id
4946  rval = mbImpl->tag_set_data( tags[num_tags], &geh, 1, &value );MB_CHK_SET_ERR( rval, "Error in setting gid tag" );
4947  }
4948  else
4949  {
4950  // now, based on value and tag type, see if we have that value in the map
4951  MVal& lmap = localMaps[tag_type];
4952  itMVal itm = lmap.find( value );
4953  if( itm == lmap.end() )
4954  {
4955  // the value was not found yet in the local map, so we have to create the set
4956  EntityHandle newSet;
4957  rval = mbImpl->create_meshset( MESHSET_SET, newSet );MB_CHK_SET_ERR( rval, "can't create new set" );
4958  lmap[value] = newSet;
4959  // set the tag value
4960  rval = mbImpl->tag_set_data( tags[tag_type], &newSet, 1, &value );MB_CHK_SET_ERR( rval, "can't set tag for new set" );
4961 
4962  // we also need to add the new created set to the file set, if not null
4963  if( file_set )
4964  {
4965  rval = mbImpl->add_entities( file_set, &newSet, 1 );MB_CHK_SET_ERR( rval, "can't add new set to the file set" );
4966  }
4967  }
4968  // add the entity to the set pointed to by the map
4969  rval = mbImpl->add_entities( lmap[value], &geh, 1 );MB_CHK_SET_ERR( rval, "can't add ghost ent to the set" );
4970  }
4971  }
4972 
4973  for( int i = 0; i < num_tags; i++ )
4974  delete[] tagVals[i];
4975  delete[] tagVals;
4976  delete[] rangeSets;
4977  delete[] tags;
4978  delete[] localMaps;
4979  return MB_SUCCESS;
4980 }

References moab::Interface::add_entities(), moab::Interface::contains_entities(), moab::Interface::create_meshset(), moab::ProcConfig::crystal_router(), DIRICHLET_SET_TAG_NAME, moab::TupleList::enableWriteAccess(), ErrorCode, get_debug_verbosity(), moab::Interface::get_entities_by_type_and_tag(), moab::TupleList::get_max(), moab::TupleList::get_n(), get_sharing_data(), moab::Interface::globalId_tag(), moab::TupleList::inc_n(), moab::TupleList::initialize(), MATERIAL_SET_TAG_NAME, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, MB_TAG_ANY, MB_TYPE_INTEGER, MBENTITYSET, mbImpl, MESHSET_SET, NEUMANN_SET_TAG_NAME, PARALLEL_PARTITION_TAG_NAME, moab::TupleList::print(), moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_size(), procConfig, PSTATUS_NOT_OWNED, rank(), moab::TupleList::resize(), sharedEnts, moab::Range::size(), size(), moab::Interface::tag_get_data(), moab::Interface::tag_get_handle(), moab::Interface::tag_set_data(), moab::Interface::UNION, moab::TupleList::vi_rd, moab::TupleList::vi_wr, moab::TupleList::vul_rd, and moab::TupleList::vul_wr.

Referenced by moab::ReadParallel::load_file().

◆ broadcast_entities()

ErrorCode moab::ParallelComm::broadcast_entities ( const int  from_proc,
Range &  entities,
const bool  adjacencies = false,
const bool  tags = true 
)

Broadcast all entities resident on from_proc to other processors This function assumes remote handles are not being stored, since (usually) every processor will know about the whole mesh.

Parameters
from_procProcessor having the mesh to be broadcast
entitiesOn return, the entities sent or received in this call
adjacenciesIf true, adjacencies are sent for equiv entities (currently unsupported)
tagsIf true, all non-default-valued tags are sent for sent entities

Definition at line 536 of file ParallelComm.cpp.

540 {
541 #ifndef MOAB_HAVE_MPI
542  return MB_FAILURE;
543 #else
544 
545  ErrorCode result = MB_SUCCESS;
546  int success;
547  int buff_size;
548 
549  Buffer buff( INITIAL_BUFF_SIZE );
550  buff.reset_ptr( sizeof( int ) );
551  if( (int)procConfig.proc_rank() == from_proc )
552  {
553  result = add_verts( entities );MB_CHK_SET_ERR( result, "Failed to add adj vertices" );
554 
555  buff.reset_ptr( sizeof( int ) );
556  result = pack_buffer( entities, adjacencies, tags, false, -1, &buff );MB_CHK_SET_ERR( result, "Failed to compute buffer size in broadcast_entities" );
557  buff.set_stored_size();
558  buff_size = buff.buff_ptr - buff.mem_ptr;
559  }
560 
561  success = MPI_Bcast( &buff_size, 1, MPI_INT, from_proc, procConfig.proc_comm() );
562  if( MPI_SUCCESS != success )
563  {
564  MB_SET_ERR( MB_FAILURE, "MPI_Bcast of buffer size failed" );
565  }
566 
567  if( !buff_size ) // No data
568  return MB_SUCCESS;
569 
570  if( (int)procConfig.proc_rank() != from_proc ) buff.reserve( buff_size );
571 
572  size_t offset = 0;
573  while( buff_size )
574  {
575  int sz = std::min( buff_size, MAX_BCAST_SIZE );
576  success = MPI_Bcast( buff.mem_ptr + offset, sz, MPI_UNSIGNED_CHAR, from_proc, procConfig.proc_comm() );
577  if( MPI_SUCCESS != success )
578  {
579  MB_SET_ERR( MB_FAILURE, "MPI_Bcast of buffer failed" );
580  }
581 
582  offset += sz;
583  buff_size -= sz;
584  }
585 
586  if( (int)procConfig.proc_rank() != from_proc )
587  {
588  std::vector< std::vector< EntityHandle > > dum1a, dum1b;
589  std::vector< std::vector< int > > dum1p;
590  std::vector< EntityHandle > dum2, dum4;
591  std::vector< unsigned int > dum3;
592  buff.reset_ptr( sizeof( int ) );
593  result = unpack_buffer( buff.buff_ptr, false, from_proc, -1, dum1a, dum1b, dum1p, dum2, dum2, dum3, dum4 );MB_CHK_SET_ERR( result, "Failed to unpack buffer in broadcast_entities" );
594  std::copy( dum4.begin(), dum4.end(), range_inserter( entities ) );
595  }
596 
597  return MB_SUCCESS;
598 #endif
599 }

References add_verts(), moab::ParallelComm::Buffer::buff_ptr, entities, ErrorCode, INITIAL_BUFF_SIZE, moab::MAX_BCAST_SIZE, MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, moab::ParallelComm::Buffer::mem_ptr, pack_buffer(), moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, moab::ParallelComm::Buffer::reserve(), moab::ParallelComm::Buffer::reset_ptr(), moab::ParallelComm::Buffer::set_stored_size(), and unpack_buffer().

Referenced by moab::ReadParallel::load_file().
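
A usage sketch in which rank 0 reads a mesh serially and broadcasts it to all other ranks (the file name is hypothetical; the choice of rank 0 as the source is an assumption):

#include "moab/Core.hpp"
#include "moab/ParallelComm.hpp"
#include "moab/Range.hpp"

moab::ErrorCode broadcast_mesh( moab::Core& mb, moab::ParallelComm& pc )
{
    moab::Range ents;
    if( 0 == pc.rank() )
    {
        moab::ErrorCode rval = mb.load_file( "mesh.vtk" );
        if( moab::MB_SUCCESS != rval ) return rval;
        rval = mb.get_entities_by_handle( 0, ents );
        if( moab::MB_SUCCESS != rval ) return rval;
    }
    // On rank 0 this sends; on other ranks ents is filled with the received entities.
    return pc.broadcast_entities( 0, ents );
}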

◆ buff_procs()

const std::vector< unsigned int > & moab::ParallelComm::buff_procs ( ) const
inline

get buff processor vector

Definition at line 1569 of file ParallelComm.hpp.

1570 {
1571  return buffProcs;
1572 }

References buffProcs.

◆ build_sharedhps_list()

ErrorCode moab::ParallelComm::build_sharedhps_list ( const EntityHandle  entity,
const unsigned char  pstatus,
const int  sharedp,
const std::set< unsigned int > &  procs,
unsigned int &  num_ents,
int *  tmp_procs,
EntityHandle *  tmp_handles 
)
private

Definition at line 1748 of file ParallelComm.cpp.

1759 {
1760  num_ents = 0;
1761  unsigned char pstat;
1762  ErrorCode result = get_sharing_data( entity, tmp_procs, tmp_handles, pstat, num_ents );MB_CHK_SET_ERR( result, "Failed to get sharing data" );
1763  assert( pstat == pstatus );
1764 
1765  // Build shared proc/handle lists
1766  // Start with multi-shared, since if it is the owner will be first
1767  if( pstatus & PSTATUS_MULTISHARED )
1768  {
1769  }
1770  else if( pstatus & PSTATUS_NOT_OWNED )
1771  {
1772  // If not multishared and not owned, other sharing proc is owner, put that
1773  // one first
1774  assert( "If not owned, I should be shared too" && pstatus & PSTATUS_SHARED && 1 == num_ents );
1775  tmp_procs[1] = procConfig.proc_rank();
1776  tmp_handles[1] = entity;
1777  num_ents = 2;
1778  }
1779  else if( pstatus & PSTATUS_SHARED )
1780  {
1781  // If not multishared and owned, I'm owner
1782  assert( "shared and owned, should be only 1 sharing proc" && 1 == num_ents );
1783  tmp_procs[1] = tmp_procs[0];
1784  tmp_procs[0] = procConfig.proc_rank();
1785  tmp_handles[1] = tmp_handles[0];
1786  tmp_handles[0] = entity;
1787  num_ents = 2;
1788  }
1789  else
1790  {
1791  // Not shared yet, just add owner (me)
1792  tmp_procs[0] = procConfig.proc_rank();
1793  tmp_handles[0] = entity;
1794  num_ents = 1;
1795  }
1796 
1797 #ifndef NDEBUG
1798  int tmp_ps = num_ents;
1799 #endif
1800 
1801  // Now add others, with zero handle for now
1802  for( std::set< unsigned int >::iterator sit = procs.begin(); sit != procs.end(); ++sit )
1803  {
1804 #ifndef NDEBUG
1805  if( tmp_ps && std::find( tmp_procs, tmp_procs + tmp_ps, *sit ) != tmp_procs + tmp_ps )
1806  {
1807  std::cerr << "Trouble with something already in shared list on proc " << procConfig.proc_rank()
1808  << ". Entity:" << std::endl;
1809  list_entities( &entity, 1 );
1810  std::cerr << "pstatus = " << (int)pstatus << ", sharedp = " << sharedp << std::endl;
1811  std::cerr << "tmp_ps = ";
1812  for( int i = 0; i < tmp_ps; i++ )
1813  std::cerr << tmp_procs[i] << " ";
1814  std::cerr << std::endl;
1815  std::cerr << "procs = ";
1816  for( std::set< unsigned int >::iterator sit2 = procs.begin(); sit2 != procs.end(); ++sit2 )
1817  std::cerr << *sit2 << " ";
1818  assert( false );
1819  }
1820 #endif
1821  tmp_procs[num_ents] = *sit;
1822  tmp_handles[num_ents] = 0;
1823  num_ents++;
1824  }
1825 
1826  // Put -1 after procs and 0 after handles
1827  if( MAX_SHARING_PROCS > num_ents )
1828  {
1829  tmp_procs[num_ents] = -1;
1830  tmp_handles[num_ents] = 0;
1831  }
1832 
1833  return MB_SUCCESS;
1834 }

References ErrorCode, get_sharing_data(), list_entities(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, moab::ProcConfig::proc_rank(), procConfig, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, and PSTATUS_SHARED.

Referenced by pack_entities().

◆ check_all_shared_handles() [1/2]

ErrorCode moab::ParallelComm::check_all_shared_handles ( bool  print_em = false)

Call exchange_all_shared_handles, then compare the results with tag data on local shared entities.

Definition at line 8541 of file ParallelComm.cpp.

8542 {
8543  // Get all shared ent data from other procs
8544  std::vector< std::vector< SharedEntityData > > shents( buffProcs.size() ), send_data( buffProcs.size() );
8545 
8546  ErrorCode result;
8547  bool done = false;
8548 
8549  while( !done )
8550  {
8551  result = check_local_shared();
8552  if( MB_SUCCESS != result )
8553  {
8554  done = true;
8555  continue;
8556  }
8557 
8558  result = pack_shared_handles( send_data );
8559  if( MB_SUCCESS != result )
8560  {
8561  done = true;
8562  continue;
8563  }
8564 
8565  result = exchange_all_shared_handles( send_data, shents );
8566  if( MB_SUCCESS != result )
8567  {
8568  done = true;
8569  continue;
8570  }
8571 
8572  if( !shents.empty() ) result = check_my_shared_handles( shents );
8573  done = true;
8574  }
8575 
8576  if( MB_SUCCESS != result && print_em )
8577  {
8578 #ifdef MOAB_HAVE_HDF5
8579  std::ostringstream ent_str;
8580  ent_str << "mesh." << procConfig.proc_rank() << ".h5m";
8581  mbImpl->write_mesh( ent_str.str().c_str() );
8582 #endif
8583  }
8584 
8585  return result;
8586 }

References buffProcs, check_local_shared(), check_my_shared_handles(), ErrorCode, exchange_all_shared_handles(), MB_SUCCESS, mbImpl, pack_shared_handles(), moab::ProcConfig::proc_rank(), procConfig, and moab::Interface::write_mesh().

Referenced by exchange_ghost_cells(), main(), resolve_shared_ents(), and moab::ScdInterface::tag_shared_vertices().
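
A short, hedged example of using this check as a sanity test after shared entities have been resolved; pcomm is assumed to be an already-initialized ParallelComm, and all ranks that share entities should make the call together:

    // Pass true so that, on failure, this rank writes its mesh to
    // mesh.<rank>.h5m (only when MOAB was built with HDF5 support).
    moab::ErrorCode rval = pcomm.check_all_shared_handles( true );
    if( moab::MB_SUCCESS != rval )
        std::cerr << "shared-handle check failed on rank " << pcomm.proc_config().proc_rank() << std::endl;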

◆ check_all_shared_handles() [2/2]

ErrorCode moab::ParallelComm::check_all_shared_handles ( ParallelComm **  pcs,
int  num_pcs 
)
static

Definition at line 8715 of file ParallelComm.cpp.

8716 {
8717  std::vector< std::vector< std::vector< SharedEntityData > > > shents, send_data;
8718  ErrorCode result = MB_SUCCESS, tmp_result;
8719 
8720  // Get all shared ent data from each proc to all other procs
8721  send_data.resize( num_pcs );
8722  for( int p = 0; p < num_pcs; p++ )
8723  {
8724  tmp_result = pcs[p]->pack_shared_handles( send_data[p] );
8725  if( MB_SUCCESS != tmp_result ) result = tmp_result;
8726  }
8727  if( MB_SUCCESS != result ) return result;
8728 
8729  // Move the data sorted by sending proc to data sorted by receiving proc
8730  shents.resize( num_pcs );
8731  for( int p = 0; p < num_pcs; p++ )
8732  shents[p].resize( pcs[p]->buffProcs.size() );
8733 
8734  for( int p = 0; p < num_pcs; p++ )
8735  {
8736  for( unsigned int idx_p = 0; idx_p < pcs[p]->buffProcs.size(); idx_p++ )
8737  {
8738  // Move send_data[p][to_p] to shents[to_p][idx_p]
8739  int to_p = pcs[p]->buffProcs[idx_p];
8740  int top_idx_p = pcs[to_p]->get_buffers( p );
8741  assert( -1 != top_idx_p );
8742  shents[to_p][top_idx_p] = send_data[p][idx_p];
8743  }
8744  }
8745 
8746  for( int p = 0; p < num_pcs; p++ )
8747  {
8748  std::ostringstream ostr;
8749  ostr << "Processor " << p << " bad entities:";
8750  tmp_result = pcs[p]->check_my_shared_handles( shents[p], ostr.str().c_str() );
8751  if( MB_SUCCESS != tmp_result ) result = tmp_result;
8752  }
8753 
8754  return result;
8755 }

References buffProcs, check_my_shared_handles(), ErrorCode, get_buffers(), MB_SUCCESS, and pack_shared_handles().

◆ check_clean_iface()

ErrorCode moab::ParallelComm::check_clean_iface ( Range &  allsent)
private

Definition at line 6256 of file ParallelComm.cpp.

6257 {
6258  // allsent is all entities I think are on interface; go over them, looking
6259  // for zero-valued handles, and fix any I find
6260 
6261  // Keep lists of entities for which the sharing data changed, grouped
6262  // by set of sharing procs.
6263  typedef std::map< ProcList, Range > procmap_t;
6264  procmap_t old_procs, new_procs;
6265 
6266  ErrorCode result = MB_SUCCESS;
6267  Range::iterator rit;
6268  Range::reverse_iterator rvit;
6269  unsigned char pstatus;
6270  int nump;
6271  ProcList sharedp;
6272  EntityHandle sharedh[MAX_SHARING_PROCS];
6273  for( rvit = allsent.rbegin(); rvit != allsent.rend(); ++rvit )
6274  {
6275  result = get_sharing_data( *rvit, sharedp.procs, sharedh, pstatus, nump );MB_CHK_SET_ERR( result, "Failed to get sharing data" );
6276  assert( "Should be shared with at least one other proc" &&
6277  ( nump > 1 || sharedp.procs[0] != (int)procConfig.proc_rank() ) );
6278  assert( nump == MAX_SHARING_PROCS || sharedp.procs[nump] == -1 );
6279 
6280  // Look for first null handle in list
6281  int idx = std::find( sharedh, sharedh + nump, (EntityHandle)0 ) - sharedh;
6282  if( idx == nump ) continue; // All handles are valid
6283 
6284  ProcList old_list( sharedp );
6285  std::sort( old_list.procs, old_list.procs + nump );
6286  old_procs[old_list].insert( *rvit );
6287 
6288  // Remove null handles and corresponding proc ranks from lists
6289  int new_nump = idx;
6290  bool removed_owner = !idx;
6291  for( ++idx; idx < nump; ++idx )
6292  {
6293  if( sharedh[idx] )
6294  {
6295  sharedh[new_nump] = sharedh[idx];
6296  sharedp.procs[new_nump] = sharedp.procs[idx];
6297  ++new_nump;
6298  }
6299  }
6300  sharedp.procs[new_nump] = -1;
6301 
6302  if( removed_owner && new_nump > 1 )
6303  {
6304  // The proc that we choose as the entity owner isn't sharing the
6305  // entity (doesn't have a copy of it). We need to pick a different
6306  // owner. Choose the proc with lowest rank.
6307  idx = std::min_element( sharedp.procs, sharedp.procs + new_nump ) - sharedp.procs;
6308  std::swap( sharedp.procs[0], sharedp.procs[idx] );
6309  std::swap( sharedh[0], sharedh[idx] );
6310  if( sharedp.procs[0] == (int)proc_config().proc_rank() ) pstatus &= ~PSTATUS_NOT_OWNED;
6311  }
6312 
6313  result = set_sharing_data( *rvit, pstatus, nump, new_nump, sharedp.procs, sharedh );MB_CHK_SET_ERR( result, "Failed to set sharing data in check_clean_iface" );
6314 
6315  if( new_nump > 1 )
6316  {
6317  if( new_nump == 2 )
6318  {
6319  if( sharedp.procs[1] != (int)proc_config().proc_rank() )
6320  {
6321  assert( sharedp.procs[0] == (int)proc_config().proc_rank() );
6322  sharedp.procs[0] = sharedp.procs[1];
6323  }
6324  sharedp.procs[1] = -1;
6325  }
6326  else
6327  {
6328  std::sort( sharedp.procs, sharedp.procs + new_nump );
6329  }
6330  new_procs[sharedp].insert( *rvit );
6331  }
6332  }
6333 
6334  if( old_procs.empty() )
6335  {
6336  assert( new_procs.empty() );
6337  return MB_SUCCESS;
6338  }
6339 
6340  // Update interface sets
6341  procmap_t::iterator pmit;
6342  // std::vector<unsigned char> pstatus_list;
6343  rit = interface_sets().begin();
6344  while( rit != interface_sets().end() )
6345  {
6346  result = get_sharing_data( *rit, sharedp.procs, sharedh, pstatus, nump );MB_CHK_SET_ERR( result, "Failed to get sharing data for interface set" );
6347  assert( nump != 2 );
6348  std::sort( sharedp.procs, sharedp.procs + nump );
6349  assert( nump == MAX_SHARING_PROCS || sharedp.procs[nump] == -1 );
6350 
6351  pmit = old_procs.find( sharedp );
6352  if( pmit != old_procs.end() )
6353  {
6354  result = mbImpl->remove_entities( *rit, pmit->second );MB_CHK_SET_ERR( result, "Failed to remove entities from interface set" );
6355  }
6356 
6357  pmit = new_procs.find( sharedp );
6358  if( pmit == new_procs.end() )
6359  {
6360  int count;
6361  result = mbImpl->get_number_entities_by_handle( *rit, count );MB_CHK_SET_ERR( result, "Failed to get number of entities in interface set" );
6362  if( !count )
6363  {
6364  result = mbImpl->delete_entities( &*rit, 1 );MB_CHK_SET_ERR( result, "Failed to delete entities from interface set" );
6365  rit = interface_sets().erase( rit );
6366  }
6367  else
6368  {
6369  ++rit;
6370  }
6371  }
6372  else
6373  {
6374  result = mbImpl->add_entities( *rit, pmit->second );MB_CHK_SET_ERR( result, "Failed to add entities to interface set" );
6375 
6376  // Remove those that we've processed so that we know which ones
6377  // are new.
6378  new_procs.erase( pmit );
6379  ++rit;
6380  }
6381  }
6382 
6383  // Create interface sets for new proc id combinations
6384  std::fill( sharedh, sharedh + MAX_SHARING_PROCS, 0 );
6385  for( pmit = new_procs.begin(); pmit != new_procs.end(); ++pmit )
6386  {
6387  EntityHandle new_set;
6388  result = mbImpl->create_meshset( MESHSET_SET, new_set );MB_CHK_SET_ERR( result, "Failed to create interface set" );
6389  interfaceSets.insert( new_set );
6390 
6391  // Add entities
6392  result = mbImpl->add_entities( new_set, pmit->second );MB_CHK_SET_ERR( result, "Failed to add entities to interface set" );
6393  // Tag set with the proc rank(s)
6394  assert( pmit->first.procs[0] >= 0 );
6395  pstatus = PSTATUS_SHARED | PSTATUS_INTERFACE;
6396  if( pmit->first.procs[1] == -1 )
6397  {
6398  int other = pmit->first.procs[0];
6399  assert( other != (int)procConfig.proc_rank() );
6400  result = mbImpl->tag_set_data( sharedp_tag(), &new_set, 1, pmit->first.procs );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
6401  sharedh[0] = 0;
6402  result = mbImpl->tag_set_data( sharedh_tag(), &new_set, 1, sharedh );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
6403  if( other < (int)proc_config().proc_rank() ) pstatus |= PSTATUS_NOT_OWNED;
6404  }
6405  else
6406  {
6407  result = mbImpl->tag_set_data( sharedps_tag(), &new_set, 1, pmit->first.procs );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
6408  result = mbImpl->tag_set_data( sharedhs_tag(), &new_set, 1, sharedh );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
6409  pstatus |= PSTATUS_MULTISHARED;
6410  if( pmit->first.procs[0] < (int)proc_config().proc_rank() ) pstatus |= PSTATUS_NOT_OWNED;
6411  }
6412 
6413  result = mbImpl->tag_set_data( pstatus_tag(), &new_set, 1, &pstatus );MB_CHK_SET_ERR( result, "Failed to tag interface set with pstatus" );
6414 
6415  // Set pstatus on all interface entities in set
6416  result = mbImpl->tag_clear_data( pstatus_tag(), pmit->second, &pstatus );MB_CHK_SET_ERR( result, "Failed to tag interface entities with pstatus" );
6417  }
6418 
6419  return MB_SUCCESS;
6420 }

References moab::Interface::add_entities(), moab::Range::begin(), moab::Interface::create_meshset(), moab::Interface::delete_entities(), moab::Range::erase(), ErrorCode, moab::Interface::get_number_entities_by_handle(), get_sharing_data(), moab::Range::insert(), interface_sets(), interfaceSets, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, MESHSET_SET, proc_config(), moab::ProcConfig::proc_rank(), procConfig, moab::ProcList::procs, PSTATUS_INTERFACE, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, pstatus_tag(), moab::Range::rbegin(), moab::Interface::remove_entities(), moab::Range::rend(), set_sharing_data(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), moab::Interface::tag_clear_data(), and moab::Interface::tag_set_data().

Referenced by exchange_ghost_cells().

◆ check_global_ids()

ErrorCode moab::ParallelComm::check_global_ids ( EntityHandle  this_set,
const int  dimension,
const int  start_id = 1,
const bool  largest_dim_only = true,
const bool  parallel = true,
const bool  owned_only = false 
)

check for global ids; based only on tag handle being there or not; if it's not there, create them for the specified dimensions

Parameters
owned_onlyIf true, do not get global IDs for non-owned entities from remote processors.

Definition at line 5532 of file ParallelComm.cpp.

5538 {
5539  // Global id tag
5540  Tag gid_tag = mbImpl->globalId_tag();
5541  int def_val = -1;
5542  Range dum_range;
5543 
5544  void* tag_ptr = &def_val;
5545  ErrorCode result = mbImpl->get_entities_by_type_and_tag( this_set, MBVERTEX, &gid_tag, &tag_ptr, 1, dum_range );MB_CHK_SET_ERR( result, "Failed to get entities by MBVERTEX type and gid tag" );
5546 
5547  if( !dum_range.empty() )
5548  {
5549  // Just created it, so we need global ids
5550  result = assign_global_ids( this_set, dimension, start_id, largest_dim_only, parallel, owned_only );MB_CHK_SET_ERR( result, "Failed assigning global ids" );
5551  }
5552 
5553  return MB_SUCCESS;
5554 }

References assign_global_ids(), moab::Range::empty(), ErrorCode, moab::Interface::get_entities_by_type_and_tag(), moab::Interface::globalId_tag(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, and MBVERTEX.

Referenced by moab::ReadParallel::load_file().
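
A minimal sketch of the typical call, relying on the default arguments listed above (start_id = 1, largest_dim_only = true, parallel = true, owned_only = false); pcomm is an existing ParallelComm:

    // Make sure GLOBAL_ID data exists for the 3D entities (and their
    // vertices) in the root set; IDs are assigned in parallel only if
    // the tag data is missing.
    moab::ErrorCode rval = pcomm.check_global_ids( 0, 3 );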

◆ check_local_shared()

ErrorCode moab::ParallelComm::check_local_shared ( )

Definition at line 8588 of file ParallelComm.cpp.

8589 {
8590  // Do some checks on shared entities to make sure things look
8591  // consistent
8592 
8593  // Check that non-vertex shared entities are shared by same procs as all
8594  // their vertices
8595  // std::pair<Range::const_iterator,Range::const_iterator> vert_it =
8596  // sharedEnts.equal_range(MBVERTEX);
8597  std::vector< EntityHandle > dum_connect;
8598  const EntityHandle* connect;
8599  int num_connect;
8600  int tmp_procs[MAX_SHARING_PROCS];
8601  EntityHandle tmp_hs[MAX_SHARING_PROCS];
8602  std::set< int > tmp_set, vset;
8603  int num_ps;
8604  ErrorCode result;
8605  unsigned char pstat;
8606  std::vector< EntityHandle > bad_ents;
8607  std::vector< std::string > errors;
8608 
8609  std::set< EntityHandle >::iterator vit;
8610  for( vit = sharedEnts.begin(); vit != sharedEnts.end(); ++vit )
8611  {
8612  // Get sharing procs for this ent
8613  result = get_sharing_data( *vit, tmp_procs, tmp_hs, pstat, num_ps );
8614  if( MB_SUCCESS != result )
8615  {
8616  bad_ents.push_back( *vit );
8617  errors.push_back( std::string( "Failure getting sharing data." ) );
8618  continue;
8619  }
8620 
8621  bool bad = false;
8622  // Entity must be shared
8623  if( !( pstat & PSTATUS_SHARED ) )
8624  errors.push_back( std::string( "Entity should be shared but isn't." ) ), bad = true;
8625 
8626  // If entity is not owned this must not be first proc
8627  if( pstat & PSTATUS_NOT_OWNED && tmp_procs[0] == (int)procConfig.proc_rank() )
8628  errors.push_back( std::string( "Entity not owned but is first proc." ) ), bad = true;
8629 
8630  // If entity is owned and multishared, this must be first proc
8631  if( !( pstat & PSTATUS_NOT_OWNED ) && pstat & PSTATUS_MULTISHARED &&
8632  ( tmp_procs[0] != (int)procConfig.proc_rank() || tmp_hs[0] != *vit ) )
8633  errors.push_back( std::string( "Entity owned and multishared but not first proc or not first handle." ) ),
8634  bad = true;
8635 
8636  if( bad )
8637  {
8638  bad_ents.push_back( *vit );
8639  continue;
8640  }
8641 
8642  EntityType type = mbImpl->type_from_handle( *vit );
8643  if( type == MBVERTEX || type == MBENTITYSET ) continue;
8644 
8645  // Copy element's procs to vset and save size
8646  int orig_ps = num_ps;
8647  vset.clear();
8648  std::copy( tmp_procs, tmp_procs + num_ps, std::inserter( vset, vset.begin() ) );
8649 
8650  // Get vertices for this ent and intersection of sharing procs
8651  result = mbImpl->get_connectivity( *vit, connect, num_connect, false, &dum_connect );
8652  if( MB_SUCCESS != result )
8653  {
8654  bad_ents.push_back( *vit );
8655  errors.push_back( std::string( "Failed to get connectivity." ) );
8656  continue;
8657  }
8658 
8659  for( int i = 0; i < num_connect; i++ )
8660  {
8661  result = get_sharing_data( connect[i], tmp_procs, NULL, pstat, num_ps );
8662  if( MB_SUCCESS != result )
8663  {
8664  bad_ents.push_back( *vit );
8665  continue;
8666  }
8667  if( !num_ps )
8668  {
8669  vset.clear();
8670  break;
8671  }
8672  std::sort( tmp_procs, tmp_procs + num_ps );
8673  tmp_set.clear();
8674  std::set_intersection( tmp_procs, tmp_procs + num_ps, vset.begin(), vset.end(),
8675  std::inserter( tmp_set, tmp_set.end() ) );
8676  vset.swap( tmp_set );
8677  if( vset.empty() ) break;
8678  }
8679 
8680  // Intersect them; should be the same size as orig_ps
8681  tmp_set.clear();
8682  std::set_intersection( tmp_procs, tmp_procs + num_ps, vset.begin(), vset.end(),
8683  std::inserter( tmp_set, tmp_set.end() ) );
8684  if( orig_ps != (int)tmp_set.size() )
8685  {
8686  errors.push_back( std::string( "Vertex proc set not same size as entity proc set." ) );
8687  bad_ents.push_back( *vit );
8688  for( int i = 0; i < num_connect; i++ )
8689  {
8690  bad_ents.push_back( connect[i] );
8691  errors.push_back( std::string( "vertex in connect" ) );
8692  }
8693  }
8694  }
8695 
8696  if( !bad_ents.empty() )
8697  {
8698  std::cout << "Found bad entities in check_local_shared, proc rank " << procConfig.proc_rank() << ","
8699  << std::endl;
8700  std::vector< std::string >::iterator sit;
8701  std::vector< EntityHandle >::iterator rit;
8702  for( rit = bad_ents.begin(), sit = errors.begin(); rit != bad_ents.end(); ++rit, ++sit )
8703  {
8704  list_entities( &( *rit ), 1 );
8705  std::cout << "Reason: " << *sit << std::endl;
8706  }
8707  return MB_FAILURE;
8708  }
8709 
8710  // To do: check interface sets
8711 
8712  return MB_SUCCESS;
8713 }

References ErrorCode, moab::Interface::get_connectivity(), get_sharing_data(), list_entities(), MAX_SHARING_PROCS, MB_SUCCESS, MBENTITYSET, mbImpl, MBVERTEX, moab::ProcConfig::proc_rank(), procConfig, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, sharedEnts, and moab::Interface::type_from_handle().

Referenced by check_all_shared_handles().

◆ check_my_shared_handles()

ErrorCode moab::ParallelComm::check_my_shared_handles ( std::vector< std::vector< SharedEntityData > > &  shents,
const char *  prefix = NULL 
)

Definition at line 8757 of file ParallelComm.cpp.

8759 {
8760  // Now check against what I think data should be
8761  // Get all shared entities
8762  ErrorCode result;
8763  Range all_shared;
8764  std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( all_shared ) );
8765  std::vector< EntityHandle > dum_vec;
8766  all_shared.erase( all_shared.upper_bound( MBPOLYHEDRON ), all_shared.end() );
8767 
8768  Range bad_ents, local_shared;
8769  std::vector< SharedEntityData >::iterator vit;
8770  unsigned char tmp_pstat;
8771  for( unsigned int i = 0; i < shents.size(); i++ )
8772  {
8773  int other_proc = buffProcs[i];
8774  result = get_shared_entities( other_proc, local_shared );
8775  if( MB_SUCCESS != result ) return result;
8776  for( vit = shents[i].begin(); vit != shents[i].end(); ++vit )
8777  {
8778  EntityHandle localh = vit->local, remoteh = vit->remote, dumh;
8779  local_shared.erase( localh );
8780  result = get_remote_handles( true, &localh, &dumh, 1, other_proc, dum_vec );
8781  if( MB_SUCCESS != result || dumh != remoteh ) bad_ents.insert( localh );
8782  result = get_pstatus( localh, tmp_pstat );
8783  if( MB_SUCCESS != result || ( !( tmp_pstat & PSTATUS_NOT_OWNED ) && (unsigned)vit->owner != rank() ) ||
8784  ( tmp_pstat & PSTATUS_NOT_OWNED && (unsigned)vit->owner == rank() ) )
8785  bad_ents.insert( localh );
8786  }
8787 
8788  if( !local_shared.empty() ) bad_ents.merge( local_shared );
8789  }
8790 
8791  if( !bad_ents.empty() )
8792  {
8793  if( prefix ) std::cout << prefix << std::endl;
8794  list_entities( bad_ents );
8795  return MB_FAILURE;
8796  }
8797  else
8798  return MB_SUCCESS;
8799 }

References buffProcs, moab::Range::empty(), moab::Range::end(), moab::Range::erase(), ErrorCode, get_pstatus(), get_remote_handles(), get_shared_entities(), moab::Range::insert(), list_entities(), MB_SUCCESS, MBPOLYHEDRON, moab::Range::merge(), PSTATUS_NOT_OWNED, rank(), sharedEnts, and moab::Range::upper_bound().

Referenced by check_all_shared_handles().

◆ check_sent_ents()

ErrorCode moab::ParallelComm::check_sent_ents ( Range &  allsent)
private

check entities to make sure there are no zero-valued remote handles where they shouldn't be

Definition at line 7323 of file ParallelComm.cpp.

7324 {
7325  // Check entities to make sure there are no zero-valued remote handles
7326  // where they shouldn't be
7327  std::vector< unsigned char > pstat( allsent.size() );
7328  ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), allsent, &pstat[0] );MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" );
7329  std::vector< EntityHandle > handles( allsent.size() );
7330  result = mbImpl->tag_get_data( sharedh_tag(), allsent, &handles[0] );MB_CHK_SET_ERR( result, "Failed to get sharedh tag data" );
7331  std::vector< int > procs( allsent.size() );
7332  result = mbImpl->tag_get_data( sharedp_tag(), allsent, &procs[0] );MB_CHK_SET_ERR( result, "Failed to get sharedp tag data" );
7333 
7334  Range bad_entities;
7335 
7336  Range::iterator rit;
7337  unsigned int i;
7338  EntityHandle dum_hs[MAX_SHARING_PROCS];
7339  int dum_ps[MAX_SHARING_PROCS];
7340 
7341  for( rit = allsent.begin(), i = 0; rit != allsent.end(); ++rit, i++ )
7342  {
7343  if( -1 != procs[i] && 0 == handles[i] )
7344  bad_entities.insert( *rit );
7345  else
7346  {
7347  // Might be multi-shared...
7348  result = mbImpl->tag_get_data( sharedps_tag(), &( *rit ), 1, dum_ps );
7349  if( MB_TAG_NOT_FOUND == result )
7350  continue;
7351  else if( MB_SUCCESS != result )
7352  MB_SET_ERR( result, "Failed to get sharedps tag data" );
7353  result = mbImpl->tag_get_data( sharedhs_tag(), &( *rit ), 1, dum_hs );MB_CHK_SET_ERR( result, "Failed to get sharedhs tag data" );
7354 
7355  // Find first non-set proc
7356  int* ns_proc = std::find( dum_ps, dum_ps + MAX_SHARING_PROCS, -1 );
7357  int num_procs = ns_proc - dum_ps;
7358  assert( num_procs <= MAX_SHARING_PROCS );
7359  // Now look for zero handles in active part of dum_hs
7360  EntityHandle* ns_handle = std::find( dum_hs, dum_hs + num_procs, 0 );
7361  int num_handles = ns_handle - dum_hs;
7362  assert( num_handles <= num_procs );
7363  if( num_handles != num_procs ) bad_entities.insert( *rit );
7364  }
7365  }
7366 
7367  return MB_SUCCESS;
7368 }

References moab::Range::begin(), moab::Range::end(), ErrorCode, moab::Range::insert(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, MB_TAG_NOT_FOUND, mbImpl, pstatus_tag(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), moab::Range::size(), and moab::Interface::tag_get_data().

Referenced by exchange_ghost_cells(), and exchange_owned_mesh().

◆ clean_shared_tags()

ErrorCode moab::ParallelComm::clean_shared_tags ( std::vector< Range * > &  exchange_ents)

Definition at line 8842 of file ParallelComm.cpp.

8843 {
8844  for( unsigned int i = 0; i < exchange_ents.size(); i++ )
8845  {
8846  Range* ents = exchange_ents[i];
8847  int num_ents = ents->size();
8848  Range::iterator it = ents->begin();
8849 
8850  for( int n = 0; n < num_ents; n++ )
8851  {
8852  int sharing_proc;
8853  ErrorCode result = mbImpl->tag_get_data( sharedp_tag(), &( *ents->begin() ), 1, &sharing_proc );
8854  if( result != MB_TAG_NOT_FOUND && sharing_proc == -1 )
8855  {
8856  result = mbImpl->tag_delete_data( sharedp_tag(), &( *it ), 1 );MB_CHK_SET_ERR( result, "Failed to delete sharedp tag data" );
8857  result = mbImpl->tag_delete_data( sharedh_tag(), &( *it ), 1 );MB_CHK_SET_ERR( result, "Failed to delete sharedh tag data" );
8858  result = mbImpl->tag_delete_data( pstatus_tag(), &( *it ), 1 );MB_CHK_SET_ERR( result, "Failed to delete pstatus tag data" );
8859  }
8860  ++it;
8861  }
8862  }
8863 
8864  return MB_SUCCESS;
8865 }

References moab::Range::begin(), ErrorCode, MB_CHK_SET_ERR, MB_SUCCESS, MB_TAG_NOT_FOUND, mbImpl, pstatus_tag(), sharedh_tag(), sharedp_tag(), moab::Range::size(), moab::Interface::tag_delete_data(), and moab::Interface::tag_get_data().
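
An illustrative, hedged fragment: before handing a group of locally owned entities to another use (for example during a repartition), the stale per-entity sharing tags can be reset. The ranges named here are hypothetical; pcomm is an existing ParallelComm.

    moab::Range owned_elems, owned_verts;          // hypothetical ranges of local entities
    std::vector< moab::Range* > exchange_ents;
    exchange_ents.push_back( &owned_elems );
    exchange_ents.push_back( &owned_verts );
    // Deletes sharedp/sharedh/pstatus tag data on entities whose sharing
    // processor is recorded as -1 (i.e. not actually shared anymore).
    moab::ErrorCode rval = pcomm.clean_shared_tags( exchange_ents );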

◆ collective_sync_partition()

ErrorCode moab::ParallelComm::collective_sync_partition ( )

Definition at line 8268 of file ParallelComm.cpp.

8269 {
8270  int count = partition_sets().size();
8271  globalPartCount = 0;
8272  int err = MPI_Allreduce( &count, &globalPartCount, 1, MPI_INT, MPI_SUM, proc_config().proc_comm() );
8273  return err ? MB_FAILURE : MB_SUCCESS;
8274 }

References globalPartCount, MB_SUCCESS, partition_sets(), proc_config(), and moab::Range::size().
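
A brief usage note, hedged: because the routine wraps MPI_Allreduce, every rank in the communicator must enter it, typically after ranks have created or destroyed part sets.

    // Collective over proc_config().proc_comm(); refreshes the cached
    // global count of partition sets.
    moab::ErrorCode rval = pcomm.collective_sync_partition();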

◆ comm()

◆ correct_thin_ghost_layers()

ErrorCode moab::ParallelComm::correct_thin_ghost_layers ( )

Definition at line 9346 of file ParallelComm.cpp.

9347 {
9348 
9349  // Get all shared ent data from other procs
9350  std::vector< std::vector< SharedEntityData > > shents( buffProcs.size() ), send_data( buffProcs.size() );
9351 
9352  // will work only on multi-shared tags sharedps_tag(), sharedhs_tag();
9353 
9354  /*
9355  * domain0 | domain1 | domain2 | domain3
9356  * vertices from domain 1 and 2 are visible from both 0 and 3, but
9357  * domain 0 might not have info about multi-sharing from domain 3
9358  * so we will force that domain 0 vertices owned by 1 and 2 have information
9359  * about the domain 3 sharing
9360  *
9361  * SharedEntityData will have :
9362  * struct SharedEntityData {
9363  EntityHandle local; // this is same meaning, for the proc we sent to, it is local
9364  EntityHandle remote; // this will be the far away handle that will need to be added
9365  EntityID owner; // this will be the remote proc
9366  };
9367  // so we need to add data like this:
9368  a multishared entity owned by proc x will have data like
9369  multishared procs: proc x, a, b, c
9370  multishared handles: h1, h2, h3, h4
9371  we will need to send data from proc x like this:
9372  to proc a we will send
9373  (h2, h3, b), (h2, h4, c)
9374  to proc b we will send
9375  (h3, h2, a), (h3, h4, c)
9376  to proc c we will send
9377  (h4, h2, a), (h4, h3, b)
9378  *
9379  */
9380 
9381  ErrorCode result = MB_SUCCESS;
9382  int ent_procs[MAX_SHARING_PROCS + 1];
9383  EntityHandle handles[MAX_SHARING_PROCS + 1];
9384  int num_sharing;
9385  SharedEntityData tmp;
9386 
9387  for( std::set< EntityHandle >::iterator i = sharedEnts.begin(); i != sharedEnts.end(); ++i )
9388  {
9389 
9390  unsigned char pstat;
9391  result = get_sharing_data( *i, ent_procs, handles, pstat, num_sharing );MB_CHK_SET_ERR( result, "can't get sharing data" );
9392  if( !( pstat & PSTATUS_MULTISHARED ) ||
9393  num_sharing <= 2 ) // if not multishared, skip, it should have no problems
9394  continue;
9395  // we should skip the ones that are not owned locally
9396  // the owned ones will have the most multi-shared info, because the info comes from other
9397  // remote processors
9398  if( pstat & PSTATUS_NOT_OWNED ) continue;
9399  for( int j = 1; j < num_sharing; j++ )
9400  {
9401  // we will send to proc
9402  int send_to_proc = ent_procs[j]; //
9403  tmp.local = handles[j];
9404  int ind = get_buffers( send_to_proc );
9405  assert( -1 != ind ); // THIS SHOULD NEVER HAPPEN
9406  for( int k = 1; k < num_sharing; k++ )
9407  {
9408  // do not send to self proc
9409  if( j == k ) continue;
9410  tmp.remote = handles[k]; // this will be the handle of entity on proc
9411  tmp.owner = ent_procs[k];
9412  send_data[ind].push_back( tmp );
9413  }
9414  }
9415  }
9416 
9417  result = exchange_all_shared_handles( send_data, shents );MB_CHK_ERR( result );
9418 
9419  // loop over all shents and add if vertex type, add if missing
9420  for( size_t i = 0; i < shents.size(); i++ )
9421  {
9422  std::vector< SharedEntityData >& shEnts = shents[i];
9423  for( size_t j = 0; j < shEnts.size(); j++ )
9424  {
9425  tmp = shEnts[j];
9426  // basically, check the shared data for tmp.local entity
9427  // it should have inside the tmp.owner and tmp.remote
9428  EntityHandle eh = tmp.local;
9429  unsigned char pstat;
9430  result = get_sharing_data( eh, ent_procs, handles, pstat, num_sharing );MB_CHK_SET_ERR( result, "can't get sharing data" );
9431  // see if the proc tmp.owner is in the list of ent_procs; if not, we have to increase
9432  // handles, and ent_procs; and set
9433 
9434  int proc_remote = tmp.owner; //
9435  if( std::find( ent_procs, ent_procs + num_sharing, proc_remote ) == ent_procs + num_sharing )
9436  {
9437  // so we did not find on proc
9438 #ifndef NDEBUG
9439  if( myDebug->get_verbosity() == 3 )
9440  std::cout << "THIN GHOST: we did not find on proc " << rank() << " for shared ent " << eh
9441  << " the proc " << proc_remote << "\n";
9442 #endif
9443  // increase num_sharing, and set the multi-shared tags
9444  if( num_sharing >= MAX_SHARING_PROCS ) return MB_FAILURE;
9445  handles[num_sharing] = tmp.remote;
9446  handles[num_sharing + 1] = 0; // end of list
9447  ent_procs[num_sharing] = tmp.owner;
9448  ent_procs[num_sharing + 1] = -1; // this should be already set
9449  result = mbImpl->tag_set_data( sharedps_tag(), &eh, 1, ent_procs );MB_CHK_SET_ERR( result, "Failed to set sharedps tag data" );
9450  result = mbImpl->tag_set_data( sharedhs_tag(), &eh, 1, handles );MB_CHK_SET_ERR( result, "Failed to set sharedhs tag data" );
9451  if( 2 == num_sharing ) // it means the sharedp and sharedh tags were set with a
9452  // value non default
9453  {
9454  // so entity eh was simple shared before, we need to set those dense tags back
9455  // to default
9456  // values
9457  EntityHandle zero = 0;
9458  int no_proc = -1;
9459  result = mbImpl->tag_set_data( sharedp_tag(), &eh, 1, &no_proc );MB_CHK_SET_ERR( result, "Failed to set sharedp tag data" );
9460  result = mbImpl->tag_set_data( sharedh_tag(), &eh, 1, &zero );MB_CHK_SET_ERR( result, "Failed to set sharedh tag data" );
9461  // also, add multishared pstatus tag
9462  // also add multishared status to pstatus
9463  pstat = pstat | PSTATUS_MULTISHARED;
9464  result = mbImpl->tag_set_data( pstatus_tag(), &eh, 1, &pstat );MB_CHK_SET_ERR( result, "Failed to set pstatus tag data" );
9465  }
9466  }
9467  }
9468  }
9469  return MB_SUCCESS;
9470 }

References buffProcs, ErrorCode, exchange_all_shared_handles(), get_buffers(), get_sharing_data(), moab::DebugOutput::get_verbosity(), moab::ParallelComm::SharedEntityData::local, MAX_SHARING_PROCS, MB_CHK_ERR, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, myDebug, moab::ParallelComm::SharedEntityData::owner, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, pstatus_tag(), rank(), moab::ParallelComm::SharedEntityData::remote, sharedEnts, sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), and moab::Interface::tag_set_data().

Referenced by moab::ReadParallel::load_file().
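
A hedged sketch of when this is normally invoked (as moab::ReadParallel::load_file does), so that vertices shared across a thin ghost layer learn about all of their remote copies:

    // Call collectively after exchange_ghost_cells() when only a thin
    // (single) ghost layer was requested and multi-sharing information
    // may be incomplete on some ranks.
    moab::ErrorCode rval = pcomm.correct_thin_ghost_layers();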

◆ create_iface_pc_links()

ErrorCode moab::ParallelComm::create_iface_pc_links ( )
private

Definition at line 5093 of file ParallelComm.cpp.

5094 {
5095  // Now that we've resolved the entities in the iface sets,
5096  // set parent/child links between the iface sets
5097 
5098  // First tag all entities in the iface sets
5099  Tag tmp_iface_tag;
5100  EntityHandle tmp_iface_set = 0;
5101  ErrorCode result = mbImpl->tag_get_handle( "__tmp_iface", 1, MB_TYPE_HANDLE, tmp_iface_tag,
5102  MB_TAG_DENSE | MB_TAG_CREAT, &tmp_iface_set );MB_CHK_SET_ERR( result, "Failed to create temporary interface set tag" );
5103 
5104  Range iface_ents;
5105  std::vector< EntityHandle > tag_vals;
5106  Range::iterator rit;
5107 
5108  for( rit = interfaceSets.begin(); rit != interfaceSets.end(); ++rit )
5109  {
5110  // tag entities with interface set
5111  iface_ents.clear();
5112  result = mbImpl->get_entities_by_handle( *rit, iface_ents );MB_CHK_SET_ERR( result, "Failed to get entities in interface set" );
5113 
5114  if( iface_ents.empty() ) continue;
5115 
5116  tag_vals.resize( iface_ents.size() );
5117  std::fill( tag_vals.begin(), tag_vals.end(), *rit );
5118  result = mbImpl->tag_set_data( tmp_iface_tag, iface_ents, &tag_vals[0] );MB_CHK_SET_ERR( result, "Failed to tag iface entities with interface set" );
5119  }
5120 
5121  // Now go back through interface sets and add parent/child links
5122  Range tmp_ents2;
5123  for( int d = 2; d >= 0; d-- )
5124  {
5125  for( rit = interfaceSets.begin(); rit != interfaceSets.end(); ++rit )
5126  {
5127  // Get entities on this interface
5128  iface_ents.clear();
5129  result = mbImpl->get_entities_by_handle( *rit, iface_ents, true );MB_CHK_SET_ERR( result, "Failed to get entities by handle" );
5130  if( iface_ents.empty() || mbImpl->dimension_from_handle( *iface_ents.rbegin() ) != d ) continue;
5131 
5132  // Get higher-dimensional entities and their interface sets
5133  result = mbImpl->get_adjacencies( &( *iface_ents.begin() ), 1, d + 1, false, tmp_ents2 );MB_CHK_SET_ERR( result, "Failed to get adjacencies for interface sets" );
5134  tag_vals.resize( tmp_ents2.size() );
5135  result = mbImpl->tag_get_data( tmp_iface_tag, tmp_ents2, &tag_vals[0] );MB_CHK_SET_ERR( result, "Failed to get tmp iface tag for interface sets" );
5136 
5137  // Go through and for any on interface make it a parent
5138  EntityHandle last_set = 0;
5139  for( unsigned int i = 0; i < tag_vals.size(); i++ )
5140  {
5141  if( tag_vals[i] && tag_vals[i] != last_set )
5142  {
5143  result = mbImpl->add_parent_child( tag_vals[i], *rit );MB_CHK_SET_ERR( result, "Failed to add parent/child link for interface set" );
5144  last_set = tag_vals[i];
5145  }
5146  }
5147  }
5148  }
5149 
5150  // Delete the temporary tag
5151  result = mbImpl->tag_delete( tmp_iface_tag );MB_CHK_SET_ERR( result, "Failed to delete tmp iface tag" );
5152 
5153  return MB_SUCCESS;
5154 }

References moab::Interface::add_parent_child(), moab::Range::begin(), moab::Range::clear(), moab::Interface::dimension_from_handle(), moab::Range::empty(), moab::Range::end(), ErrorCode, moab::Interface::get_adjacencies(), moab::Interface::get_entities_by_handle(), interfaceSets, MB_CHK_SET_ERR, MB_SUCCESS, MB_TAG_CREAT, MB_TAG_DENSE, MB_TYPE_HANDLE, mbImpl, moab::Range::rbegin(), moab::Range::size(), moab::Interface::tag_delete(), moab::Interface::tag_get_data(), moab::Interface::tag_get_handle(), and moab::Interface::tag_set_data().

Referenced by resolve_shared_ents(), and moab::ParallelMergeMesh::TagSharedElements().

◆ create_interface_sets() [1/2]

ErrorCode moab::ParallelComm::create_interface_sets ( EntityHandle  this_set,
int  resolve_dim,
int  shared_dim 
)

Definition at line 4981 of file ParallelComm.cpp.

4982 {
4983  std::map< std::vector< int >, std::vector< EntityHandle > > proc_nvecs;
4984 
4985  // Build up the list of shared entities
4986  int procs[MAX_SHARING_PROCS];
4987  EntityHandle handles[MAX_SHARING_PROCS];
4988  ErrorCode result;
4989  int nprocs;
4990  unsigned char pstat;
4991  for( std::set< EntityHandle >::iterator vit = sharedEnts.begin(); vit != sharedEnts.end(); ++vit )
4992  {
4993  if( shared_dim != -1 && mbImpl->dimension_from_handle( *vit ) > shared_dim ) continue;
4994  result = get_sharing_data( *vit, procs, handles, pstat, nprocs );MB_CHK_SET_ERR( result, "Failed to get sharing data" );
4995  std::sort( procs, procs + nprocs );
4996  std::vector< int > tmp_procs( procs, procs + nprocs );
4997  assert( tmp_procs.size() != 2 );
4998  proc_nvecs[tmp_procs].push_back( *vit );
4999  }
5000 
5001  Skinner skinner( mbImpl );
5002  Range skin_ents[4];
5003  result = mbImpl->get_entities_by_dimension( this_set, resolve_dim, skin_ents[resolve_dim] );MB_CHK_SET_ERR( result, "Failed to get skin entities by dimension" );
5004  result =
5005  skinner.find_skin( this_set, skin_ents[resolve_dim], false, skin_ents[resolve_dim - 1], 0, true, true, true );MB_CHK_SET_ERR( result, "Failed to find skin" );
5006  if( shared_dim > 1 )
5007  {
5008  result = mbImpl->get_adjacencies( skin_ents[resolve_dim - 1], resolve_dim - 2, true, skin_ents[resolve_dim - 2],
5009  Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get skin adjacencies" );
5010  }
5011 
5012  result = get_proc_nvecs( resolve_dim, shared_dim, skin_ents, proc_nvecs );
5013 
5014  return create_interface_sets( proc_nvecs );
5015 }

References create_interface_sets(), moab::Interface::dimension_from_handle(), ErrorCode, moab::Skinner::find_skin(), moab::Interface::get_adjacencies(), moab::Interface::get_entities_by_dimension(), get_proc_nvecs(), get_sharing_data(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, mbImpl, sharedEnts, and moab::Interface::UNION.

◆ create_interface_sets() [2/2]

ErrorCode moab::ParallelComm::create_interface_sets ( std::map< std::vector< int >, std::vector< EntityHandle > > &  proc_nvecs)

Definition at line 5017 of file ParallelComm.cpp.

5018 {
5019  if( proc_nvecs.empty() ) return MB_SUCCESS;
5020 
5021  int proc_ids[MAX_SHARING_PROCS];
5022  EntityHandle proc_handles[MAX_SHARING_PROCS];
5023  Tag shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag;
5024  ErrorCode result = get_shared_proc_tags( shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag );MB_CHK_SET_ERR( result, "Failed to get shared proc tags in create_interface_sets" );
5025  Range::iterator rit;
5026 
5027  // Create interface sets, tag them, and tag their contents with iface set tag
5028  std::vector< unsigned char > pstatus;
5029  for( std::map< std::vector< int >, std::vector< EntityHandle > >::iterator vit = proc_nvecs.begin();
5030  vit != proc_nvecs.end(); ++vit )
5031  {
5032  // Create the set
5033  EntityHandle new_set;
5034  result = mbImpl->create_meshset( MESHSET_SET, new_set );MB_CHK_SET_ERR( result, "Failed to create interface set" );
5035  interfaceSets.insert( new_set );
5036 
5037  // Add entities
5038  assert( !vit->second.empty() );
5039  result = mbImpl->add_entities( new_set, &( vit->second )[0], ( vit->second ).size() );MB_CHK_SET_ERR( result, "Failed to add entities to interface set" );
5040  // Tag set with the proc rank(s)
5041  if( vit->first.size() == 1 )
5042  {
5043  assert( ( vit->first )[0] != (int)procConfig.proc_rank() );
5044  result = mbImpl->tag_set_data( shp_tag, &new_set, 1, &( vit->first )[0] );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
5045  proc_handles[0] = 0;
5046  result = mbImpl->tag_set_data( shh_tag, &new_set, 1, proc_handles );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
5047  }
5048  else
5049  {
5050  // Pad tag data out to MAX_SHARING_PROCS with -1
5051  if( vit->first.size() > MAX_SHARING_PROCS )
5052  {
5053  std::cerr << "Exceeded MAX_SHARING_PROCS for " << CN::EntityTypeName( TYPE_FROM_HANDLE( new_set ) )
5054  << ' ' << ID_FROM_HANDLE( new_set ) << " on process " << proc_config().proc_rank()
5055  << std::endl;
5056  std::cerr.flush();
5057  MPI_Abort( proc_config().proc_comm(), 66 );
5058  }
5059  // assert(vit->first.size() <= MAX_SHARING_PROCS);
5060  std::copy( vit->first.begin(), vit->first.end(), proc_ids );
5061  std::fill( proc_ids + vit->first.size(), proc_ids + MAX_SHARING_PROCS, -1 );
5062  result = mbImpl->tag_set_data( shps_tag, &new_set, 1, proc_ids );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
5063  unsigned int ind = std::find( proc_ids, proc_ids + vit->first.size(), procConfig.proc_rank() ) - proc_ids;
5064  assert( ind < vit->first.size() );
5065  std::fill( proc_handles, proc_handles + MAX_SHARING_PROCS, 0 );
5066  proc_handles[ind] = new_set;
5067  result = mbImpl->tag_set_data( shhs_tag, &new_set, 1, proc_handles );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
5068  }
5069 
5070  // Get the owning proc, then set the pstatus tag on iface set
5071  int min_proc = ( vit->first )[0];
5072  unsigned char pval = ( PSTATUS_SHARED | PSTATUS_INTERFACE );
5073  if( min_proc < (int)procConfig.proc_rank() ) pval |= PSTATUS_NOT_OWNED;
5074  if( vit->first.size() > 1 ) pval |= PSTATUS_MULTISHARED;
5075  result = mbImpl->tag_set_data( pstat_tag, &new_set, 1, &pval );MB_CHK_SET_ERR( result, "Failed to tag interface set with pstatus" );
5076 
5077  // Tag the vertices with the same thing
5078  pstatus.clear();
5079  std::vector< EntityHandle > verts;
5080  for( std::vector< EntityHandle >::iterator v2it = ( vit->second ).begin(); v2it != ( vit->second ).end();
5081  ++v2it )
5082  if( mbImpl->type_from_handle( *v2it ) == MBVERTEX ) verts.push_back( *v2it );
5083  pstatus.resize( verts.size(), pval );
5084  if( !verts.empty() )
5085  {
5086  result = mbImpl->tag_set_data( pstat_tag, &verts[0], verts.size(), &pstatus[0] );MB_CHK_SET_ERR( result, "Failed to tag interface set vertices with pstatus" );
5087  }
5088  }
5089 
5090  return MB_SUCCESS;
5091 }

References moab::Interface::add_entities(), moab::Interface::create_meshset(), moab::CN::EntityTypeName(), ErrorCode, moab::GeomUtil::first(), get_shared_proc_tags(), moab::ID_FROM_HANDLE(), moab::Range::insert(), interfaceSets, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, MBVERTEX, MESHSET_SET, proc_config(), moab::ProcConfig::proc_rank(), procConfig, PSTATUS_INTERFACE, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, moab::Interface::tag_set_data(), moab::Interface::type_from_handle(), and moab::TYPE_FROM_HANDLE().

Referenced by create_interface_sets(), exchange_owned_meshs(), resolve_shared_ents(), moab::ScdInterface::tag_shared_vertices(), and moab::ParallelMergeMesh::TagSharedElements().

◆ create_part()

ErrorCode moab::ParallelComm::create_part ( EntityHandle &  part_out)

Definition at line 8210 of file ParallelComm.cpp.

8211 {
8212  // Mark as invalid so we know that it needs to be updated
8213  globalPartCount = -1;
8214 
8215  // Create set representing part
8216  ErrorCode rval = mbImpl->create_meshset( MESHSET_SET, set_out );
8217  if( MB_SUCCESS != rval ) return rval;
8218 
8219  // Set tag on set
8220  int val = proc_config().proc_rank();
8221  rval = mbImpl->tag_set_data( part_tag(), &set_out, 1, &val );
8222 
8223  if( MB_SUCCESS != rval )
8224  {
8225  mbImpl->delete_entities( &set_out, 1 );
8226  return rval;
8227  }
8228 
8229  if( get_partitioning() )
8230  {
8231  rval = mbImpl->add_entities( get_partitioning(), &set_out, 1 );
8232  if( MB_SUCCESS != rval )
8233  {
8234  mbImpl->delete_entities( &set_out, 1 );
8235  return rval;
8236  }
8237  }
8238 
8239  moab::Range& pSets = this->partition_sets();
8240  if( pSets.index( set_out ) < 0 )
8241  {
8242  pSets.insert( set_out );
8243  }
8244 
8245  return MB_SUCCESS;
8246 }

References moab::Interface::add_entities(), moab::Interface::create_meshset(), moab::Interface::delete_entities(), ErrorCode, get_partitioning(), globalPartCount, moab::Range::index(), moab::Range::insert(), MB_SUCCESS, mbImpl, MESHSET_SET, part_tag(), partition_sets(), proc_config(), moab::ProcConfig::proc_rank(), and moab::Interface::tag_set_data().
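
A minimal sketch (hypothetical entity range; mb and pcomm set up as in the earlier broadcast example) of creating a part for this rank, filling it, and removing it again with destroy_part():

    moab::EntityHandle part;
    moab::ErrorCode rval = pcomm.create_part( part );   // new set is tagged with this rank
    moab::Range my_elems;                                // hypothetical range of local elements
    rval = mb.add_entities( part, my_elems );            // populate the part set
    // ... later, if the part is no longer needed:
    rval = pcomm.destroy_part( part );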

◆ define_mpe()

void moab::ParallelComm::define_mpe ( )
private

Definition at line 4228 of file ParallelComm.cpp.

4229 {
4230 #ifdef MOAB_HAVE_MPE
4231  if( myDebug->get_verbosity() == 2 )
4232  {
4233  // Define mpe states used for logging
4234  int success;
4235  MPE_Log_get_state_eventIDs( &IFACE_START, &IFACE_END );
4236  MPE_Log_get_state_eventIDs( &GHOST_START, &GHOST_END );
4237  MPE_Log_get_state_eventIDs( &SHAREDV_START, &SHAREDV_END );
4238  MPE_Log_get_state_eventIDs( &RESOLVE_START, &RESOLVE_END );
4239  MPE_Log_get_state_eventIDs( &ENTITIES_START, &ENTITIES_END );
4240  MPE_Log_get_state_eventIDs( &RHANDLES_START, &RHANDLES_END );
4241  MPE_Log_get_state_eventIDs( &OWNED_START, &OWNED_END );
4242  success = MPE_Describe_state( IFACE_START, IFACE_END, "Resolve interface ents", "green" );
4243  assert( MPE_LOG_OK == success );
4244  success = MPE_Describe_state( GHOST_START, GHOST_END, "Exchange ghost ents", "red" );
4245  assert( MPE_LOG_OK == success );
4246  success = MPE_Describe_state( SHAREDV_START, SHAREDV_END, "Resolve interface vertices", "blue" );
4247  assert( MPE_LOG_OK == success );
4248  success = MPE_Describe_state( RESOLVE_START, RESOLVE_END, "Resolve shared ents", "purple" );
4249  assert( MPE_LOG_OK == success );
4250  success = MPE_Describe_state( ENTITIES_START, ENTITIES_END, "Exchange shared ents", "yellow" );
4251  assert( MPE_LOG_OK == success );
4252  success = MPE_Describe_state( RHANDLES_START, RHANDLES_END, "Remote handles", "cyan" );
4253  assert( MPE_LOG_OK == success );
4254  success = MPE_Describe_state( OWNED_START, OWNED_END, "Exchange owned ents", "black" );
4255  assert( MPE_LOG_OK == success );
4256  }
4257 #endif
4258 }

References moab::DebugOutput::get_verbosity(), MPE_Describe_state, MPE_LOG_OK, and myDebug.

Referenced by resolve_shared_ents().

◆ delete_all_buffers()

void moab::ParallelComm::delete_all_buffers ( )
inlineprivate

reset message buffers to their initial state

delete all buffers, freeing up any memory held by them

Definition at line 1557 of file ParallelComm.hpp.

1558 {
1559  std::vector< Buffer* >::iterator vit;
1560  for( vit = localOwnedBuffs.begin(); vit != localOwnedBuffs.end(); ++vit )
1561  delete( *vit );
1562  localOwnedBuffs.clear();
1563 
1564  for( vit = remoteOwnedBuffs.begin(); vit != remoteOwnedBuffs.end(); ++vit )
1565  delete( *vit );
1566  remoteOwnedBuffs.clear();
1567 }

References localOwnedBuffs, and remoteOwnedBuffs.

Referenced by ~ParallelComm().

◆ delete_entities()

ErrorCode moab::ParallelComm::delete_entities ( Range &  to_delete)

Definition at line 9258 of file ParallelComm.cpp.

9259 {
9260  // Will not look at shared sets yet, but maybe we should
9261  // First, see if any of the entities to delete is shared; then inform the other processors
9262  // about their fate (to be deleted), using a crystal router transfer
9263  ErrorCode rval = MB_SUCCESS;
9264  unsigned char pstat;
9265  EntityHandle tmp_handles[MAX_SHARING_PROCS];
9266  int tmp_procs[MAX_SHARING_PROCS];
9267  unsigned int num_ps;
9268  TupleList ents_to_delete;
9269  ents_to_delete.initialize( 1, 0, 1, 0, to_delete.size() * ( MAX_SHARING_PROCS + 1 ) ); // A little bit of overkill
9270  ents_to_delete.enableWriteAccess();
9271  unsigned int i = 0;
9272  for( Range::iterator it = to_delete.begin(); it != to_delete.end(); ++it )
9273  {
9274  EntityHandle eh = *it; // Entity to be deleted
9275 
9276  rval = get_sharing_data( eh, tmp_procs, tmp_handles, pstat, num_ps );
9277  if( rval != MB_SUCCESS || num_ps == 0 ) continue;
9278  // Add to the tuple list the information to be sent (to the remote procs)
9279  for( unsigned int p = 0; p < num_ps; p++ )
9280  {
9281  ents_to_delete.vi_wr[i] = tmp_procs[p];
9282  ents_to_delete.vul_wr[i] = (unsigned long)tmp_handles[p];
9283  i++;
9284  ents_to_delete.inc_n();
9285  }
9286  }
9287 
9288  gs_data::crystal_data* cd = this->procConfig.crystal_router();
9289  // All communication happens here; no other mpi calls
9290  // Also, this is a collective call
9291  rval = cd->gs_transfer( 1, ents_to_delete, 0 );MB_CHK_SET_ERR( rval, "Error in tuple transfer" );
9292 
9293  // Add to the range of ents to delete the new ones that were sent from other procs
9294  unsigned int received = ents_to_delete.get_n();
9295  for( i = 0; i < received; i++ )
9296  {
9297  // int from = ents_to_delete.vi_rd[i];
9298  unsigned long valrec = ents_to_delete.vul_rd[i];
9299  to_delete.insert( (EntityHandle)valrec );
9300  }
9301  rval = mbImpl->delete_entities( to_delete );MB_CHK_SET_ERR( rval, "Error in deleting actual entities" );
9302 
9303  std::set< EntityHandle > good_ents;
9304  for( std::set< EntityHandle >::iterator sst = sharedEnts.begin(); sst != sharedEnts.end(); sst++ )
9305  {
9306  EntityHandle eh = *sst;
9307  int index = to_delete.index( eh );
9308  if( -1 == index ) good_ents.insert( eh );
9309  }
9310  sharedEnts = good_ents;
9311 
9312  // What about shared sets? Who is updating them?
9313  return MB_SUCCESS;
9314 }

References moab::Range::begin(), moab::ProcConfig::crystal_router(), moab::Interface::delete_entities(), moab::TupleList::enableWriteAccess(), moab::Range::end(), ErrorCode, moab::TupleList::get_n(), get_sharing_data(), moab::TupleList::inc_n(), moab::Range::index(), moab::TupleList::initialize(), moab::Range::insert(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, procConfig, sharedEnts, moab::Range::size(), moab::TupleList::vi_wr, moab::TupleList::vul_rd, and moab::TupleList::vul_wr.

Referenced by moab::NCHelperScrip::create_mesh().
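
A hedged fragment showing collective deletion of possibly shared entities; every rank must make the call (an empty range is fine), since the routine uses the crystal router to notify the ranks holding remote copies:

    moab::Range doomed;                 // entities this rank wants removed (may be empty)
    // doomed.insert( some_handle );    // hypothetical handles would be added here
    moab::ErrorCode rval = pcomm.delete_entities( doomed );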

◆ destroy_part()

ErrorCode moab::ParallelComm::destroy_part ( EntityHandle  part)

Definition at line 8248 of file ParallelComm.cpp.

8249 {
8250  // Mark as invalid so we know that it needs to be updated
8251  globalPartCount = -1;
8252 
8253  ErrorCode rval;
8254  if( get_partitioning() )
8255  {
8256  rval = mbImpl->remove_entities( get_partitioning(), &part_id, 1 );
8257  if( MB_SUCCESS != rval ) return rval;
8258  }
8259 
8260  moab::Range& pSets = this->partition_sets();
8261  if( pSets.index( part_id ) >= 0 )
8262  {
8263  pSets.erase( part_id );
8264  }
8265  return mbImpl->delete_entities( &part_id, 1 );
8266 }

References moab::Interface::delete_entities(), moab::Range::erase(), ErrorCode, get_partitioning(), globalPartCount, moab::Range::index(), MB_SUCCESS, mbImpl, partition_sets(), and moab::Interface::remove_entities().

◆ estimate_ents_buffer_size()

int moab::ParallelComm::estimate_ents_buffer_size ( Range &  entities,
const bool  store_remote_handles 
)
private

estimate size required to pack entities

Definition at line 1503 of file ParallelComm.cpp.

1504 {
1505  int buff_size = 0;
1506  std::vector< EntityHandle > dum_connect_vec;
1507  const EntityHandle* connect;
1508  int num_connect;
1509 
1510  int num_verts = entities.num_of_type( MBVERTEX );
1511  // # verts + coords + handles
1512  buff_size += 2 * sizeof( int ) + 3 * sizeof( double ) * num_verts;
1513  if( store_remote_handles ) buff_size += sizeof( EntityHandle ) * num_verts;
1514 
1515  // Do a rough count by looking at first entity of each type
1516  for( EntityType t = MBEDGE; t < MBENTITYSET; t++ )
1517  {
1518  const Range::iterator rit = entities.lower_bound( t );
1519  if( TYPE_FROM_HANDLE( *rit ) != t ) continue;
1520 
1521  ErrorCode result = mbImpl->get_connectivity( *rit, connect, num_connect, false, &dum_connect_vec );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get connectivity to estimate buffer size", -1 );
1522 
1523  // Number, type, nodes per entity
1524  buff_size += 3 * sizeof( int );
1525  int num_ents = entities.num_of_type( t );
1526  // Connectivity, handle for each ent
1527  buff_size += ( num_connect + 1 ) * sizeof( EntityHandle ) * num_ents;
1528  }
1529 
1530  // Extra entity type at end, passed as int
1531  buff_size += sizeof( int );
1532 
1533  return buff_size;
1534 }

References entities, ErrorCode, moab::Interface::get_connectivity(), MB_CHK_SET_ERR_RET_VAL, MBEDGE, MBENTITYSET, mbImpl, MBVERTEX, and moab::TYPE_FROM_HANDLE().

Referenced by pack_entities().

◆ estimate_sets_buffer_size()

int moab::ParallelComm::estimate_sets_buffer_size ( Range &  entities,
const bool  store_remote_handles 
)
private

estimate size required to pack sets

Definition at line 1536 of file ParallelComm.cpp.

1537 {
1538  // Number of sets
1539  int buff_size = sizeof( int );
1540 
1541  // Do a rough count by looking at first entity of each type
1542  Range::iterator rit = entities.lower_bound( MBENTITYSET );
1543  ErrorCode result;
1544 
1545  for( ; rit != entities.end(); ++rit )
1546  {
1547  unsigned int options;
1548  result = mbImpl->get_meshset_options( *rit, options );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get meshset options", -1 );
1549 
1550  buff_size += sizeof( int );
1551 
1552  Range set_range;
1553  if( options & MESHSET_SET )
1554  {
1555  // Range-based set; count the subranges
1556  result = mbImpl->get_entities_by_handle( *rit, set_range );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get set entities", -1 );
1557 
1558  // Set range
1559  buff_size += RANGE_SIZE( set_range );
1560  }
1561  else if( options & MESHSET_ORDERED )
1562  {
1563  // Just get the number of entities in the set
1564  int num_ents;
1565  result = mbImpl->get_number_entities_by_handle( *rit, num_ents );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get number entities in ordered set", -1 );
1566 
1567  // Set vec
1568  buff_size += sizeof( EntityHandle ) * num_ents + sizeof( int );
1569  }
1570 
1571  // Get numbers of parents/children
1572  int num_par, num_ch;
1573  result = mbImpl->num_child_meshsets( *rit, &num_ch );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get num children", -1 );
1574  result = mbImpl->num_parent_meshsets( *rit, &num_par );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get num parents", -1 );
1575 
1576  buff_size += ( num_ch + num_par ) * sizeof( EntityHandle ) + 2 * sizeof( int );
1577  }
1578 
1579  return buff_size;
1580 }

References entities, ErrorCode, moab::Interface::get_entities_by_handle(), moab::Interface::get_meshset_options(), moab::Interface::get_number_entities_by_handle(), MB_CHK_SET_ERR_RET_VAL, MBENTITYSET, mbImpl, MESHSET_SET, moab::Interface::num_child_meshsets(), moab::Interface::num_parent_meshsets(), and moab::RANGE_SIZE().

Referenced by pack_sets().

◆ exchange_all_shared_handles()

ErrorCode moab::ParallelComm::exchange_all_shared_handles ( std::vector< std::vector< SharedEntityData > > &  send_data,
std::vector< std::vector< SharedEntityData > > &  result 
)
private

Every processor sends shared entity handle data to every other processor that it shares entities with. The map passed back contains all received data, indexed by processor ID. This function is intended to be used for debugging.

Definition at line 8474 of file ParallelComm.cpp.

8476 {
8477  int ierr;
8478  const int tag = 0;
8479  const MPI_Comm cm = procConfig.proc_comm();
8480  const int num_proc = buffProcs.size();
8481  const std::vector< int > procs( buffProcs.begin(), buffProcs.end() );
8482  std::vector< MPI_Request > recv_req( buffProcs.size(), MPI_REQUEST_NULL );
8483  std::vector< MPI_Request > send_req( buffProcs.size(), MPI_REQUEST_NULL );
8484 
8485  // Set up to receive sizes
8486  std::vector< int > sizes_send( num_proc ), sizes_recv( num_proc );
8487  for( int i = 0; i < num_proc; i++ )
8488  {
8489  ierr = MPI_Irecv( &sizes_recv[i], 1, MPI_INT, procs[i], tag, cm, &recv_req[i] );
8490  if( ierr ) return MB_FILE_WRITE_ERROR;
8491  }
8492 
8493  // Send sizes
8494  assert( num_proc == (int)send_data.size() );
8495 
8496  result.resize( num_proc );
8497  for( int i = 0; i < num_proc; i++ )
8498  {
8499  sizes_send[i] = send_data[i].size();
8500  ierr = MPI_Isend( &sizes_send[i], 1, MPI_INT, buffProcs[i], tag, cm, &send_req[i] );
8501  if( ierr ) return MB_FILE_WRITE_ERROR;
8502  }
8503 
8504  // Receive sizes
8505  std::vector< MPI_Status > stat( num_proc );
8506  ierr = MPI_Waitall( num_proc, &recv_req[0], &stat[0] );
8507  if( ierr ) return MB_FILE_WRITE_ERROR;
8508 
8509  // Wait until all sizes are sent (clean up pending req's)
8510  ierr = MPI_Waitall( num_proc, &send_req[0], &stat[0] );
8511  if( ierr ) return MB_FILE_WRITE_ERROR;
8512 
8513  // Set up to receive data
8514  for( int i = 0; i < num_proc; i++ )
8515  {
8516  result[i].resize( sizes_recv[i] );
8517  ierr = MPI_Irecv( (void*)( &( result[i][0] ) ), sizeof( SharedEntityData ) * sizes_recv[i], MPI_UNSIGNED_CHAR,
8518  buffProcs[i], tag, cm, &recv_req[i] );
8519  if( ierr ) return MB_FILE_WRITE_ERROR;
8520  }
8521 
8522  // Send data
8523  for( int i = 0; i < num_proc; i++ )
8524  {
8525  ierr = MPI_Isend( (void*)( &( send_data[i][0] ) ), sizeof( SharedEntityData ) * sizes_send[i],
8526  MPI_UNSIGNED_CHAR, buffProcs[i], tag, cm, &send_req[i] );
8527  if( ierr ) return MB_FILE_WRITE_ERROR;
8528  }
8529 
8530  // Receive data
8531  ierr = MPI_Waitall( num_proc, &recv_req[0], &stat[0] );
8532  if( ierr ) return MB_FILE_WRITE_ERROR;
8533 
8534  // Wait until everything is sent to release send buffers
8535  ierr = MPI_Waitall( num_proc, &send_req[0], &stat[0] );
8536  if( ierr ) return MB_FILE_WRITE_ERROR;
8537 
8538  return MB_SUCCESS;
8539 }

References buffProcs, MB_FILE_WRITE_ERROR, MB_SUCCESS, moab::ProcConfig::proc_comm(), and procConfig.

Referenced by check_all_shared_handles(), and correct_thin_ghost_layers().

◆ exchange_ghost_cells() [1/2]

ErrorCode moab::ParallelComm::exchange_ghost_cells ( int  ghost_dim,
int  bridge_dim,
int  num_layers,
int  addl_ents,
bool  store_remote_handles,
bool  wait_all = true,
EntityHandle *  file_set = NULL 
)

Exchange ghost cells with neighboring procs. Neighboring processors are those sharing an interface with this processor. All entities of dimension ghost_dim within num_layers of the interface, measured going through bridge_dim, are exchanged. See MeshTopoUtil::get_bridge_adjacencies for a description of bridge adjacencies. If wait_all is false and store_remote_handles is true, MPI_Request objects are available in the sendReqs[2*MAX_SHARING_PROCS] member array, with inactive requests marked as MPI_REQUEST_NULL. If store_remote_handles or wait_all is false, this function returns after all entities have been received and processed.

Parameters
ghost_dim: Dimension of ghost entities to be exchanged
bridge_dim: Dimension of entities used to measure layers from interface
num_layers: Number of layers of ghosts requested
addl_ents: Dimension of additional adjacent entities to exchange with ghosts, 0 if none
store_remote_handles: If true, send message with new entity handles to source processor
wait_all: If true, function does not return until all send buffers are cleared
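
A minimal usage sketch follows, assuming the mesh has already been loaded in parallel and shared interface entities have been resolved (e.g., via resolve_shared_ents); the dimensions passed below are one typical choice, not the only valid combination.

#include "moab/Core.hpp"
#include "moab/ParallelComm.hpp"
#include <mpi.h>
#include <iostream>

int main( int argc, char** argv )
{
    MPI_Init( &argc, &argv );
    {
        moab::Core mb;
        moab::ParallelComm pcomm( &mb, MPI_COMM_WORLD );

        // ... load a partitioned mesh here and resolve shared interface entities ...

        // Ghost one layer of 3D elements adjacent through vertices (bridge_dim 0),
        // exchange no additional adjacent entities, and store remote handles so each
        // owner learns the handles of its ghost copies on other ranks.
        moab::ErrorCode rval = pcomm.exchange_ghost_cells( 3 /*ghost_dim*/, 0 /*bridge_dim*/,
                                                           1 /*num_layers*/, 0 /*addl_ents*/,
                                                           true /*store_remote_handles*/ );
        if( moab::MB_SUCCESS != rval ) std::cerr << "ghost exchange failed" << std::endl;
    }
    MPI_Finalize();
    return 0;
}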

Definition at line 5687 of file ParallelComm.cpp.

5694 {
5695 #ifdef MOAB_HAVE_MPE
5696  if( myDebug->get_verbosity() == 2 )
5697  {
5698  if( !num_layers )
5699  MPE_Log_event( IFACE_START, procConfig.proc_rank(), "Starting interface exchange." );
5700  else
5701  MPE_Log_event( GHOST_START, procConfig.proc_rank(), "Starting ghost exchange." );
5702  }
5703 #endif
5704 
5705  myDebug->tprintf( 1, "Entering exchange_ghost_cells with num_layers = %d\n", num_layers );
5706  if( myDebug->get_verbosity() == 4 )
5707  {
5708  msgs.clear();
5709  msgs.reserve( MAX_SHARING_PROCS );
5710  }
5711 
5712  // If we're only finding out about existing ents, we have to be storing
5713  // remote handles too
5714  assert( num_layers > 0 || store_remote_handles );
5715 
5716  const bool is_iface = !num_layers;
5717 
5718  // Get the b-dimensional interface(s) with with_proc, where b = bridge_dim
5719 
5720  int success;
5721  ErrorCode result = MB_SUCCESS;
5722  int incoming1 = 0, incoming2 = 0;
5723 
5725 
5726  // When this function is called, buffProcs should already have any
5727  // communicating procs
5728 
5729  //===========================================
5730  // Post ghost irecv's for ghost entities from all communicating procs
5731  //===========================================
5732 #ifdef MOAB_HAVE_MPE
5733  if( myDebug->get_verbosity() == 2 )
5734  {
5735  MPE_Log_event( ENTITIES_START, procConfig.proc_rank(), "Starting entity exchange." );
5736  }
5737 #endif
5738 
5739  // Index reqs the same as buffer/sharing procs indices
5740  std::vector< MPI_Request > recv_ent_reqs( 3 * buffProcs.size(), MPI_REQUEST_NULL ),
5741  recv_remoteh_reqs( 3 * buffProcs.size(), MPI_REQUEST_NULL );
5742  std::vector< unsigned int >::iterator proc_it;
5743  int ind, p;
5744  sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
5745  for( ind = 0, proc_it = buffProcs.begin(); proc_it != buffProcs.end(); ++proc_it, ind++ )
5746  {
5747  incoming1++;
5749  MB_MESG_ENTS_SIZE, incoming1 );
5750  success = MPI_Irecv( remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, buffProcs[ind],
5751  MB_MESG_ENTS_SIZE, procConfig.proc_comm(), &recv_ent_reqs[3 * ind] );
5752  if( success != MPI_SUCCESS )
5753  {
5754  MB_SET_ERR( MB_FAILURE, "Failed to post irecv in ghost exchange" );
5755  }
5756  }
5757 
5758  //===========================================
5759  // Get entities to be sent to neighbors
5760  //===========================================
5761  Range sent_ents[MAX_SHARING_PROCS], allsent, tmp_range;
5762  TupleList entprocs;
5763  int dum_ack_buff;
5764  result = get_sent_ents( is_iface, bridge_dim, ghost_dim, num_layers, addl_ents, sent_ents, allsent, entprocs );MB_CHK_SET_ERR( result, "get_sent_ents failed" );
5765 
5766  // augment file set with the entities to be sent
5767  // we might have created new entities if addl_ents>0, edges and/or faces
5768  if( addl_ents > 0 && file_set && !allsent.empty() )
5769  {
5770  result = mbImpl->add_entities( *file_set, allsent );MB_CHK_SET_ERR( result, "Failed to add new sub-entities to set" );
5771  }
5772  myDebug->tprintf( 1, "allsent ents compactness (size) = %f (%lu)\n", allsent.compactness(),
5773  (unsigned long)allsent.size() );
5774 
5775  //===========================================
5776  // Pack and send ents from this proc to others
5777  //===========================================
5778  for( p = 0, proc_it = buffProcs.begin(); proc_it != buffProcs.end(); ++proc_it, p++ )
5779  {
5780  myDebug->tprintf( 1, "Sent ents compactness (size) = %f (%lu)\n", sent_ents[p].compactness(),
5781  (unsigned long)sent_ents[p].size() );
5782 
5783  // Reserve space on front for size and for initial buff size
5784  localOwnedBuffs[p]->reset_buffer( sizeof( int ) );
5785 
5786  // Entities
5787  result = pack_entities( sent_ents[p], localOwnedBuffs[p], store_remote_handles, buffProcs[p], is_iface,
5788  &entprocs, &allsent );MB_CHK_SET_ERR( result, "Packing entities failed" );
5789 
5790  if( myDebug->get_verbosity() == 4 )
5791  {
5792  msgs.resize( msgs.size() + 1 );
5793  msgs.back() = new Buffer( *localOwnedBuffs[p] );
5794  }
5795 
5796  // Send the buffer (size stored in front in send_buffer)
5797  result = send_buffer( *proc_it, localOwnedBuffs[p], MB_MESG_ENTS_SIZE, sendReqs[3 * p],
5798  recv_ent_reqs[3 * p + 2], &dum_ack_buff, incoming1, MB_MESG_REMOTEH_SIZE,
5799  ( !is_iface && store_remote_handles ? // this used for ghosting only
5800  localOwnedBuffs[p]
5801  : NULL ),
5802  &recv_remoteh_reqs[3 * p], &incoming2 );MB_CHK_SET_ERR( result, "Failed to Isend in ghost exchange" );
5803  }
5804 
5805  entprocs.reset();
5806 
5807  //===========================================
5808  // Receive/unpack new entities
5809  //===========================================
5810  // Number of incoming messages for ghosts is the number of procs we
5811  // communicate with; for iface, it's the number of those with lower rank
5812  MPI_Status status;
5813  std::vector< std::vector< EntityHandle > > recd_ents( buffProcs.size() );
5814  std::vector< std::vector< EntityHandle > > L1hloc( buffProcs.size() ), L1hrem( buffProcs.size() );
5815  std::vector< std::vector< int > > L1p( buffProcs.size() );
5816  std::vector< EntityHandle > L2hloc, L2hrem;
5817  std::vector< unsigned int > L2p;
5818  std::vector< EntityHandle > new_ents;
5819 
5820  while( incoming1 )
5821  {
5822  // Wait for all recvs of ghost ents before proceeding to sending remote handles,
5823  // b/c some procs may have sent to a 3rd proc ents owned by me;
5825 
5826  success = MPI_Waitany( 3 * buffProcs.size(), &recv_ent_reqs[0], &ind, &status );
5827  if( MPI_SUCCESS != success )
5828  {
5829  MB_SET_ERR( MB_FAILURE, "Failed in waitany in ghost exchange" );
5830  }
5831 
5832  PRINT_DEBUG_RECD( status );
5833 
5834  // OK, received something; decrement incoming counter
5835  incoming1--;
5836  bool done = false;
5837 
5838  // In case ind is for ack, we need index of one before it
5839  unsigned int base_ind = 3 * ( ind / 3 );
5840  result = recv_buffer( MB_MESG_ENTS_SIZE, status, remoteOwnedBuffs[ind / 3], recv_ent_reqs[base_ind + 1],
5841  recv_ent_reqs[base_ind + 2], incoming1, localOwnedBuffs[ind / 3], sendReqs[base_ind + 1],
5842  sendReqs[base_ind + 2], done,
5843  ( !is_iface && store_remote_handles ? localOwnedBuffs[ind / 3] : NULL ),
5844  MB_MESG_REMOTEH_SIZE, // maybe base_ind+1?
5845  &recv_remoteh_reqs[base_ind + 1], &incoming2 );MB_CHK_SET_ERR( result, "Failed to receive buffer" );
5846 
5847  if( done )
5848  {
5849  if( myDebug->get_verbosity() == 4 )
5850  {
5851  msgs.resize( msgs.size() + 1 );
5852  msgs.back() = new Buffer( *remoteOwnedBuffs[ind / 3] );
5853  }
5854 
5855  // Message completely received - process buffer that was sent
5856  remoteOwnedBuffs[ind / 3]->reset_ptr( sizeof( int ) );
5857  result = unpack_entities( remoteOwnedBuffs[ind / 3]->buff_ptr, store_remote_handles, ind / 3, is_iface,
5858  L1hloc, L1hrem, L1p, L2hloc, L2hrem, L2p, new_ents );
5859  if( MB_SUCCESS != result )
5860  {
5861  std::cout << "Failed to unpack entities. Buffer contents:" << std::endl;
5862  print_buffer( remoteOwnedBuffs[ind / 3]->mem_ptr, MB_MESG_ENTS_SIZE, buffProcs[ind / 3], false );
5863  return result;
5864  }
5865 
5866  if( recv_ent_reqs.size() != 3 * buffProcs.size() )
5867  {
5868  // Post irecv's for remote handles from new proc; shouldn't be iface,
5869  // since we know about all procs we share with
5870  assert( !is_iface );
5871  recv_remoteh_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
5872  for( unsigned int i = recv_ent_reqs.size(); i < 3 * buffProcs.size(); i += 3 )
5873  {
5874  localOwnedBuffs[i / 3]->reset_buffer();
5875  incoming2++;
5876  PRINT_DEBUG_IRECV( procConfig.proc_rank(), buffProcs[i / 3], localOwnedBuffs[i / 3]->mem_ptr,
5878  success = MPI_Irecv( localOwnedBuffs[i / 3]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR,
5880  &recv_remoteh_reqs[i] );
5881  if( success != MPI_SUCCESS )
5882  {
5883  MB_SET_ERR( MB_FAILURE, "Failed to post irecv for remote handles in ghost exchange" );
5884  }
5885  }
5886  recv_ent_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
5887  sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
5888  }
5889  }
5890  }
5891 
5892  // Add requests for any new addl procs
5893  if( recv_ent_reqs.size() != 3 * buffProcs.size() )
5894  {
5895  // Shouldn't get here...
5896  MB_SET_ERR( MB_FAILURE, "Requests length doesn't match proc count in ghost exchange" );
5897  }
5898 
5899 #ifdef MOAB_HAVE_MPE
5900  if( myDebug->get_verbosity() == 2 )
5901  {
5902  MPE_Log_event( ENTITIES_END, procConfig.proc_rank(), "Ending entity exchange." );
5903  }
5904 #endif
5905 
5906  if( is_iface )
5907  {
5908  // Need to check over entities I sent and make sure I received
5909  // handles for them from all expected procs; if not, need to clean
5910  // them up
5911  result = check_clean_iface( allsent );
5912  if( MB_SUCCESS != result ) std::cout << "Failed check." << std::endl;
5913 
5914  // Now set the shared/interface tag on non-vertex entities on interface
5915  result = tag_iface_entities();MB_CHK_SET_ERR( result, "Failed to tag iface entities" );
5916 
5917 #ifndef NDEBUG
5918  result = check_sent_ents( allsent );
5919  if( MB_SUCCESS != result ) std::cout << "Failed check." << std::endl;
5920  result = check_all_shared_handles( true );
5921  if( MB_SUCCESS != result ) std::cout << "Failed check." << std::endl;
5922 #endif
5923 
5924 #ifdef MOAB_HAVE_MPE
5925  if( myDebug->get_verbosity() == 2 )
5926  {
5927  MPE_Log_event( IFACE_END, procConfig.proc_rank(), "Ending interface exchange." );
5928  }
5929 #endif
5930 
5931  //===========================================
5932  // Wait if requested
5933  //===========================================
5934  if( wait_all )
5935  {
5936  if( myDebug->get_verbosity() == 5 )
5937  {
5938  success = MPI_Barrier( procConfig.proc_comm() );
5939  }
5940  else
5941  {
5942  MPI_Status mult_status[3 * MAX_SHARING_PROCS];
5943  success = MPI_Waitall( 3 * buffProcs.size(), &recv_ent_reqs[0], mult_status );
5944  if( MPI_SUCCESS != success )
5945  {
5946  MB_SET_ERR( MB_FAILURE, "Failed in waitall in ghost exchange" );
5947  }
5948  success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], mult_status );
5949  if( MPI_SUCCESS != success )
5950  {
5951  MB_SET_ERR( MB_FAILURE, "Failed in waitall in ghost exchange" );
5952  }
5953  /*success = MPI_Waitall(3*buffProcs.size(), &recv_remoteh_reqs[0], mult_status);
5954  if (MPI_SUCCESS != success) {
5955  MB_SET_ERR(MB_FAILURE, "Failed in waitall in ghost exchange");
5956  }*/
5957  }
5958  }
5959 
5960  myDebug->tprintf( 1, "Total number of shared entities = %lu.\n", (unsigned long)sharedEnts.size() );
5961  myDebug->tprintf( 1, "Exiting exchange_ghost_cells for is_iface==true \n" );
5962 
5963  return MB_SUCCESS;
5964  }
5965 
5966  // we still need to wait on sendReqs, if they are not fulfilled yet
5967  if( wait_all )
5968  {
5969  if( myDebug->get_verbosity() == 5 )
5970  {
5971  success = MPI_Barrier( procConfig.proc_comm() );
5972  }
5973  else
5974  {
5975  MPI_Status mult_status[3 * MAX_SHARING_PROCS];
5976  success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], mult_status );
5977  if( MPI_SUCCESS != success )
5978  {
5979  MB_SET_ERR( MB_FAILURE, "Failed in waitall in ghost exchange" );
5980  }
5981  }
5982  }
5983  //===========================================
5984  // Send local handles for new ghosts to owner, then add
5985  // those to ghost list for that owner
5986  //===========================================
5987  for( p = 0, proc_it = buffProcs.begin(); proc_it != buffProcs.end(); ++proc_it, p++ )
5988  {
5989 
5990  // Reserve space on front for size and for initial buff size
5991  remoteOwnedBuffs[p]->reset_buffer( sizeof( int ) );
5992 
5993  result = pack_remote_handles( L1hloc[p], L1hrem[p], L1p[p], *proc_it, remoteOwnedBuffs[p] );MB_CHK_SET_ERR( result, "Failed to pack remote handles" );
5994  remoteOwnedBuffs[p]->set_stored_size();
5995 
5996  if( myDebug->get_verbosity() == 4 )
5997  {
5998  msgs.resize( msgs.size() + 1 );
5999  msgs.back() = new Buffer( *remoteOwnedBuffs[p] );
6000  }
6002  recv_remoteh_reqs[3 * p + 2], &dum_ack_buff, incoming2 );MB_CHK_SET_ERR( result, "Failed to send remote handles" );
6003  }
6004 
6005  //===========================================
6006  // Process remote handles of my ghosteds
6007  //===========================================
6008  while( incoming2 )
6009  {
6011  success = MPI_Waitany( 3 * buffProcs.size(), &recv_remoteh_reqs[0], &ind, &status );
6012  if( MPI_SUCCESS != success )
6013  {
6014  MB_SET_ERR( MB_FAILURE, "Failed in waitany in ghost exchange" );
6015  }
6016 
6017  // OK, received something; decrement incoming counter
6018  incoming2--;
6019 
6020  PRINT_DEBUG_RECD( status );
6021 
6022  bool done = false;
6023  unsigned int base_ind = 3 * ( ind / 3 );
6024  result = recv_buffer( MB_MESG_REMOTEH_SIZE, status, localOwnedBuffs[ind / 3], recv_remoteh_reqs[base_ind + 1],
6025  recv_remoteh_reqs[base_ind + 2], incoming2, remoteOwnedBuffs[ind / 3],
6026  sendReqs[base_ind + 1], sendReqs[base_ind + 2], done );MB_CHK_SET_ERR( result, "Failed to receive remote handles" );
6027  if( done )
6028  {
6029  // Incoming remote handles
6030  if( myDebug->get_verbosity() == 4 )
6031  {
6032  msgs.resize( msgs.size() + 1 );
6033  msgs.back() = new Buffer( *localOwnedBuffs[ind / 3] );
6034  }
6035  localOwnedBuffs[ind / 3]->reset_ptr( sizeof( int ) );
6036  result =
6037  unpack_remote_handles( buffProcs[ind / 3], localOwnedBuffs[ind / 3]->buff_ptr, L2hloc, L2hrem, L2p );MB_CHK_SET_ERR( result, "Failed to unpack remote handles" );
6038  }
6039  }
6040 
6041 #ifdef MOAB_HAVE_MPE
6042  if( myDebug->get_verbosity() == 2 )
6043  {
6044  MPE_Log_event( RHANDLES_END, procConfig.proc_rank(), "Ending remote handles." );
6045  MPE_Log_event( GHOST_END, procConfig.proc_rank(), "Ending ghost exchange (still doing checks)." );
6046  }
6047 #endif
6048 
6049  //===========================================
6050  // Wait if requested
6051  //===========================================
6052  if( wait_all )
6053  {
6054  if( myDebug->get_verbosity() == 5 )
6055  {
6056  success = MPI_Barrier( procConfig.proc_comm() );
6057  }
6058  else
6059  {
6060  MPI_Status mult_status[3 * MAX_SHARING_PROCS];
6061  success = MPI_Waitall( 3 * buffProcs.size(), &recv_remoteh_reqs[0], mult_status );
6062  if( MPI_SUCCESS == success ) success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], mult_status );
6063  }
6064  if( MPI_SUCCESS != success )
6065  {
6066  MB_SET_ERR( MB_FAILURE, "Failed in waitall in ghost exchange" );
6067  }
6068  }
6069 
6070 #ifndef NDEBUG
6071  result = check_sent_ents( allsent );MB_CHK_SET_ERR( result, "Failed check on shared entities" );
6072  result = check_all_shared_handles( true );MB_CHK_SET_ERR( result, "Failed check on all shared handles" );
6073 #endif
6074 
6075  if( file_set && !new_ents.empty() )
6076  {
6077  result = mbImpl->add_entities( *file_set, &new_ents[0], new_ents.size() );MB_CHK_SET_ERR( result, "Failed to add new entities to set" );
6078  }
6079 
6080  myDebug->tprintf( 1, "Total number of shared entities = %lu.\n", (unsigned long)sharedEnts.size() );
6081  myDebug->tprintf( 1, "Exiting exchange_ghost_cells for is_iface==false \n" );
6082 
6083  return MB_SUCCESS;
6084 }

References moab::Interface::add_entities(), buffProcs, check_all_shared_handles(), check_clean_iface(), check_sent_ents(), moab::Range::compactness(), moab::Range::empty(), ErrorCode, get_sent_ents(), moab::DebugOutput::get_verbosity(), INITIAL_BUFF_SIZE, localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, moab::MB_MESG_ENTS_SIZE, moab::MB_MESG_REMOTEH_SIZE, MB_SET_ERR, MB_SUCCESS, mbImpl, MPE_Log_event, moab::msgs, myDebug, pack_entities(), pack_remote_handles(), print_buffer(), PRINT_DEBUG_IRECV, PRINT_DEBUG_RECD, PRINT_DEBUG_WAITANY, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, recv_buffer(), remoteOwnedBuffs, moab::TupleList::reset(), reset_all_buffers(), send_buffer(), sendReqs, sharedEnts, moab::Range::size(), size(), tag_iface_entities(), moab::DebugOutput::tprintf(), unpack_entities(), and unpack_remote_handles().

Referenced by moab::NestedRefine::exchange_ghosts(), moab::ReadParallel::load_file(), main(), resolve_shared_ents(), and moab::ParallelMergeMesh::TagSharedElements().

◆ exchange_ghost_cells() [2/2]

ErrorCode moab::ParallelComm::exchange_ghost_cells ( ParallelComm **  pc,
unsigned int  num_procs,
int  ghost_dim,
int  bridge_dim,
int  num_layers,
int  addl_ents,
bool  store_remote_handles,
EntityHandle *  file_sets = NULL 
)
static

Static version of exchange_ghost_cells, exchanging info through buffers rather than messages.
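
A hedged sketch of driving this buffer-based version, assuming pc0 and pc1 are two ParallelComm instances living in the same process (each wrapping its own Interface) whose shared interface entities have already been set up; the function and variable names are placeholders.

#include "moab/ParallelComm.hpp"

// Hedged sketch: pc0 and pc1 are assumed to be ParallelComm instances in the same
// process, each wrapping its own Interface, with shared interface entities set up.
moab::ErrorCode ghost_between_two_pcomms( moab::ParallelComm* pc0, moab::ParallelComm* pc1 )
{
    moab::ParallelComm* pcs[2] = { pc0, pc1 };

    // Exchange one layer of 3D ghosts between the two instances through
    // in-memory buffers; no MPI messages are sent.
    return moab::ParallelComm::exchange_ghost_cells( pcs, 2 /*num_procs*/, 3 /*ghost_dim*/,
                                                     0 /*bridge_dim*/, 1 /*num_layers*/,
                                                     0 /*addl_ents*/, true /*store_remote_handles*/ );
}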

Definition at line 6588 of file ParallelComm.cpp.

6596 {
6597  // Static version of function, exchanging info through buffers rather
6598  // than through messages
6599 
6600  // If we're only finding out about existing ents, we have to be storing
6601  // remote handles too
6602  assert( num_layers > 0 || store_remote_handles );
6603 
6604  const bool is_iface = !num_layers;
6605 
6606  unsigned int ind;
6607  ParallelComm* pc;
6608  ErrorCode result = MB_SUCCESS;
6609 
6610  std::vector< Error* > ehs( num_procs );
6611  for( unsigned int i = 0; i < num_procs; i++ )
6612  {
6613  result = pcs[i]->get_moab()->query_interface( ehs[i] );
6614  assert( MB_SUCCESS == result );
6615  }
6616 
6617  // When this function is called, buffProcs should already have any
6618  // communicating procs
6619 
6620  //===========================================
6621  // Get entities to be sent to neighbors
6622  //===========================================
6623 
6624  // Done in a separate loop over procs because sometimes later procs
6625  // need to add info to earlier procs' messages
6626  Range sent_ents[MAX_SHARING_PROCS][MAX_SHARING_PROCS], allsent[MAX_SHARING_PROCS];
6627 
6628  //===========================================
6629  // Get entities to be sent to neighbors
6630  //===========================================
6631  TupleList entprocs[MAX_SHARING_PROCS];
6632  for( unsigned int p = 0; p < num_procs; p++ )
6633  {
6634  pc = pcs[p];
6635  result = pc->get_sent_ents( is_iface, bridge_dim, ghost_dim, num_layers, addl_ents, sent_ents[p], allsent[p],
6636  entprocs[p] );MB_CHK_SET_ERR( result, "p = " << p << ", get_sent_ents failed" );
6637 
6638  //===========================================
6639  // Pack entities into buffers
6640  //===========================================
6641  for( ind = 0; ind < pc->buffProcs.size(); ind++ )
6642  {
6643  // Entities
6644  pc->localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
6645  result = pc->pack_entities( sent_ents[p][ind], pc->localOwnedBuffs[ind], store_remote_handles,
6646  pc->buffProcs[ind], is_iface, &entprocs[p], &allsent[p] );MB_CHK_SET_ERR( result, "p = " << p << ", packing entities failed" );
6647  }
6648 
6649  entprocs[p].reset();
6650  }
6651 
6652  //===========================================
6653  // Receive/unpack new entities
6654  //===========================================
6655  // Number of incoming messages for ghosts is the number of procs we
6656  // communicate with; for iface, it's the number of those with lower rank
6657  std::vector< std::vector< EntityHandle > > L1hloc[MAX_SHARING_PROCS], L1hrem[MAX_SHARING_PROCS];
6658  std::vector< std::vector< int > > L1p[MAX_SHARING_PROCS];
6659  std::vector< EntityHandle > L2hloc[MAX_SHARING_PROCS], L2hrem[MAX_SHARING_PROCS];
6660  std::vector< unsigned int > L2p[MAX_SHARING_PROCS];
6661  std::vector< EntityHandle > new_ents[MAX_SHARING_PROCS];
6662 
6663  for( unsigned int p = 0; p < num_procs; p++ )
6664  {
6665  L1hloc[p].resize( pcs[p]->buffProcs.size() );
6666  L1hrem[p].resize( pcs[p]->buffProcs.size() );
6667  L1p[p].resize( pcs[p]->buffProcs.size() );
6668  }
6669 
6670  for( unsigned int p = 0; p < num_procs; p++ )
6671  {
6672  pc = pcs[p];
6673 
6674  for( ind = 0; ind < pc->buffProcs.size(); ind++ )
6675  {
6676  // Incoming ghost entities; unpack; returns entities received
6677  // both from sending proc and from owning proc (which may be different)
6678 
6679  // Buffer could be empty, which means there isn't any message to
6680  // unpack (due to this comm proc getting added as a result of indirect
6681  // communication); just skip this unpack
6682  if( pc->localOwnedBuffs[ind]->get_stored_size() == 0 ) continue;
6683 
6684  unsigned int to_p = pc->buffProcs[ind];
6685  pc->localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
6686  result = pcs[to_p]->unpack_entities( pc->localOwnedBuffs[ind]->buff_ptr, store_remote_handles, ind,
6687  is_iface, L1hloc[to_p], L1hrem[to_p], L1p[to_p], L2hloc[to_p],
6688  L2hrem[to_p], L2p[to_p], new_ents[to_p] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to unpack entities" );
6689  }
6690  }
6691 
6692  if( is_iface )
6693  {
6694  // Need to check over entities I sent and make sure I received
6695  // handles for them from all expected procs; if not, need to clean
6696  // them up
6697  for( unsigned int p = 0; p < num_procs; p++ )
6698  {
6699  result = pcs[p]->check_clean_iface( allsent[p] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to check on shared entities" );
6700  }
6701 
6702 #ifndef NDEBUG
6703  for( unsigned int p = 0; p < num_procs; p++ )
6704  {
6705  result = pcs[p]->check_sent_ents( allsent[p] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to check on shared entities" );
6706  }
6707  result = check_all_shared_handles( pcs, num_procs );MB_CHK_SET_ERR( result, "Failed to check on all shared handles" );
6708 #endif
6709  return MB_SUCCESS;
6710  }
6711 
6712  //===========================================
6713  // Send local handles for new ghosts to owner, then add
6714  // those to ghost list for that owner
6715  //===========================================
6716  std::vector< unsigned int >::iterator proc_it;
6717  for( unsigned int p = 0; p < num_procs; p++ )
6718  {
6719  pc = pcs[p];
6720 
6721  for( ind = 0, proc_it = pc->buffProcs.begin(); proc_it != pc->buffProcs.end(); ++proc_it, ind++ )
6722  {
6723  // Skip if iface layer and higher-rank proc
6724  pc->localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
6725  result = pc->pack_remote_handles( L1hloc[p][ind], L1hrem[p][ind], L1p[p][ind], *proc_it,
6726  pc->localOwnedBuffs[ind] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to pack remote handles" );
6727  }
6728  }
6729 
6730  //===========================================
6731  // Process remote handles of my ghosteds
6732  //===========================================
6733  for( unsigned int p = 0; p < num_procs; p++ )
6734  {
6735  pc = pcs[p];
6736 
6737  for( ind = 0, proc_it = pc->buffProcs.begin(); proc_it != pc->buffProcs.end(); ++proc_it, ind++ )
6738  {
6739  // Incoming remote handles
6740  unsigned int to_p = pc->buffProcs[ind];
6741  pc->localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
6742  result = pcs[to_p]->unpack_remote_handles( p, pc->localOwnedBuffs[ind]->buff_ptr, L2hloc[to_p],
6743  L2hrem[to_p], L2p[to_p] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to unpack remote handles" );
6744  }
6745  }
6746 
6747 #ifndef NDEBUG
6748  for( unsigned int p = 0; p < num_procs; p++ )
6749  {
6750  result = pcs[p]->check_sent_ents( allsent[p] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to check on shared entities" );
6751  }
6752 
6753  result = ParallelComm::check_all_shared_handles( pcs, num_procs );MB_CHK_SET_ERR( result, "Failed to check on all shared handles" );
6754 #endif
6755 
6756  if( file_sets )
6757  {
6758  for( unsigned int p = 0; p < num_procs; p++ )
6759  {
6760  if( new_ents[p].empty() ) continue;
6761  result = pcs[p]->get_moab()->add_entities( file_sets[p], &new_ents[p][0], new_ents[p].size() );MB_CHK_SET_ERR( result, "p = " << p << ", failed to add new entities to set" );
6762  }
6763  }
6764 
6765  return MB_SUCCESS;
6766 }

References moab::Interface::add_entities(), buffProcs, check_all_shared_handles(), check_clean_iface(), check_sent_ents(), ErrorCode, get_moab(), get_sent_ents(), localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, pack_entities(), pack_remote_handles(), moab::Interface::query_interface(), moab::TupleList::reset(), size(), unpack_entities(), and unpack_remote_handles().

◆ exchange_owned_mesh()

ErrorCode moab::ParallelComm::exchange_owned_mesh ( std::vector< unsigned int > &  exchange_procs,
std::vector< Range * > &  exchange_ents,
std::vector< MPI_Request > &  recv_ent_reqs,
std::vector< MPI_Request > &  recv_remoteh_reqs,
const bool  recv_posted,
bool  store_remote_handles,
bool  wait_all,
bool  migrate = false 
)

Exchange owned mesh for input mesh entities and sets. This function is called twice by exchange_owned_meshs, to exchange entities before sets.

Parameters
migrate: If true, the owner of the exchanged entities is changed (entities are migrated rather than copied)

Definition at line 6912 of file ParallelComm.cpp.

6920 {
6921 #ifdef MOAB_HAVE_MPE
6922  if( myDebug->get_verbosity() == 2 )
6923  {
6924  MPE_Log_event( OWNED_START, procConfig.proc_rank(), "Starting owned ents exchange." );
6925  }
6926 #endif
6927 
6928  myDebug->tprintf( 1, "Entering exchange_owned_mesh\n" );
6929  if( myDebug->get_verbosity() == 4 )
6930  {
6931  msgs.clear();
6932  msgs.reserve( MAX_SHARING_PROCS );
6933  }
6934  unsigned int i;
6935  int ind, success;
6936  ErrorCode result = MB_SUCCESS;
6937  int incoming1 = 0, incoming2 = 0;
6938 
6939  // Set buffProcs with communicating procs
6940  unsigned int n_proc = exchange_procs.size();
6941  for( i = 0; i < n_proc; i++ )
6942  {
6943  ind = get_buffers( exchange_procs[i] );
6944  result = add_verts( *exchange_ents[i] );MB_CHK_SET_ERR( result, "Failed to add verts" );
6945 
6946  // Filter out entities already shared with destination
6947  Range tmp_range;
6948  result = filter_pstatus( *exchange_ents[i], PSTATUS_SHARED, PSTATUS_AND, buffProcs[ind], &tmp_range );MB_CHK_SET_ERR( result, "Failed to filter on owner" );
6949  if( !tmp_range.empty() )
6950  {
6951  *exchange_ents[i] = subtract( *exchange_ents[i], tmp_range );
6952  }
6953  }
6954 
6955  //===========================================
6956  // Post ghost irecv's for entities from all communicating procs
6957  //===========================================
6958 #ifdef MOAB_HAVE_MPE
6959  if( myDebug->get_verbosity() == 2 )
6960  {
6961  MPE_Log_event( ENTITIES_START, procConfig.proc_rank(), "Starting entity exchange." );
6962  }
6963 #endif
6964 
6965  // Index reqs the same as buffer/sharing procs indices
6966  if( !recv_posted )
6967  {
6969  recv_ent_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
6970  recv_remoteh_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
6971  sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
6972 
6973  for( i = 0; i < n_proc; i++ )
6974  {
6975  ind = get_buffers( exchange_procs[i] );
6976  incoming1++;
6978  INITIAL_BUFF_SIZE, MB_MESG_ENTS_SIZE, incoming1 );
6979  success = MPI_Irecv( remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, buffProcs[ind],
6980  MB_MESG_ENTS_SIZE, procConfig.proc_comm(), &recv_ent_reqs[3 * ind] );
6981  if( success != MPI_SUCCESS )
6982  {
6983  MB_SET_ERR( MB_FAILURE, "Failed to post irecv in owned entity exchange" );
6984  }
6985  }
6986  }
6987  else
6988  incoming1 += n_proc;
6989 
6990  //===========================================
6991  // Get entities to be sent to neighbors
6992  // Need to get procs each entity is sent to
6993  //===========================================
6994  Range allsent, tmp_range;
6995  int dum_ack_buff;
6996  int npairs = 0;
6997  TupleList entprocs;
6998  for( i = 0; i < n_proc; i++ )
6999  {
7000  int n_ents = exchange_ents[i]->size();
7001  if( n_ents > 0 )
7002  {
7003  npairs += n_ents; // Get the total # of proc/handle pairs
7004  allsent.merge( *exchange_ents[i] );
7005  }
7006  }
7007 
7008  // Allocate a TupleList of that size
7009  entprocs.initialize( 1, 0, 1, 0, npairs );
7010  entprocs.enableWriteAccess();
7011 
7012  // Put the proc/handle pairs in the list
7013  for( i = 0; i < n_proc; i++ )
7014  {
7015  for( Range::iterator rit = exchange_ents[i]->begin(); rit != exchange_ents[i]->end(); ++rit )
7016  {
7017  entprocs.vi_wr[entprocs.get_n()] = exchange_procs[i];
7018  entprocs.vul_wr[entprocs.get_n()] = *rit;
7019  entprocs.inc_n();
7020  }
7021  }
7022 
7023  // Sort by handle
7024  moab::TupleList::buffer sort_buffer;
7025  sort_buffer.buffer_init( npairs );
7026  entprocs.sort( 1, &sort_buffer );
7027  sort_buffer.reset();
7028 
7029  myDebug->tprintf( 1, "allsent ents compactness (size) = %f (%lu)\n", allsent.compactness(),
7030  (unsigned long)allsent.size() );
7031 
7032  //===========================================
7033  // Pack and send ents from this proc to others
7034  //===========================================
7035  for( i = 0; i < n_proc; i++ )
7036  {
7037  ind = get_buffers( exchange_procs[i] );
7038  myDebug->tprintf( 1, "Sent ents compactness (size) = %f (%lu)\n", exchange_ents[i]->compactness(),
7039  (unsigned long)exchange_ents[i]->size() );
7040  // Reserve space on front for size and for initial buff size
7041  localOwnedBuffs[ind]->reset_buffer( sizeof( int ) );
7042  result = pack_buffer( *exchange_ents[i], false, true, store_remote_handles, buffProcs[ind],
7043  localOwnedBuffs[ind], &entprocs, &allsent );
7044 
7045  if( myDebug->get_verbosity() == 4 )
7046  {
7047  msgs.resize( msgs.size() + 1 );
7048  msgs.back() = new Buffer( *localOwnedBuffs[ind] );
7049  }
7050 
7051  // Send the buffer (size stored in front in send_buffer)
7052  result = send_buffer( exchange_procs[i], localOwnedBuffs[ind], MB_MESG_ENTS_SIZE, sendReqs[3 * ind],
7053  recv_ent_reqs[3 * ind + 2], &dum_ack_buff, incoming1, MB_MESG_REMOTEH_SIZE,
7054  ( store_remote_handles ? localOwnedBuffs[ind] : NULL ), &recv_remoteh_reqs[3 * ind],
7055  &incoming2 );MB_CHK_SET_ERR( result, "Failed to Isend in ghost exchange" );
7056  }
7057 
7058  entprocs.reset();
7059 
7060  //===========================================
7061  // Receive/unpack new entities
7062  //===========================================
7063  // Number of incoming messages is the number of procs we communicate with
7064  MPI_Status status;
7065  std::vector< std::vector< EntityHandle > > recd_ents( buffProcs.size() );
7066  std::vector< std::vector< EntityHandle > > L1hloc( buffProcs.size() ), L1hrem( buffProcs.size() );
7067  std::vector< std::vector< int > > L1p( buffProcs.size() );
7068  std::vector< EntityHandle > L2hloc, L2hrem;
7069  std::vector< unsigned int > L2p;
7070  std::vector< EntityHandle > new_ents;
7071 
7072  while( incoming1 )
7073  {
7074  // Wait for all recvs of ents before proceeding to sending remote handles,
7075  // b/c some procs may have sent to a 3rd proc ents owned by me;
7077 
7078  success = MPI_Waitany( 3 * buffProcs.size(), &recv_ent_reqs[0], &ind, &status );
7079  if( MPI_SUCCESS != success )
7080  {
7081  MB_SET_ERR( MB_FAILURE, "Failed in waitany in owned entity exchange" );
7082  }
7083 
7084  PRINT_DEBUG_RECD( status );
7085 
7086  // OK, received something; decrement incoming counter
7087  incoming1--;
7088  bool done = false;
7089 
7090  // In case ind is for ack, we need index of one before it
7091  unsigned int base_ind = 3 * ( ind / 3 );
7092  result = recv_buffer( MB_MESG_ENTS_SIZE, status, remoteOwnedBuffs[ind / 3], recv_ent_reqs[base_ind + 1],
7093  recv_ent_reqs[base_ind + 2], incoming1, localOwnedBuffs[ind / 3], sendReqs[base_ind + 1],
7094  sendReqs[base_ind + 2], done, ( store_remote_handles ? localOwnedBuffs[ind / 3] : NULL ),
7095  MB_MESG_REMOTEH_SIZE, &recv_remoteh_reqs[base_ind + 1], &incoming2 );MB_CHK_SET_ERR( result, "Failed to receive buffer" );
7096 
7097  if( done )
7098  {
7099  if( myDebug->get_verbosity() == 4 )
7100  {
7101  msgs.resize( msgs.size() + 1 );
7102  msgs.back() = new Buffer( *remoteOwnedBuffs[ind / 3] );
7103  }
7104 
7105  // Message completely received - process buffer that was sent
7106  remoteOwnedBuffs[ind / 3]->reset_ptr( sizeof( int ) );
7107  result = unpack_buffer( remoteOwnedBuffs[ind / 3]->buff_ptr, store_remote_handles, buffProcs[ind / 3],
7108  ind / 3, L1hloc, L1hrem, L1p, L2hloc, L2hrem, L2p, new_ents, true );
7109  if( MB_SUCCESS != result )
7110  {
7111  std::cout << "Failed to unpack entities. Buffer contents:" << std::endl;
7112  print_buffer( remoteOwnedBuffs[ind / 3]->mem_ptr, MB_MESG_ENTS_SIZE, buffProcs[ind / 3], false );
7113  return result;
7114  }
7115 
7116  if( recv_ent_reqs.size() != 3 * buffProcs.size() )
7117  {
7118  // Post irecv's for remote handles from new proc
7119  recv_remoteh_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
7120  for( i = recv_ent_reqs.size(); i < 3 * buffProcs.size(); i += 3 )
7121  {
7122  localOwnedBuffs[i / 3]->reset_buffer();
7123  incoming2++;
7124  PRINT_DEBUG_IRECV( procConfig.proc_rank(), buffProcs[i / 3], localOwnedBuffs[i / 3]->mem_ptr,
7126  success = MPI_Irecv( localOwnedBuffs[i / 3]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR,
7128  &recv_remoteh_reqs[i] );
7129  if( success != MPI_SUCCESS )
7130  {
7131  MB_SET_ERR( MB_FAILURE, "Failed to post irecv for remote handles in ghost exchange" );
7132  }
7133  }
7134  recv_ent_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
7135  sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
7136  }
7137  }
7138  }
7139 
7140  // Assign and remove newly created elements from/to receive processor
7141  result = assign_entities_part( new_ents, procConfig.proc_rank() );MB_CHK_SET_ERR( result, "Failed to assign entities to part" );
7142  if( migrate )
7143  {
7144  result = remove_entities_part( allsent, procConfig.proc_rank() );MB_CHK_SET_ERR( result, "Failed to remove entities to part" );
7145  }
7146 
7147  // Add requests for any new addl procs
7148  if( recv_ent_reqs.size() != 3 * buffProcs.size() )
7149  {
7150  // Shouldn't get here...
7151  MB_SET_ERR( MB_FAILURE, "Requests length doesn't match proc count in entity exchange" );
7152  }
7153 
7154 #ifdef MOAB_HAVE_MPE
7155  if( myDebug->get_verbosity() == 2 )
7156  {
7157  MPE_Log_event( ENTITIES_END, procConfig.proc_rank(), "Ending entity exchange." );
7158  }
7159 #endif
7160 
7161  // we still need to wait on sendReqs, if they are not fulfilled yet
7162  if( wait_all )
7163  {
7164  if( myDebug->get_verbosity() == 5 )
7165  {
7166  success = MPI_Barrier( procConfig.proc_comm() );
7167  }
7168  else
7169  {
7170  MPI_Status mult_status[3 * MAX_SHARING_PROCS];
7171  success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], mult_status );
7172  if( MPI_SUCCESS != success )
7173  {
7174  MB_SET_ERR( MB_FAILURE, "Failed in waitall in exchange owned mesh" );
7175  }
7176  }
7177  }
7178 
7179  //===========================================
7180  // Send local handles for new entity to owner
7181  //===========================================
7182  for( i = 0; i < n_proc; i++ )
7183  {
7184  ind = get_buffers( exchange_procs[i] );
7185  // Reserve space on front for size and for initial buff size
7186  remoteOwnedBuffs[ind]->reset_buffer( sizeof( int ) );
7187 
7188  result = pack_remote_handles( L1hloc[ind], L1hrem[ind], L1p[ind], buffProcs[ind], remoteOwnedBuffs[ind] );MB_CHK_SET_ERR( result, "Failed to pack remote handles" );
7189  remoteOwnedBuffs[ind]->set_stored_size();
7190 
7191  if( myDebug->get_verbosity() == 4 )
7192  {
7193  msgs.resize( msgs.size() + 1 );
7194  msgs.back() = new Buffer( *remoteOwnedBuffs[ind] );
7195  }
7196  result = send_buffer( buffProcs[ind], remoteOwnedBuffs[ind], MB_MESG_REMOTEH_SIZE, sendReqs[3 * ind],
7197  recv_remoteh_reqs[3 * ind + 2], &dum_ack_buff, incoming2 );MB_CHK_SET_ERR( result, "Failed to send remote handles" );
7198  }
7199 
7200  //===========================================
7201  // Process remote handles of my ghosteds
7202  //===========================================
7203  while( incoming2 )
7204  {
7206  success = MPI_Waitany( 3 * buffProcs.size(), &recv_remoteh_reqs[0], &ind, &status );
7207  if( MPI_SUCCESS != success )
7208  {
7209  MB_SET_ERR( MB_FAILURE, "Failed in waitany in owned entity exchange" );
7210  }
7211 
7212  // OK, received something; decrement incoming counter
7213  incoming2--;
7214 
7215  PRINT_DEBUG_RECD( status );
7216 
7217  bool done = false;
7218  unsigned int base_ind = 3 * ( ind / 3 );
7219  result = recv_buffer( MB_MESG_REMOTEH_SIZE, status, localOwnedBuffs[ind / 3], recv_remoteh_reqs[base_ind + 1],
7220  recv_remoteh_reqs[base_ind + 2], incoming2, remoteOwnedBuffs[ind / 3],
7221  sendReqs[base_ind + 1], sendReqs[base_ind + 2], done );MB_CHK_SET_ERR( result, "Failed to receive remote handles" );
7222 
7223  if( done )
7224  {
7225  // Incoming remote handles
7226  if( myDebug->get_verbosity() == 4 )
7227  {
7228  msgs.resize( msgs.size() + 1 );
7229  msgs.back() = new Buffer( *localOwnedBuffs[ind / 3] );
7230  }
7231 
7232  localOwnedBuffs[ind / 3]->reset_ptr( sizeof( int ) );
7233  result =
7234  unpack_remote_handles( buffProcs[ind / 3], localOwnedBuffs[ind / 3]->buff_ptr, L2hloc, L2hrem, L2p );MB_CHK_SET_ERR( result, "Failed to unpack remote handles" );
7235  }
7236  }
7237 
7238 #ifdef MOAB_HAVE_MPE
7239  if( myDebug->get_verbosity() == 2 )
7240  {
7241  MPE_Log_event( RHANDLES_END, procConfig.proc_rank(), "Ending remote handles." );
7242  MPE_Log_event( OWNED_END, procConfig.proc_rank(), "Ending ghost exchange (still doing checks)." );
7243  }
7244 #endif
7245 
7246  //===========================================
7247  // Wait if requested
7248  //===========================================
7249  if( wait_all )
7250  {
7251  if( myDebug->get_verbosity() == 5 )
7252  {
7253  success = MPI_Barrier( procConfig.proc_comm() );
7254  }
7255  else
7256  {
7257  MPI_Status mult_status[3 * MAX_SHARING_PROCS];
7258  success = MPI_Waitall( 3 * buffProcs.size(), &recv_remoteh_reqs[0], mult_status );
7259  if( MPI_SUCCESS == success ) success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], mult_status );
7260  }
7261  if( MPI_SUCCESS != success )
7262  {
7263  MB_SET_ERR( MB_FAILURE, "Failed in waitall in owned entity exchange" );
7264  }
7265  }
7266 
7267 #ifndef NDEBUG
7268  result = check_sent_ents( allsent );MB_CHK_SET_ERR( result, "Failed check on shared entities" );
7269 #endif
7270  myDebug->tprintf( 1, "Exiting exchange_owned_mesh\n" );
7271 
7272  return MB_SUCCESS;
7273 }

References add_verts(), assign_entities_part(), buffProcs, check_sent_ents(), moab::Range::compactness(), moab::Range::empty(), moab::TupleList::enableWriteAccess(), ErrorCode, filter_pstatus(), get_buffers(), moab::TupleList::get_n(), moab::DebugOutput::get_verbosity(), moab::TupleList::inc_n(), INITIAL_BUFF_SIZE, moab::TupleList::initialize(), localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, moab::MB_MESG_ENTS_SIZE, moab::MB_MESG_REMOTEH_SIZE, MB_SET_ERR, MB_SUCCESS, moab::Range::merge(), MPE_Log_event, moab::msgs, myDebug, pack_buffer(), pack_remote_handles(), print_buffer(), PRINT_DEBUG_IRECV, PRINT_DEBUG_RECD, PRINT_DEBUG_WAITANY, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, PSTATUS_AND, PSTATUS_SHARED, recv_buffer(), remoteOwnedBuffs, remove_entities_part(), moab::TupleList::buffer::reset(), moab::TupleList::reset(), reset_all_buffers(), send_buffer(), sendReqs, moab::Range::size(), size(), moab::TupleList::sort(), moab::subtract(), moab::DebugOutput::tprintf(), unpack_buffer(), unpack_remote_handles(), moab::TupleList::vi_wr, and moab::TupleList::vul_wr.

Referenced by exchange_owned_meshs().

◆ exchange_owned_meshs()

ErrorCode moab::ParallelComm::exchange_owned_meshs ( std::vector< unsigned int > &  exchange_procs,
std::vector< Range * > &  exchange_ents,
std::vector< MPI_Request > &  recv_ent_reqs,
std::vector< MPI_Request > &  recv_remoteh_reqs,
bool  store_remote_handles,
bool  wait_all = true,
bool  migrate = false,
int  dim = 0 
)

Exchange owned mesh for the input mesh entities and sets. This function should be called collectively over the communicator for this ParallelComm.

Parameters
exchange_procs: Processor vector exchanged
exchange_ents: Exchanged entities for each processor
migrate: If true, the owner of the exchanged entities is changed (entities are migrated rather than copied)
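
A hedged usage sketch follows: send the owned entities in to_send to rank dest and transfer their ownership there. pcomm, dest, and to_send are placeholders supplied by the caller, and the call is collective, so every rank must participate even if it sends nothing.

#include "moab/ParallelComm.hpp"
#include "moab/Range.hpp"
#include <mpi.h>
#include <vector>

// Hedged sketch: migrate the owned entities in 'to_send' to rank 'dest'.
moab::ErrorCode send_owned_to( moab::ParallelComm& pcomm, unsigned int dest, moab::Range& to_send )
{
    std::vector< unsigned int > exchange_procs( 1, dest );
    std::vector< moab::Range* > exchange_ents( 1, &to_send );
    std::vector< MPI_Request > recv_ent_reqs, recv_remoteh_reqs;  // sized internally

    return pcomm.exchange_owned_meshs( exchange_procs, exchange_ents, recv_ent_reqs, recv_remoteh_reqs,
                                       true /*store_remote_handles*/, true /*wait_all*/,
                                       true /*migrate: transfer ownership*/ );
}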

Definition at line 6842 of file ParallelComm.cpp.

6850 {
6851  // Filter out entities already shared with destination
6852  // Exchange twice for entities and sets
6853  ErrorCode result;
6854  std::vector< unsigned int > exchange_procs_sets;
6855  std::vector< Range* > exchange_sets;
6856  int n_proc = exchange_procs.size();
6857  for( int i = 0; i < n_proc; i++ )
6858  {
6859  Range set_range = exchange_ents[i]->subset_by_type( MBENTITYSET );
6860  *exchange_ents[i] = subtract( *exchange_ents[i], set_range );
6861  Range* tmp_range = new Range( set_range );
6862  exchange_sets.push_back( tmp_range );
6863  exchange_procs_sets.push_back( exchange_procs[i] );
6864  }
6865 
6866  if( dim == 2 )
6867  {
6868  // Exchange entities first
6869  result = exchange_owned_mesh( exchange_procs, exchange_ents, recvReqs, recvRemotehReqs, true,
6870  store_remote_handles, wait_all, migrate );MB_CHK_SET_ERR( result, "Failed to exchange owned mesh entities" );
6871 
6872  // Exchange sets
6873  result = exchange_owned_mesh( exchange_procs_sets, exchange_sets, recvReqs, recvRemotehReqs, false,
6874  store_remote_handles, wait_all, migrate );
6875  }
6876  else
6877  {
6878  // Exchange entities first
6879  result = exchange_owned_mesh( exchange_procs, exchange_ents, recv_ent_reqs, recv_remoteh_reqs, false,
6880  store_remote_handles, wait_all, migrate );MB_CHK_SET_ERR( result, "Failed to exchange owned mesh entities" );
6881 
6882  // Exchange sets
6883  result = exchange_owned_mesh( exchange_procs_sets, exchange_sets, recv_ent_reqs, recv_remoteh_reqs, false,
6884  store_remote_handles, wait_all, migrate );MB_CHK_SET_ERR( result, "Failed to exchange owned mesh sets" );
6885  }
6886 
6887  for( int i = 0; i < n_proc; i++ )
6888  delete exchange_sets[i];
6889 
6890  // Build up the list of shared entities
6891  std::map< std::vector< int >, std::vector< EntityHandle > > proc_nvecs;
6892  int procs[MAX_SHARING_PROCS];
6894  int nprocs;
6895  unsigned char pstat;
6896  for( std::set< EntityHandle >::iterator vit = sharedEnts.begin(); vit != sharedEnts.end(); ++vit )
6897  {
6898  if( mbImpl->dimension_from_handle( *vit ) > 2 ) continue;
6899  result = get_sharing_data( *vit, procs, handles, pstat, nprocs );MB_CHK_SET_ERR( result, "Failed to get sharing data in exchange_owned_meshs" );
6900  std::sort( procs, procs + nprocs );
6901  std::vector< int > tmp_procs( procs, procs + nprocs );
6902  assert( tmp_procs.size() != 2 );
6903  proc_nvecs[tmp_procs].push_back( *vit );
6904  }
6905 
6906  // Create interface sets from shared entities
6907  result = create_interface_sets( proc_nvecs );MB_CHK_SET_ERR( result, "Failed to create interface sets" );
6908 
6909  return MB_SUCCESS;
6910 }

References create_interface_sets(), dim, moab::Interface::dimension_from_handle(), ErrorCode, exchange_owned_mesh(), get_sharing_data(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, MBENTITYSET, mbImpl, recvRemotehReqs, recvReqs, sharedEnts, moab::Range::subset_by_type(), and moab::subtract().

◆ exchange_tags() [1/3]

ErrorCode moab::ParallelComm::exchange_tags ( const char *  tag_name,
const Range &  entities 
)
inline

Exchange tags for all shared and ghosted entities. This function should be called collectively over the communicator for this ParallelComm. If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities, this function must still be called since it is collective.

Parameters
tag_name: Name of tag to be exchanged
entities: Entities for which tags are exchanged
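
A minimal sketch, assuming a tag named "TEMPERATURE" (a placeholder name) already exists on the owned entities:

#include "moab/ParallelComm.hpp"
#include "moab/Range.hpp"

// Hedged sketch: push values of the "TEMPERATURE" tag from owners onto the
// shared/ghosted copies. An empty range means all shared entities participate;
// every rank must make this call.
moab::ErrorCode sync_temperature( moab::ParallelComm& pcomm )
{
    moab::Range ents;  // left empty: exchange on all shared entities
    return pcomm.exchange_tags( "TEMPERATURE", ents );
}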

Definition at line 1589 of file ParallelComm.hpp.

1590 {
1591  // get the tag handle
1592  std::vector< Tag > tags( 1 );
1593  ErrorCode result = mbImpl->tag_get_handle( tag_name, 0, MB_TYPE_OPAQUE, tags[0], MB_TAG_ANY );
1594  if( MB_SUCCESS != result )
1595  return result;
1596  else if( !tags[0] )
1597  return MB_TAG_NOT_FOUND;
1598 
1599  return exchange_tags( tags, tags, entities );
1600 }

References entities, ErrorCode, exchange_tags(), MB_SUCCESS, MB_TAG_ANY, MB_TAG_NOT_FOUND, MB_TYPE_OPAQUE, mbImpl, and moab::Interface::tag_get_handle().

◆ exchange_tags() [2/3]

ErrorCode moab::ParallelComm::exchange_tags ( const std::vector< Tag > &  src_tags,
const std::vector< Tag > &  dst_tags,
const Range &  entities 
)

Exchange tags for all shared and ghosted entities. This function should be called collectively over the communicator for this ParallelComm. If this version is called, all ghosted/shared entities should have a value for these tags (or the tags should have a default value). If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities, this function must still be called since it is collective.

Parameters
src_tags: Vector of tag handles to be exchanged
dst_tags: Tag handles to store the tags on the non-owning procs
entities: Entities for which tags are exchanged
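
A minimal sketch of the src/dst form, which pushes values of src_tag on owned entities into dst_tag on the shared and ghosted copies (and also copies locally on owned entities when the tags differ); src_tag and dst_tag are assumed to be previously created tags of the same size.

#include "moab/ParallelComm.hpp"
#include "moab/Range.hpp"
#include <vector>

// Hedged sketch: copy src_tag values on owners into dst_tag on the non-owned copies.
moab::ErrorCode push_tag_copy( moab::ParallelComm& pcomm, moab::Tag src_tag, moab::Tag dst_tag )
{
    std::vector< moab::Tag > src_tags( 1, src_tag ), dst_tags( 1, dst_tag );
    moab::Range ents;  // empty: all shared entities participate
    return pcomm.exchange_tags( src_tags, dst_tags, ents );
}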

Definition at line 7526 of file ParallelComm.cpp.

7529 {
7530  ErrorCode result;
7531  int success;
7532 
7533  myDebug->tprintf( 1, "Entering exchange_tags\n" );
7534 
7535  // Get all procs interfacing to this proc
7536  std::set< unsigned int > exch_procs;
7537  result = get_comm_procs( exch_procs );
7538 
7539  // Post ghost irecv's for all interface procs
7540  // Index requests the same as buffer/sharing procs indices
7541  std::vector< MPI_Request > recv_tag_reqs( 3 * buffProcs.size(), MPI_REQUEST_NULL );
7542  // sent_ack_reqs(buffProcs.size(), MPI_REQUEST_NULL);
7543  std::vector< unsigned int >::iterator sit;
7544  int ind;
7545 
7547  int incoming = 0;
7548 
7549  for( ind = 0, sit = buffProcs.begin(); sit != buffProcs.end(); ++sit, ind++ )
7550  {
7551  incoming++;
7553  MB_MESG_TAGS_SIZE, incoming );
7554 
7555  success = MPI_Irecv( remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, *sit,
7556  MB_MESG_TAGS_SIZE, procConfig.proc_comm(), &recv_tag_reqs[3 * ind] );
7557  if( success != MPI_SUCCESS )
7558  {
7559  MB_SET_ERR( MB_FAILURE, "Failed to post irecv in ghost exchange" );
7560  }
7561  }
7562 
7563  // Pack and send tags from this proc to others
7564  // Make sendReqs vector to simplify initialization
7565  sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
7566 
7567  // Take all shared entities if incoming list is empty
7568  Range entities;
7569  if( entities_in.empty() )
7570  std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( entities ) );
7571  else
7572  entities = entities_in;
7573 
7574  int dum_ack_buff;
7575 
7576  for( ind = 0, sit = buffProcs.begin(); sit != buffProcs.end(); ++sit, ind++ )
7577  {
7578  Range tag_ents = entities;
7579 
7580  // Get ents shared by proc *sit
7581  result = filter_pstatus( tag_ents, PSTATUS_SHARED, PSTATUS_AND, *sit );MB_CHK_SET_ERR( result, "Failed pstatus AND check" );
7582 
7583  // Remote nonowned entities
7584  if( !tag_ents.empty() )
7585  {
7586  result = filter_pstatus( tag_ents, PSTATUS_NOT_OWNED, PSTATUS_NOT );MB_CHK_SET_ERR( result, "Failed pstatus NOT check" );
7587  }
7588 
7589  // Pack-send; this also posts receives if store_remote_handles is true
7590  std::vector< Range > tag_ranges;
7591  for( std::vector< Tag >::const_iterator vit = src_tags.begin(); vit != src_tags.end(); ++vit )
7592  {
7593  const void* ptr;
7594  int sz;
7595  if( mbImpl->tag_get_default_value( *vit, ptr, sz ) != MB_SUCCESS )
7596  {
7597  Range tagged_ents;
7598  mbImpl->get_entities_by_type_and_tag( 0, MBMAXTYPE, &*vit, 0, 1, tagged_ents );
7599  tag_ranges.push_back( intersect( tag_ents, tagged_ents ) );
7600  }
7601  else
7602  {
7603  tag_ranges.push_back( tag_ents );
7604  }
7605  }
7606 
7607  // Pack the data
7608  // Reserve space on front for size and for initial buff size
7609  localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
7610 
7611  result = pack_tags( tag_ents, src_tags, dst_tags, tag_ranges, localOwnedBuffs[ind], true, *sit );MB_CHK_SET_ERR( result, "Failed to count buffer in pack_send_tag" );
7612 
7613  // Now send it
7614  result = send_buffer( *sit, localOwnedBuffs[ind], MB_MESG_TAGS_SIZE, sendReqs[3 * ind],
7615  recv_tag_reqs[3 * ind + 2], &dum_ack_buff, incoming );MB_CHK_SET_ERR( result, "Failed to send buffer" );
7616  }
7617 
7618  // Receive/unpack tags
7619  while( incoming )
7620  {
7621  MPI_Status status;
7622  int index_in_recv_requests;
7624  success = MPI_Waitany( 3 * buffProcs.size(), &recv_tag_reqs[0], &index_in_recv_requests, &status );
7625  if( MPI_SUCCESS != success )
7626  {
7627  MB_SET_ERR( MB_FAILURE, "Failed in waitany in tag exchange" );
7628  }
7629  // Processor index in the list is divided by 3
7630  ind = index_in_recv_requests / 3;
7631 
7632  PRINT_DEBUG_RECD( status );
7633 
7634  // OK, received something; decrement incoming counter
7635  incoming--;
7636 
7637  bool done = false;
7638  std::vector< EntityHandle > dum_vec;
7639  result = recv_buffer( MB_MESG_TAGS_SIZE, status, remoteOwnedBuffs[ind],
7640  recv_tag_reqs[3 * ind + 1], // This is for receiving the second message
7641  recv_tag_reqs[3 * ind + 2], // This would be for ack, but it is not
7642  // used; consider removing it
7643  incoming, localOwnedBuffs[ind],
7644  sendReqs[3 * ind + 1], // Send request for sending the second message
7645  sendReqs[3 * ind + 2], // This is for sending the ack
7646  done );MB_CHK_SET_ERR( result, "Failed to resize recv buffer" );
7647  if( done )
7648  {
7649  remoteOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
7650  result = unpack_tags( remoteOwnedBuffs[ind]->buff_ptr, dum_vec, true, buffProcs[ind] );MB_CHK_SET_ERR( result, "Failed to recv-unpack-tag message" );
7651  }
7652  }
7653 
7654  // OK, now wait
7655  if( myDebug->get_verbosity() == 5 )
7656  {
7657  success = MPI_Barrier( procConfig.proc_comm() );
7658  }
7659  else
7660  {
7661  MPI_Status status[3 * MAX_SHARING_PROCS];
7662  success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], status );
7663  }
7664  if( MPI_SUCCESS != success )
7665  {
7666  MB_SET_ERR( MB_FAILURE, "Failure in waitall in tag exchange" );
7667  }
7668 
7669  // If source tag is not equal to destination tag, then
7670  // do local copy for owned entities (communicate w/ self)
7671  assert( src_tags.size() == dst_tags.size() );
7672  if( src_tags != dst_tags )
7673  {
7674  std::vector< unsigned char > data;
7675  Range owned_ents;
7676  if( entities_in.empty() )
7677  std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( entities ) );
7678  else
7679  owned_ents = entities_in;
7680  result = filter_pstatus( owned_ents, PSTATUS_NOT_OWNED, PSTATUS_NOT );MB_CHK_SET_ERR( result, "Failure to get subset of owned entities" );
7681 
7682  if( !owned_ents.empty() )
7683  { // Check this here, otherwise we get
7684  // Unexpected results from get_entities_by_type_and_tag w/ Interface::INTERSECT
7685  for( size_t i = 0; i < src_tags.size(); i++ )
7686  {
7687  if( src_tags[i] == dst_tags[i] ) continue;
7688 
7689  Range tagged_ents( owned_ents );
7690  result = mbImpl->get_entities_by_type_and_tag( 0, MBMAXTYPE, &src_tags[0], 0, 1, tagged_ents,
7691  Interface::INTERSECT );MB_CHK_SET_ERR( result, "get_entities_by_type_and_tag(type == MBMAXTYPE) failed" );
7692 
7693  int sz, size2;
7694  result = mbImpl->tag_get_bytes( src_tags[i], sz );MB_CHK_SET_ERR( result, "tag_get_size failed" );
7695  result = mbImpl->tag_get_bytes( dst_tags[i], size2 );MB_CHK_SET_ERR( result, "tag_get_size failed" );
7696  if( sz != size2 )
7697  {
7698  MB_SET_ERR( MB_FAILURE, "tag sizes don't match" );
7699  }
7700 
7701  data.resize( sz * tagged_ents.size() );
7702  result = mbImpl->tag_get_data( src_tags[i], tagged_ents, &data[0] );MB_CHK_SET_ERR( result, "tag_get_data failed" );
7703  result = mbImpl->tag_set_data( dst_tags[i], tagged_ents, &data[0] );MB_CHK_SET_ERR( result, "tag_set_data failed" );
7704  }
7705  }
7706  }
7707 
7708  myDebug->tprintf( 1, "Exiting exchange_tags" );
7709 
7710  return MB_SUCCESS;
7711 }

References buffProcs, moab::Range::empty(), entities, ErrorCode, filter_pstatus(), get_comm_procs(), moab::Interface::get_entities_by_type_and_tag(), moab::DebugOutput::get_verbosity(), INITIAL_BUFF_SIZE, moab::intersect(), moab::Interface::INTERSECT, localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, moab::MB_MESG_TAGS_SIZE, MB_SET_ERR, MB_SUCCESS, mbImpl, MBMAXTYPE, myDebug, pack_tags(), PRINT_DEBUG_IRECV, PRINT_DEBUG_RECD, PRINT_DEBUG_WAITANY, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, PSTATUS_AND, PSTATUS_NOT, PSTATUS_NOT_OWNED, PSTATUS_SHARED, recv_buffer(), remoteOwnedBuffs, reset_all_buffers(), send_buffer(), sendReqs, sharedEnts, moab::Range::size(), moab::Interface::tag_get_bytes(), moab::Interface::tag_get_data(), moab::Interface::tag_get_default_value(), moab::Interface::tag_set_data(), moab::DebugOutput::tprintf(), and unpack_tags().

Referenced by assign_global_ids(), moab::WriteHDF5Parallel::exchange_file_ids(), moab::NestedRefine::exchange_ghosts(), exchange_tags(), iMOAB_SynchronizeTags(), main(), perform_laplacian_smoothing(), perform_lloyd_relaxation(), and moab::LloydSmoother::perform_smooth().

◆ exchange_tags() [3/3]

ErrorCode moab::ParallelComm::exchange_tags ( Tag  tagh,
const Range &  entities 
)
inline

Exchange tags for all shared and ghosted entities. This function should be called collectively over the communicator for this ParallelComm. If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities, this function must still be called since it is collective.

Parameters
tagh: Handle of tag to be exchanged
entities: Entities for which tags are exchanged

Definition at line 1602 of file ParallelComm.hpp.

1603 {
1604  // get the tag handle
1605  std::vector< Tag > tags;
1606  tags.push_back( tagh );
1607 
1608  return exchange_tags( tags, tags, entities );
1609 }

References entities, and exchange_tags().

◆ filter_pstatus()

ErrorCode moab::ParallelComm::filter_pstatus ( Range &  ents,
const unsigned char  pstatus_val,
const unsigned char  op,
int  to_proc = -1,
Range *  returned_ents = NULL 
)

Filter the entities by pstatus tag. op is one of PSTATUS_AND, PSTATUS_OR, or PSTATUS_NOT; an entity is output if:
AND: all bits set in pstatus_val are also set on the entity
OR: any bit set in pstatus_val is also set on the entity
NOT: none of the bits set in pstatus_val are set on the entity

Results returned in input list, unless result_ents is passed in non-null, in which case results are returned in result_ents.

If ents is passed in empty, the function simply clears returned_ents (if provided) and returns MB_SUCCESS; see the definition below.

Parameters
    ents           Input entities to filter
    pstatus_val    pstatus value to which entities are compared
    op             Bitwise operation performed between pstatus values
    to_proc        If non-negative and PSTATUS_SHARED is set on pstatus_val, only entities shared with to_proc are returned
    returned_ents  If non-null, results of the filter are put in the pointed-to range
Examples
LaplacianSmoother.cpp.

Definition at line 5577 of file ParallelComm.cpp.

5582 {
5583  Range tmp_ents;
5584 
5585  // assert(!ents.empty());
5586  if( ents.empty() )
5587  {
5588  if( returned_ents ) returned_ents->clear();
5589  return MB_SUCCESS;
5590  }
5591 
5592  // Put into tmp_ents any entities which are not owned locally or
5593  // who are already shared with to_proc
5594  std::vector< unsigned char > shared_flags( ents.size() ), shared_flags2;
5595  ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), ents, &shared_flags[0] );MB_CHK_SET_ERR( result, "Failed to get pstatus flag" );
5596  Range::const_iterator rit, hint = tmp_ents.begin();
5597  ;
5598  int i;
5599  if( op == PSTATUS_OR )
5600  {
5601  for( rit = ents.begin(), i = 0; rit != ents.end(); ++rit, i++ )
5602  {
5603  if( ( ( shared_flags[i] & ~pstat ) ^ shared_flags[i] ) & pstat )
5604  {
5605  hint = tmp_ents.insert( hint, *rit );
5606  if( -1 != to_proc ) shared_flags2.push_back( shared_flags[i] );
5607  }
5608  }
5609  }
5610  else if( op == PSTATUS_AND )
5611  {
5612  for( rit = ents.begin(), i = 0; rit != ents.end(); ++rit, i++ )
5613  {
5614  if( ( shared_flags[i] & pstat ) == pstat )
5615  {
5616  hint = tmp_ents.insert( hint, *rit );
5617  if( -1 != to_proc ) shared_flags2.push_back( shared_flags[i] );
5618  }
5619  }
5620  }
5621  else if( op == PSTATUS_NOT )
5622  {
5623  for( rit = ents.begin(), i = 0; rit != ents.end(); ++rit, i++ )
5624  {
5625  if( !( shared_flags[i] & pstat ) )
5626  {
5627  hint = tmp_ents.insert( hint, *rit );
5628  if( -1 != to_proc ) shared_flags2.push_back( shared_flags[i] );
5629  }
5630  }
5631  }
5632  else
5633  {
5634  assert( false );
5635  return MB_FAILURE;
5636  }
5637 
5638  if( -1 != to_proc )
5639  {
5640  int sharing_procs[MAX_SHARING_PROCS];
5641  std::fill( sharing_procs, sharing_procs + MAX_SHARING_PROCS, -1 );
5642  Range tmp_ents2;
5643  hint = tmp_ents2.begin();
5644 
5645  for( rit = tmp_ents.begin(), i = 0; rit != tmp_ents.end(); ++rit, i++ )
5646  {
5647  // We need to check sharing procs
5648  if( shared_flags2[i] & PSTATUS_MULTISHARED )
5649  {
5650  result = mbImpl->tag_get_data( sharedps_tag(), &( *rit ), 1, sharing_procs );MB_CHK_SET_ERR( result, "Failed to get sharedps tag" );
5651  assert( -1 != sharing_procs[0] );
5652  for( unsigned int j = 0; j < MAX_SHARING_PROCS; j++ )
5653  {
5654  // If to_proc shares this entity, add it to list
5655  if( sharing_procs[j] == to_proc )
5656  {
5657  hint = tmp_ents2.insert( hint, *rit );
5658  }
5659  else if( -1 == sharing_procs[j] )
5660  break;
5661 
5662  sharing_procs[j] = -1;
5663  }
5664  }
5665  else if( shared_flags2[i] & PSTATUS_SHARED )
5666  {
5667  result = mbImpl->tag_get_data( sharedp_tag(), &( *rit ), 1, sharing_procs );MB_CHK_SET_ERR( result, "Failed to get sharedp tag" );
5668  assert( -1 != sharing_procs[0] );
5669  if( sharing_procs[0] == to_proc ) hint = tmp_ents2.insert( hint, *rit );
5670  sharing_procs[0] = -1;
5671  }
5672  else
5673  assert( "should never get here" && false );
5674  }
5675 
5676  tmp_ents.swap( tmp_ents2 );
5677  }
5678 
5679  if( returned_ents )
5680  returned_ents->swap( tmp_ents );
5681  else
5682  ents.swap( tmp_ents );
5683 
5684  return MB_SUCCESS;
5685 }

References moab::Range::begin(), moab::Range::clear(), moab::Range::empty(), moab::Range::end(), ErrorCode, moab::Range::insert(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, PSTATUS_AND, PSTATUS_MULTISHARED, PSTATUS_NOT, PSTATUS_OR, PSTATUS_SHARED, pstatus_tag(), sharedp_tag(), sharedps_tag(), moab::Range::size(), moab::Range::swap(), and moab::Interface::tag_get_data().

Referenced by moab::NCWriteGCRM::collect_mesh_info(), moab::ScdNCWriteHelper::collect_mesh_info(), moab::NCWriteHOMME::collect_mesh_info(), moab::NCWriteMPAS::collect_mesh_info(), moab::ScdNCHelper::create_quad_coordinate_tag(), moab::WriteHDF5Parallel::exchange_file_ids(), exchange_owned_mesh(), exchange_tags(), moab::WriteHDF5Parallel::gather_interface_meshes(), get_ghosted_entities(), get_max_volume(), get_sent_ents(), get_shared_entities(), hcFilter(), iMOAB_UpdateMeshInfo(), moab::HalfFacetRep::initialize(), moab::LloydSmoother::initialize(), moab::HiReconstruction::initialize(), laplacianFilter(), moab::ReadParallel::load_file(), main(), perform_laplacian_smoothing(), perform_lloyd_relaxation(), moab::LloydSmoother::perform_smooth(), moab::ScdNCHelper::read_scd_variables_to_nonset_allocate(), moab::NCHelperGCRM::read_ucd_variables_to_nonset_allocate(), moab::NCHelperMPAS::read_ucd_variables_to_nonset_allocate(), reduce_tags(), resolve_shared_sets(), send_entities(), and settle_intersection_points().
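
Usage sketch (illustrative): pcomm is a ParallelComm, ents a Range of shared entities, and nbr_rank an assumed neighbor rank. The first call mirrors the owned-entity filter used in get_ghosted_entities(); the second leaves ents untouched and writes the result into a separate range.

    // Keep only the entities this rank owns (drop anything with PSTATUS_NOT_OWNED set).
    Range owned( ents );
    ErrorCode rval = pcomm.filter_pstatus( owned, PSTATUS_NOT_OWNED, PSTATUS_NOT );MB_CHK_SET_ERR( rval, "Failed to filter owned entities" );

    // Collect the entities shared with rank nbr_rank into shared_with_nbr.
    Range shared_with_nbr;
    rval = pcomm.filter_pstatus( ents, PSTATUS_SHARED, PSTATUS_AND, nbr_rank, &shared_with_nbr );MB_CHK_SET_ERR( rval, "Failed to filter shared entities" );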

◆ find_existing_entity()

ErrorCode moab::ParallelComm::find_existing_entity ( const bool  is_iface,
const int  owner_p,
const EntityHandle  owner_h,
const int  num_ents,
const EntityHandle *  connect,
const int  num_connect,
const EntityType  this_type,
std::vector< EntityHandle > &  L2hloc,
std::vector< EntityHandle > &  L2hrem,
std::vector< unsigned int > &  L2p,
EntityHandle &  new_h 
)
private

given connectivity and type, find an existing entity, if there is one

Definition at line 3047 of file ParallelComm.cpp.

3058 {
3059  new_h = 0;
3060  if( !is_iface && num_ps > 2 )
3061  {
3062  for( unsigned int i = 0; i < L2hrem.size(); i++ )
3063  {
3064  if( L2hrem[i] == owner_h && owner_p == (int)L2p[i] )
3065  {
3066  new_h = L2hloc[i];
3067  return MB_SUCCESS;
3068  }
3069  }
3070  }
3071 
3072  // If we got here and it's a vertex, we don't need to look further
3073  if( MBVERTEX == this_type || !connect || !num_connect ) return MB_SUCCESS;
3074 
3075  Range tmp_range;
3076  ErrorCode result = mbImpl->get_adjacencies( connect, num_connect, CN::Dimension( this_type ), false, tmp_range );MB_CHK_SET_ERR( result, "Failed to get existing entity" );
3077  if( !tmp_range.empty() )
3078  {
3079  // Found a corresponding entity - return target
3080  new_h = *tmp_range.begin();
3081  }
3082  else
3083  {
3084  new_h = 0;
3085  }
3086 
3087  return MB_SUCCESS;
3088 }

References moab::Range::begin(), moab::CN::Dimension(), moab::Range::empty(), ErrorCode, moab::Interface::get_adjacencies(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, and MBVERTEX.

Referenced by unpack_entities(), and unpack_remote_handles().

◆ gather_data()

ErrorCode moab::ParallelComm::gather_data ( Range &  gather_ents,
Tag &  tag_handle,
Tag  id_tag = 0,
EntityHandle  gather_set = 0,
int  root_proc_rank = 0 
)

Definition at line 8914 of file ParallelComm.cpp.

8919 {
8920  int dim = mbImpl->dimension_from_handle( *gather_ents.begin() );
8921  int bytes_per_tag = 0;
8922  ErrorCode rval = mbImpl->tag_get_bytes( tag_handle, bytes_per_tag );
8923  if( rval != MB_SUCCESS ) return rval;
8924 
8925  int sz_buffer = sizeof( int ) + gather_ents.size() * ( sizeof( int ) + bytes_per_tag );
8926  void* senddata = malloc( sz_buffer );
8927  ( (int*)senddata )[0] = (int)gather_ents.size();
8928  int* ptr_int = (int*)senddata + 1;
8929  rval = mbImpl->tag_get_data( id_tag, gather_ents, (void*)ptr_int );
8930  if( rval != MB_SUCCESS ) return rval;
8931  ptr_int = (int*)( senddata ) + 1 + gather_ents.size();
8932  rval = mbImpl->tag_get_data( tag_handle, gather_ents, (void*)ptr_int );
8933  if( rval != MB_SUCCESS ) return rval;
8934  std::vector< int > displs( proc_config().proc_size(), 0 );
8935  MPI_Gather( &sz_buffer, 1, MPI_INT, &displs[0], 1, MPI_INT, root_proc_rank, comm() );
8936  std::vector< int > recvcnts( proc_config().proc_size(), 0 );
8937  std::copy( displs.begin(), displs.end(), recvcnts.begin() );
8938  std::partial_sum( displs.begin(), displs.end(), displs.begin() );
8939  std::vector< int >::iterator lastM1 = displs.end() - 1;
8940  std::copy_backward( displs.begin(), lastM1, displs.end() );
8941  // std::copy_backward(displs.begin(), --displs.end(), displs.end());
8942  displs[0] = 0;
8943 
8944  if( (int)rank() != root_proc_rank )
8945  MPI_Gatherv( senddata, sz_buffer, MPI_BYTE, NULL, NULL, NULL, MPI_BYTE, root_proc_rank, comm() );
8946  else
8947  {
8948  Range gents;
8949  mbImpl->get_entities_by_dimension( gather_set, dim, gents );
8950  int recvbuffsz = gents.size() * ( bytes_per_tag + sizeof( int ) ) + proc_config().proc_size() * sizeof( int );
8951  void* recvbuf = malloc( recvbuffsz );
8952  MPI_Gatherv( senddata, sz_buffer, MPI_BYTE, recvbuf, &recvcnts[0], &displs[0], MPI_BYTE, root_proc_rank,
8953  comm() );
8954 
8955  void* gvals = NULL;
8956 
8957  // Test whether gents has multiple sequences
8958  bool multiple_sequences = false;
8959  if( gents.psize() > 1 )
8960  multiple_sequences = true;
8961  else
8962  {
8963  int count;
8964  rval = mbImpl->tag_iterate( tag_handle, gents.begin(), gents.end(), count, gvals );
8965  assert( NULL != gvals );
8966  assert( count > 0 );
8967  if( (size_t)count != gents.size() )
8968  {
8969  multiple_sequences = true;
8970  gvals = NULL;
8971  }
8972  }
8973 
8974  // If gents has multiple sequences, create a temp buffer for gathered values
8975  if( multiple_sequences )
8976  {
8977  gvals = malloc( gents.size() * bytes_per_tag );
8978  assert( NULL != gvals );
8979  }
8980 
8981  for( int i = 0; i != (int)size(); i++ )
8982  {
8983  int numents = *(int*)( ( (char*)recvbuf ) + displs[i] );
8984  int* id_ptr = (int*)( ( (char*)recvbuf ) + displs[i] + sizeof( int ) );
8985  char* val_ptr = (char*)( id_ptr + numents );
8986  for( int j = 0; j != numents; j++ )
8987  {
8988  int idx = id_ptr[j];
8989  memcpy( (char*)gvals + ( idx - 1 ) * bytes_per_tag, val_ptr + j * bytes_per_tag, bytes_per_tag );
8990  }
8991  }
8992 
8993  // Free the receive buffer
8994  free( recvbuf );
8995 
8996  // If gents has multiple sequences, copy tag data (stored in the temp buffer) to each
8997  // sequence separately
8998  if( multiple_sequences )
8999  {
9000  Range::iterator iter = gents.begin();
9001  size_t start_idx = 0;
9002  while( iter != gents.end() )
9003  {
9004  int count;
9005  void* ptr;
9006  rval = mbImpl->tag_iterate( tag_handle, iter, gents.end(), count, ptr );
9007  assert( NULL != ptr );
9008  assert( count > 0 );
9009  memcpy( (char*)ptr, (char*)gvals + start_idx * bytes_per_tag, bytes_per_tag * count );
9010 
9011  iter += count;
9012  start_idx += count;
9013  }
9014  assert( start_idx == gents.size() );
9015 
9016  // Free the temp buffer
9017  free( gvals );
9018  }
9019  }
9020 
9021  // Free the send data
9022  free( senddata );
9023 
9024  return MB_SUCCESS;
9025 }

References moab::Range::begin(), comm(), dim, moab::Interface::dimension_from_handle(), moab::Range::end(), ErrorCode, moab::Interface::get_entities_by_dimension(), MB_SUCCESS, mbImpl, proc_config(), moab::ProcConfig::proc_size(), moab::Range::psize(), rank(), moab::Range::size(), size(), moab::Interface::tag_get_bytes(), moab::Interface::tag_get_data(), and moab::Interface::tag_iterate().
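
Judging from the definition above, gather_data() is collective: every rank contributes the tag_handle values attached to gather_ents, and root_proc_rank writes them onto the entities of gather_set, ordered by the 1-based ids carried in id_tag. A minimal calling sketch (my_ents, data_tag, gid_tag and gather_set are assumed to exist already; gather_set only needs to be populated on the root rank):

    // Collective over pcomm's communicator; gid_tag would typically be GLOBAL_ID.
    ErrorCode rval = pcomm.gather_data( my_ents, data_tag, gid_tag, gather_set, 0 );MB_CHK_SET_ERR( rval, "gather_data failed" );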

◆ get_all_pcomm()

ErrorCode moab::ParallelComm::get_all_pcomm ( Interface impl,
std::vector< ParallelComm * > &  list 
)
static

Definition at line 8023 of file ParallelComm.cpp.

8024 {
8025  Tag pc_tag = pcomm_tag( impl, false );
8026  if( 0 == pc_tag ) return MB_TAG_NOT_FOUND;
8027 
8028  const EntityHandle root = 0;
8029  ParallelComm* pc_array[MAX_SHARING_PROCS];
8030  ErrorCode rval = impl->tag_get_data( pc_tag, &root, 1, pc_array );
8031  if( MB_SUCCESS != rval ) return rval;
8032 
8033  for( int i = 0; i < MAX_SHARING_PROCS; i++ )
8034  {
8035  if( pc_array[i] ) list.push_back( pc_array[i] );
8036  }
8037 
8038  return MB_SUCCESS;
8039 }

References ErrorCode, MAX_SHARING_PROCS, MB_SUCCESS, MB_TAG_NOT_FOUND, pcomm_tag(), and moab::Interface::tag_get_data().

Referenced by moab::Core::deinitialize().
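
Usage sketch (illustrative): enumerate the ParallelComm instances registered on an Interface, e.g. to find one by its id; mb is an assumed moab::Core and desired_id an assumed lookup key.

    std::vector< ParallelComm* > pcomms;
    ErrorCode rval = ParallelComm::get_all_pcomm( &mb, pcomms );
    if( MB_SUCCESS == rval )
    {
        for( size_t i = 0; i < pcomms.size(); i++ )
        {
            if( pcomms[i]->get_id() == desired_id )
            { /* use pcomms[i] */ }
        }
    }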

◆ get_buffers()

int moab::ParallelComm::get_buffers ( int  to_proc,
bool *  is_new = NULL 
)

get (and possibly allocate) buffers for messages to/from to_proc; returns the index of to_proc in the buffProcs vector. If is_new is non-NULL, it is set to whether a new buffer was allocated. PUBLIC ONLY FOR TESTING!

Definition at line 514 of file ParallelComm.cpp.

515 {
516  int ind = -1;
517  std::vector< unsigned int >::iterator vit = std::find( buffProcs.begin(), buffProcs.end(), to_proc );
518  if( vit == buffProcs.end() )
519  {
520  assert( "shouldn't need buffer to myself" && to_proc != (int)procConfig.proc_rank() );
521  ind = buffProcs.size();
522  buffProcs.push_back( (unsigned int)to_proc );
523  localOwnedBuffs.push_back( new Buffer( INITIAL_BUFF_SIZE ) );
524  remoteOwnedBuffs.push_back( new Buffer( INITIAL_BUFF_SIZE ) );
525  if( is_new ) *is_new = true;
526  }
527  else
528  {
529  ind = vit - buffProcs.begin();
530  if( is_new ) *is_new = false;
531  }
532  assert( ind < MAX_SHARING_PROCS );
533  return ind;
534 }

References buffProcs, INITIAL_BUFF_SIZE, localOwnedBuffs, MAX_SHARING_PROCS, moab::ProcConfig::proc_rank(), procConfig, and remoteOwnedBuffs.

Referenced by check_all_shared_handles(), correct_thin_ghost_layers(), exchange_owned_mesh(), get_interface_procs(), pack_shared_handles(), post_irecv(), recv_entities(), recv_messages(), recv_remote_handle_messages(), send_entities(), send_recv_entities(), moab::ScdInterface::tag_shared_vertices(), and unpack_entities().

◆ get_comm_procs()

ErrorCode moab::ParallelComm::get_comm_procs ( std::set< unsigned int > &  procs)
inline

get processors with which this processor communicates

Definition at line 1633 of file ParallelComm.hpp.

1634 {
1635  ErrorCode result = get_interface_procs( procs );
1636  if( MB_SUCCESS != result ) return result;
1637 
1638  std::copy( buffProcs.begin(), buffProcs.end(), std::inserter( procs, procs.begin() ) );
1639 
1640  return MB_SUCCESS;
1641 }

References buffProcs, ErrorCode, get_interface_procs(), and MB_SUCCESS.

Referenced by exchange_tags(), reduce_tags(), and settle_intersection_points().
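
Usage sketch (illustrative): collect the ranks this rank exchanges data with, i.e. the interface neighbors plus any ranks already present in buffProcs.

    std::set< unsigned int > comm_procs;
    ErrorCode rval = pcomm.get_comm_procs( comm_procs );MB_CHK_SET_ERR( rval, "Failed to get comm procs" );
    const int num_neighbors = (int)comm_procs.size();  // e.g. for sizing per-neighbor buffers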

◆ get_debug_verbosity()

int moab::ParallelComm::get_debug_verbosity ( )

get the verbosity level of output from this pcomm

Definition at line 8872 of file ParallelComm.cpp.

8873 {
8874  return myDebug->get_verbosity();
8875 }

References moab::DebugOutput::get_verbosity(), and myDebug.

Referenced by augment_default_sets_with_ghosts(), moab::ScdInterface::construct_box(), and moab::ScdInterface::tag_shared_vertices().

◆ get_entityset_local_handle()

ErrorCode moab::ParallelComm::get_entityset_local_handle ( unsigned  owning_rank,
EntityHandle  remote_handle,
EntityHandle &  local_handle 
) const

Given set owner and handle on owner, find local set handle.

Definition at line 8892 of file ParallelComm.cpp.

8895 {
8896  return sharedSetData->get_local_handle( owning_rank, remote_handle, local_handle );
8897 }

References moab::SharedSetData::get_local_handle(), and sharedSetData.

Referenced by moab::WriteHDF5Parallel::communicate_shared_set_ids().

◆ get_entityset_owner()

ErrorCode moab::ParallelComm::get_entityset_owner ( EntityHandle  entity_set,
unsigned &  owner_rank,
EntityHandle *  remote_handle = 0 
) const

Get rank of the owner of a shared set. Returns this proc if set is not shared. Optionally returns handle on owning process for shared set.

Definition at line 8882 of file ParallelComm.cpp.

8885 {
8886  if( remote_handle )
8887  return sharedSetData->get_owner( entity_set, owner_rank, *remote_handle );
8888  else
8889  return sharedSetData->get_owner( entity_set, owner_rank );
8890 }

References moab::SharedSetData::get_owner(), and sharedSetData.

Referenced by moab::WriteHDF5Parallel::communicate_shared_set_data(), moab::WriteHDF5Parallel::communicate_shared_set_ids(), and moab::WriteHDF5Parallel::print_set_sharing_data().
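
Usage sketch (illustrative): eset is an assumed handle of a shared entity set; the snippet queries which rank owns it and, on the owner, which ranks share it (see get_entityset_procs() below).

    unsigned owner_rank;
    EntityHandle handle_on_owner = 0;
    ErrorCode rval = pcomm.get_entityset_owner( eset, owner_rank, &handle_on_owner );MB_CHK_SET_ERR( rval, "Failed to get set owner" );

    if( owner_rank == pcomm.proc_config().proc_rank() )
    {
        std::vector< unsigned > sharing_ranks;
        rval = pcomm.get_entityset_procs( eset, sharing_ranks );MB_CHK_SET_ERR( rval, "Failed to get sharing ranks" );
    }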

◆ get_entityset_owners()

ErrorCode moab::ParallelComm::get_entityset_owners ( std::vector< unsigned > &  ranks) const

Get ranks of all processes that own at least one set that is shared with this process. Will include the rank of this process if this process owns any shared set.

Definition at line 8904 of file ParallelComm.cpp.

8905 {
8906  return sharedSetData->get_owning_procs( ranks );
8907 }

References moab::SharedSetData::get_owning_procs(), and sharedSetData.

Referenced by moab::WriteHDF5Parallel::communicate_shared_set_ids().

◆ get_entityset_procs()

ErrorCode moab::ParallelComm::get_entityset_procs ( EntityHandle  entity_set,
std::vector< unsigned > &  ranks 
) const

Get the ranks of processes sharing a set. If the set is not shared, no sharing ranks are passed back.

Definition at line 8877 of file ParallelComm.cpp.

8878 {
8879  return sharedSetData->get_sharing_procs( set, ranks );
8880 }

References moab::SharedSetData::get_sharing_procs(), and sharedSetData.

Referenced by moab::WriteHDF5Parallel::communicate_shared_set_data(), moab::WriteHDF5Parallel::communicate_shared_set_ids(), and moab::WriteHDF5Parallel::print_set_sharing_data().

◆ get_ghosted_entities()

ErrorCode moab::ParallelComm::get_ghosted_entities ( int  bridge_dim,
int  ghost_dim,
int  to_proc,
int  num_layers,
int  addl_ents,
Range &  ghosted_ents 
)
private

for specified bridge/ghost dimension, to_proc, and number of layers, get the entities to be ghosted, and info on additional procs needing to communicate with to_proc

Definition at line 7437 of file ParallelComm.cpp.

7443 {
7444  // Get bridge ents on interface(s)
7445  Range from_ents;
7446  ErrorCode result = MB_SUCCESS;
7447  assert( 0 < num_layers );
7448  for( Range::iterator rit = interfaceSets.begin(); rit != interfaceSets.end(); ++rit )
7449  {
7450  if( !is_iface_proc( *rit, to_proc ) ) continue;
7451 
7452  // Get starting "from" entities
7453  if( bridge_dim == -1 )
7454  {
7455  result = mbImpl->get_entities_by_handle( *rit, from_ents );MB_CHK_SET_ERR( result, "Failed to get bridge ents in the set" );
7456  }
7457  else
7458  {
7459  result = mbImpl->get_entities_by_dimension( *rit, bridge_dim, from_ents );MB_CHK_SET_ERR( result, "Failed to get bridge ents in the set" );
7460  }
7461 
7462  // Need to get layers of bridge-adj entities
7463  if( from_ents.empty() ) continue;
7464  result =
7465  MeshTopoUtil( mbImpl ).get_bridge_adjacencies( from_ents, bridge_dim, ghost_dim, ghosted_ents, num_layers );MB_CHK_SET_ERR( result, "Failed to get bridge adjacencies" );
7466  }
7467 
7468  result = add_verts( ghosted_ents );MB_CHK_SET_ERR( result, "Failed to add verts" );
7469 
7470  if( addl_ents )
7471  {
7472  // First get the ents of ghost_dim
7473  Range tmp_ents, tmp_owned, tmp_notowned;
7474  tmp_owned = ghosted_ents.subset_by_dimension( ghost_dim );
7475  if( tmp_owned.empty() ) return result;
7476 
7477  tmp_notowned = tmp_owned;
7478 
7479  // Next, filter by pstatus; can only create adj entities for entities I own
7480  result = filter_pstatus( tmp_owned, PSTATUS_NOT_OWNED, PSTATUS_NOT, -1, &tmp_owned );MB_CHK_SET_ERR( result, "Failed to filter owned entities" );
7481 
7482  tmp_notowned -= tmp_owned;
7483 
7484  // Get edges first
7485  if( 1 == addl_ents || 3 == addl_ents )
7486  {
7487  result = mbImpl->get_adjacencies( tmp_owned, 1, true, tmp_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get edge adjacencies for owned ghost entities" );
7488  result = mbImpl->get_adjacencies( tmp_notowned, 1, false, tmp_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get edge adjacencies for notowned ghost entities" );
7489  }
7490  if( 2 == addl_ents || 3 == addl_ents )
7491  {
7492  result = mbImpl->get_adjacencies( tmp_owned, 2, true, tmp_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get face adjacencies for owned ghost entities" );
7493  result = mbImpl->get_adjacencies( tmp_notowned, 2, false, tmp_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get face adjacencies for notowned ghost entities" );
7494  }
7495 
7496  ghosted_ents.merge( tmp_ents );
7497  }
7498 
7499  return result;
7500 }

References add_verts(), moab::Range::begin(), moab::Range::empty(), moab::Range::end(), ErrorCode, filter_pstatus(), moab::Interface::get_adjacencies(), moab::MeshTopoUtil::get_bridge_adjacencies(), moab::Interface::get_entities_by_dimension(), moab::Interface::get_entities_by_handle(), interfaceSets, is_iface_proc(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, moab::Range::merge(), PSTATUS_NOT, PSTATUS_NOT_OWNED, moab::Range::subset_by_dimension(), and moab::Interface::UNION.

Referenced by get_sent_ents().

◆ get_global_part_count()

ErrorCode moab::ParallelComm::get_global_part_count ( int &  count_out) const

Definition at line 8182 of file ParallelComm.cpp.

8183 {
8184  count_out = globalPartCount;
8185  return count_out < 0 ? MB_FAILURE : MB_SUCCESS;
8186 }

References globalPartCount, and MB_SUCCESS.

◆ get_id()

int moab::ParallelComm::get_id ( ) const
inline

Get ID used to reference this PCOMM instance.

Definition at line 70 of file ParallelComm.hpp.

71  {
72  return pcommID;
73  }

References pcommID.

Referenced by iMOAB_RegisterApplication(), and DeformMeshRemap::read_file().

◆ get_iface_entities()

ErrorCode moab::ParallelComm::get_iface_entities ( int  other_proc,
int  dim,
Range &  iface_ents 
)

Get entities on interfaces shared with another proc.

Parameters
    other_proc  Other proc sharing the interface
    dim         Dimension of entities to return, -1 if all dims
    iface_ents  Returned entities

Definition at line 7275 of file ParallelComm.cpp.

7276 {
7277  Range iface_sets;
7278  ErrorCode result = MB_SUCCESS;
7279 
7280  for( Range::iterator rit = interfaceSets.begin(); rit != interfaceSets.end(); ++rit )
7281  {
7282  if( -1 != other_proc && !is_iface_proc( *rit, other_proc ) ) continue;
7283 
7284  if( -1 == dim )
7285  {
7286  result = mbImpl->get_entities_by_handle( *rit, iface_ents );MB_CHK_SET_ERR( result, "Failed to get entities in iface set" );
7287  }
7288  else
7289  {
7290  result = mbImpl->get_entities_by_dimension( *rit, dim, iface_ents );MB_CHK_SET_ERR( result, "Failed to get entities in iface set" );
7291  }
7292  }
7293 
7294  return MB_SUCCESS;
7295 }

References moab::Range::begin(), dim, moab::Range::end(), ErrorCode, moab::Interface::get_entities_by_dimension(), moab::Interface::get_entities_by_handle(), interfaceSets, is_iface_proc(), MB_CHK_SET_ERR, MB_SUCCESS, and mbImpl.

Referenced by get_sent_ents().
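
Usage sketch (illustrative): nbr is an assumed neighbor rank. Pass -1 for other_proc to span all interface sets, or -1 for dim to get interface entities of every dimension.

    Range iface_verts;
    ErrorCode rval = pcomm.get_iface_entities( nbr, 0 /*vertices*/, iface_verts );MB_CHK_SET_ERR( rval, "Failed to get interface vertices" );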

◆ get_interface_procs()

ErrorCode moab::ParallelComm::get_interface_procs ( std::set< unsigned int > &  iface_procs,
const bool  get_buffs = false 
)

get processors with which this processor shares an interface

Get processors with which this processor communicates; sets are sorted by processor.

Definition at line 5440 of file ParallelComm.cpp.

5441 {
5442  // Make sure the sharing procs vector is empty
5443  procs_set.clear();
5444 
5445  // Pre-load vector of single-proc tag values
5446  unsigned int i, j;
5447  std::vector< int > iface_proc( interfaceSets.size() );
5448  ErrorCode result = mbImpl->tag_get_data( sharedp_tag(), interfaceSets, &iface_proc[0] );MB_CHK_SET_ERR( result, "Failed to get iface_proc for iface sets" );
5449 
5450  // Get sharing procs either from single-proc vector or by getting
5451  // multi-proc tag value
5452  int tmp_iface_procs[MAX_SHARING_PROCS];
5453  std::fill( tmp_iface_procs, tmp_iface_procs + MAX_SHARING_PROCS, -1 );
5454  Range::iterator rit;
5455  for( rit = interfaceSets.begin(), i = 0; rit != interfaceSets.end(); ++rit, i++ )
5456  {
5457  if( -1 != iface_proc[i] )
5458  {
5459  assert( iface_proc[i] != (int)procConfig.proc_rank() );
5460  procs_set.insert( (unsigned int)iface_proc[i] );
5461  }
5462  else
5463  {
5464  // Get the sharing_procs tag
5465  result = mbImpl->tag_get_data( sharedps_tag(), &( *rit ), 1, tmp_iface_procs );MB_CHK_SET_ERR( result, "Failed to get iface_procs for iface set" );
5466  for( j = 0; j < MAX_SHARING_PROCS; j++ )
5467  {
5468  if( -1 != tmp_iface_procs[j] && tmp_iface_procs[j] != (int)procConfig.proc_rank() )
5469  procs_set.insert( (unsigned int)tmp_iface_procs[j] );
5470  else if( -1 == tmp_iface_procs[j] )
5471  {
5472  std::fill( tmp_iface_procs, tmp_iface_procs + j, -1 );
5473  break;
5474  }
5475  }
5476  }
5477  }
5478 
5479  if( get_buffs )
5480  {
5481  for( std::set< unsigned int >::iterator sit = procs_set.begin(); sit != procs_set.end(); ++sit )
5482  get_buffers( *sit );
5483  }
5484 
5485  return MB_SUCCESS;
5486 }

References moab::Range::begin(), moab::Range::end(), ErrorCode, get_buffers(), interfaceSets, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, moab::ProcConfig::proc_rank(), procConfig, sharedp_tag(), sharedps_tag(), moab::Range::size(), and moab::Interface::tag_get_data().

Referenced by get_comm_procs(), resolve_shared_ents(), and moab::ParallelMergeMesh::TagSharedElements().
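
Usage sketch (illustrative): enumerate the interface neighbors of this rank and, by passing get_buffs = true, pre-allocate the message buffers for each of them in the same call.

    std::set< unsigned int > iface_procs;
    ErrorCode rval = pcomm.get_interface_procs( iface_procs, true );MB_CHK_SET_ERR( rval, "Failed to get interface procs" );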

◆ get_interface_sets()

ErrorCode moab::ParallelComm::get_interface_sets ( EntityHandle  part,
Range &  iface_sets_out,
int *  adj_part_id = 0 
)

Definition at line 8310 of file ParallelComm.cpp.

8311 {
8312  // FIXME : assumes one part per processor.
8313  // Need to store part iface sets as children to implement
8314  // this correctly.
8315  iface_sets_out = interface_sets();
8316 
8317  if( adj_part_id )
8318  {
8319  int part_ids[MAX_SHARING_PROCS], num_parts;
8320  Range::iterator i = iface_sets_out.begin();
8321  while( i != iface_sets_out.end() )
8322  {
8323  unsigned char pstat;
8324  ErrorCode rval = get_sharing_data( *i, part_ids, NULL, pstat, num_parts );
8325  if( MB_SUCCESS != rval ) return rval;
8326 
8327  if( std::find( part_ids, part_ids + num_parts, *adj_part_id ) - part_ids != num_parts )
8328  ++i;
8329  else
8330  i = iface_sets_out.erase( i );
8331  }
8332  }
8333 
8334  return MB_SUCCESS;
8335 }

References moab::Range::begin(), moab::Range::end(), moab::Range::erase(), ErrorCode, get_sharing_data(), interface_sets(), MAX_SHARING_PROCS, and MB_SUCCESS.

Referenced by get_part_neighbor_ids().

◆ get_local_handles() [1/3]

ErrorCode moab::ParallelComm::get_local_handles ( const Range &  remote_handles,
Range &  local_handles,
const std::vector< EntityHandle > &  new_ents 
)
private

same as the array-based overloads below, except results are put in a Range

Definition at line 3090 of file ParallelComm.cpp.

3093 {
3094  std::vector< EntityHandle > rh_vec;
3095  rh_vec.reserve( remote_handles.size() );
3096  std::copy( remote_handles.begin(), remote_handles.end(), std::back_inserter( rh_vec ) );
3097  ErrorCode result = get_local_handles( &rh_vec[0], remote_handles.size(), new_ents );
3098  std::copy( rh_vec.begin(), rh_vec.end(), range_inserter( local_handles ) );
3099  return result;
3100 }

References moab::Range::begin(), moab::Range::end(), ErrorCode, get_local_handles(), and moab::Range::size().

◆ get_local_handles() [2/3]

ErrorCode moab::ParallelComm::get_local_handles ( EntityHandle *  from_vec,
int  num_ents,
const Range new_ents 
)
private

goes through from_vec and, for any handle of type MBMAXTYPE, replaces it with the new_ents value at the index given by the id encoded in that handle

Definition at line 3102 of file ParallelComm.cpp.

3103 {
3104  std::vector< EntityHandle > tmp_ents;
3105  std::copy( new_ents.begin(), new_ents.end(), std::back_inserter( tmp_ents ) );
3106  return get_local_handles( from_vec, num_ents, tmp_ents );
3107 }

References moab::Range::begin(), and moab::Range::end().

Referenced by get_local_handles(), unpack_entities(), unpack_sets(), and unpack_tags().

◆ get_local_handles() [3/3]

ErrorCode moab::ParallelComm::get_local_handles ( EntityHandle *  from_vec,
int  num_ents,
const std::vector< EntityHandle > &  new_ents 
)
private

same as above except gets new_ents from vector

Definition at line 3109 of file ParallelComm.cpp.

3112 {
3113  for( int i = 0; i < num_ents; i++ )
3114  {
3115  if( TYPE_FROM_HANDLE( from_vec[i] ) == MBMAXTYPE )
3116  {
3117  assert( ID_FROM_HANDLE( from_vec[i] ) < (int)new_ents.size() );
3118  from_vec[i] = new_ents[ID_FROM_HANDLE( from_vec[i] )];
3119  }
3120  }
3121 
3122  return MB_SUCCESS;
3123 }

References moab::ID_FROM_HANDLE(), MB_SUCCESS, MBMAXTYPE, and moab::TYPE_FROM_HANDLE().

◆ get_moab()

Interface* moab::ParallelComm::get_moab ( ) const
inline

Definition at line 779 of file ParallelComm.hpp.

780  {
781  return mbImpl;
782  }

References mbImpl.

Referenced by moab::ParCommGraph::compute_partition(), exchange_ghost_cells(), moab::ParallelMergeMesh::ParallelMergeMesh(),