Mesh Oriented datABase (version 5.5.0)
An array-based unstructured mesh library
moab::ParallelComm Class Reference

Parallel communications in MOAB. More...

#include <ParallelComm.hpp>


Classes

class  Buffer
 
struct  SharedEntityData
 

Public Member Functions

 ParallelComm (Interface *impl, MPI_Comm comm, int *pcomm_id_out=0)
 constructor More...
 
 ParallelComm (Interface *impl, std::vector< unsigned char > &tmp_buff, MPI_Comm comm, int *pcomm_id_out=0)
 constructor taking packed buffer, for testing More...
 
int get_id () const
 Get ID used to reference this PCOMM instance. More...
 
 ~ParallelComm ()
 destructor More...
 
ErrorCode assign_global_ids (EntityHandle this_set, const int dimension, const int start_id=1, const bool largest_dim_only=true, const bool parallel=true, const bool owned_only=false)
 assign a global id space, for largest-dimension or all entities (and in either case for vertices too) More...
 
ErrorCode assign_global_ids (Range entities[], const int dimension, const int start_id, const bool parallel, const bool owned_only)
 assign a global id space, for largest-dimension or all entities (and in either case for vertices too) More...
 
ErrorCode check_global_ids (EntityHandle this_set, const int dimension, const int start_id=1, const bool largest_dim_only=true, const bool parallel=true, const bool owned_only=false)
 check for global ids; based only on tag handle being there or not; if it's not there, create them for the specified dimensions More...
 
ErrorCode send_entities (const int to_proc, Range &orig_ents, const bool adjs, const bool tags, const bool store_remote_handles, const bool is_iface, Range &final_ents, int &incoming1, int &incoming2, TupleList &entprocs, std::vector< MPI_Request > &recv_remoteh_reqs, bool wait_all=true)
 send entities to another processor, optionally waiting until it's done More...
 
ErrorCode send_entities (std::vector< unsigned int > &send_procs, std::vector< Range * > &send_ents, int &incoming1, int &incoming2, const bool store_remote_handles)
 
ErrorCode recv_entities (const int from_proc, const bool store_remote_handles, const bool is_iface, Range &final_ents, int &incoming1, int &incoming2, std::vector< std::vector< EntityHandle > > &L1hloc, std::vector< std::vector< EntityHandle > > &L1hrem, std::vector< std::vector< int > > &L1p, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, std::vector< MPI_Request > &recv_remoteh_reqs, bool wait_all=true)
 Receive entities from another processor, optionally waiting until it's done. More...
 
ErrorCode recv_entities (std::set< unsigned int > &recv_procs, int incoming1, int incoming2, const bool store_remote_handles, const bool migrate=false)
 
ErrorCode recv_messages (const int from_proc, const bool store_remote_handles, const bool is_iface, Range &final_ents, int &incoming1, int &incoming2, std::vector< std::vector< EntityHandle > > &L1hloc, std::vector< std::vector< EntityHandle > > &L1hrem, std::vector< std::vector< int > > &L1p, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, std::vector< MPI_Request > &recv_remoteh_reqs)
 Receive messages from another processor in while loop. More...
 
ErrorCode recv_remote_handle_messages (const int from_proc, int &incoming2, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, std::vector< MPI_Request > &recv_remoteh_reqs)
 
ErrorCode exchange_ghost_cells (int ghost_dim, int bridge_dim, int num_layers, int addl_ents, bool store_remote_handles, bool wait_all=true, EntityHandle *file_set=NULL)
 Exchange ghost cells with neighboring procs. Neighboring processors are those sharing an interface with this processor. All entities of dimension ghost_dim within num_layers of the interface, measured going through bridge_dim, are exchanged. See MeshTopoUtil::get_bridge_adjacencies for a description of bridge adjacencies. If wait_all is false and store_remote_handles is true, MPI_Request objects are available in the sendReqs[2*MAX_SHARING_PROCS] member array, with inactive requests marked as MPI_REQUEST_NULL. If store_remote_handles or wait_all is false, this function returns after all entities have been received and processed. More...
 
ErrorCode post_irecv (std::vector< unsigned int > &exchange_procs)
 Post "MPI_Irecv" before meshing. More...
 
ErrorCode post_irecv (std::vector< unsigned int > &shared_procs, std::set< unsigned int > &recv_procs)
 
ErrorCode exchange_owned_meshs (std::vector< unsigned int > &exchange_procs, std::vector< Range * > &exchange_ents, std::vector< MPI_Request > &recv_ent_reqs, std::vector< MPI_Request > &recv_remoteh_reqs, bool store_remote_handles, bool wait_all=true, bool migrate=false, int dim=0)
 Exchange owned mesh for input mesh entities and sets. This function should be called collectively over the communicator for this ParallelComm. More...
 
ErrorCode exchange_owned_mesh (std::vector< unsigned int > &exchange_procs, std::vector< Range * > &exchange_ents, std::vector< MPI_Request > &recv_ent_reqs, std::vector< MPI_Request > &recv_remoteh_reqs, const bool recv_posted, bool store_remote_handles, bool wait_all, bool migrate=false)
 Exchange owned mesh for input mesh entities and sets. This function is called twice by exchange_owned_meshs, to exchange entities before sets. More...
 
ErrorCode exchange_tags (const std::vector< Tag > &src_tags, const std::vector< Tag > &dst_tags, const Range &entities)
 Exchange tags for all shared and ghosted entities. This function should be called collectively over the communicator for this ParallelComm. If this version is called, all ghosted/shared entities should have a value for this tag (or the tag should have a default value). If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities, this function must still be called since it is collective. (A usage sketch follows this member list.) More...
 
ErrorCode exchange_tags (const char *tag_name, const Range &entities)
 Exchange tags for all shared and ghosted entities. This function should be called collectively over the communicator for this ParallelComm. If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities, this function must still be called since it is collective. More...
 
ErrorCode exchange_tags (Tag tagh, const Range &entities)
 Exchange tags for all shared and ghosted entities. This function should be called collectively over the communicator for this ParallelComm. If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities, this function must still be called since it is collective. More...
 
ErrorCode reduce_tags (const std::vector< Tag > &src_tags, const std::vector< Tag > &dst_tags, const MPI_Op mpi_op, const Range &entities)
 Perform a data reduction operation for all shared and ghosted entities. This function should be called collectively over the communicator for this ParallelComm. If this version is called, all ghosted/shared entities should have a value for this tag (or the tag should have a default value). The operation is any MPI_Op, with the result stored in the destination tag. (A usage sketch follows this member list.) More...
 
ErrorCode reduce_tags (const char *tag_name, const MPI_Op mpi_op, const Range &entities)
 Perform a data reduction operation for all shared and ghosted entities. Same as the std::vector variant, but for a single tag specified by name. More...
 
ErrorCode reduce_tags (Tag tag_handle, const MPI_Op mpi_op, const Range &entities)
 Perform a data reduction operation for all shared and ghosted entities. Same as the std::vector variant, but for a single tag specified by handle. More...
 
ErrorCode broadcast_entities (const int from_proc, Range &entities, const bool adjacencies=false, const bool tags=true)
 Broadcast all entities resident on from_proc to other processors. This function assumes remote handles are not being stored, since (usually) every processor will know about the whole mesh. More...
 
ErrorCode scatter_entities (const int from_proc, std::vector< Range > &entities, const bool adjacencies=false, const bool tags=true)
 Scatter entities on from_proc to other processors. This function assumes remote handles are not being stored, since (usually) every processor will know about the whole mesh. More...
 
ErrorCode send_recv_entities (std::vector< int > &send_procs, std::vector< std::vector< int > > &msgsizes, std::vector< std::vector< EntityHandle > > &senddata, std::vector< std::vector< EntityHandle > > &recvdata)
 Send and receive data to/from a set of processors. More...
 
ErrorCode update_remote_data (EntityHandle entity, std::vector< int > &procs, std::vector< EntityHandle > &handles)
 
ErrorCode get_remote_handles (EntityHandle *local_vec, EntityHandle *rem_vec, int num_ents, int to_proc)
 
ErrorCode resolve_shared_ents (EntityHandle this_set, Range &proc_ents, int resolve_dim=-1, int shared_dim=-1, Range *skin_ents=NULL, const Tag *id_tag=0)
 Resolve shared entities between processors. More...
 
ErrorCode resolve_shared_ents (EntityHandle this_set, int resolve_dim=3, int shared_dim=-1, const Tag *id_tag=0)
 Resolve shared entities between processors. More...
 
ErrorCode resolve_shared_sets (EntityHandle this_set, const Tag *id_tag=0)
 
ErrorCode resolve_shared_sets (Range &candidate_sets, Tag id_tag)
 
ErrorCode augment_default_sets_with_ghosts (EntityHandle file_set)
 
ErrorCode get_pstatus (EntityHandle entity, unsigned char &pstatus_val)
 Get the parallel status (pstatus) of an entity. More...
 
ErrorCode get_pstatus_entities (int dim, unsigned char pstatus_val, Range &pstatus_ents)
 Get entities with the given pstatus bit(s) set. Returns any entities whose pstatus tag value v satisfies (v & pstatus_val). More...
 
ErrorCode get_owner (EntityHandle entity, int &owner)
 Return the rank of the entity owner. More...
 
ErrorCode get_owner_handle (EntityHandle entity, int &owner, EntityHandle &handle)
 Return the owner processor and handle of a given entity. More...
 
ErrorCode get_sharing_data (const EntityHandle entity, int *ps, EntityHandle *hs, unsigned char &pstat, unsigned int &num_ps)
 Get the shared processors/handles for an entity. Arrays must be large enough to receive data for all sharing procs. Does not include this proc if only shared with one other proc. More...
 
ErrorCode get_sharing_data (const EntityHandle entity, int *ps, EntityHandle *hs, unsigned char &pstat, int &num_ps)
 Get the shared processors/handles for an entity. Same as the other version, but with int num_ps. More...
 
ErrorCode get_sharing_data (const EntityHandle *entities, int num_entities, std::set< int > &procs, int op=Interface::INTERSECT)
 Get the intersection or union of all sharing processors. The processor set is cleared as part of this function. More...
 
ErrorCode get_sharing_data (const Range &entities, std::set< int > &procs, int op=Interface::INTERSECT)
 Get the intersection or union of all sharing processors. Same as the previous variant, but with a range as input. More...
 
ErrorCode get_shared_entities (int other_proc, Range &shared_ents, int dim=-1, const bool iface=false, const bool owned_filter=false)
 Get shared entities of the specified dimension. If other_proc is -1, any shared entities are returned. If dim is -1, entities of all dimensions on the interface are returned. More...
 
ErrorCode get_interface_procs (std::set< unsigned int > &iface_procs, const bool get_buffs=false)
 get processors with which this processor shares an interface More...
 
ErrorCode get_comm_procs (std::set< unsigned int > &procs)
 get processors with which this processor communicates More...
 
ErrorCode get_entityset_procs (EntityHandle entity_set, std::vector< unsigned > &ranks) const
 Get array of process IDs sharing a set. Returns zero and passes back NULL if set is not shared. More...
 
ErrorCode get_entityset_owner (EntityHandle entity_set, unsigned &owner_rank, EntityHandle *remote_handle=0) const
 Get rank of the owner of a shared set. Returns this proc if set is not shared. Optionally returns handle on owning process for shared set. More...
 
ErrorCode get_entityset_local_handle (unsigned owning_rank, EntityHandle remote_handle, EntityHandle &local_handle) const
 Given set owner and handle on owner, find local set handle. More...
 
ErrorCode get_shared_sets (Range &result) const
 Get all shared sets. More...
 
ErrorCode get_entityset_owners (std::vector< unsigned > &ranks) const
 Get ranks of all processes that own at least one set that is shared with this process. Will include the rank of this process if this process owns any shared set. More...
 
ErrorCode get_owned_sets (unsigned owning_rank, Range &sets_out) const
 Get shared sets owned by process with specified rank. More...
 
const ProcConfig & proc_config () const
 Get proc config for this communication object. More...
 
ProcConfig & proc_config ()
 Get proc config for this communication object. More...
 
unsigned rank () const
 
unsigned size () const
 
MPI_Comm comm () const
 
ErrorCode get_shared_proc_tags (Tag &sharedp_tag, Tag &sharedps_tag, Tag &sharedh_tag, Tag &sharedhs_tag, Tag &pstatus_tag)
 return the tags used to indicate shared procs and handles More...
 
Range & partition_sets ()
 return partition, interface set ranges More...
 
const Range & partition_sets () const
 
Range & interface_sets ()
 
const Range & interface_sets () const
 
Tag sharedp_tag ()
 return sharedp tag More...
 
Tag sharedps_tag ()
 return sharedps tag More...
 
Tag sharedh_tag ()
 return sharedh tag More...
 
Tag sharedhs_tag ()
 return sharedhs tag More...
 
Tag pstatus_tag ()
 return pstatus tag More...
 
Tag partition_tag ()
 return partition set tag More...
 
Tag part_tag ()
 
void print_pstatus (unsigned char pstat, std::string &ostr)
 print contents of pstatus value in human-readable form More...
 
void print_pstatus (unsigned char pstat)
 print contents of pstatus value in human-readable form to std::cout More...
 
ErrorCode get_part_entities (Range &ents, int dim=-1)
 return all the entities in parts owned locally More...
 
EntityHandle get_partitioning () const
 
ErrorCode set_partitioning (EntityHandle h)
 
ErrorCode get_global_part_count (int &count_out) const
 
ErrorCode get_part_owner (int part_id, int &owner_out) const
 
ErrorCode get_part_id (EntityHandle part, int &id_out) const
 
ErrorCode get_part_handle (int id, EntityHandle &handle_out) const
 
ErrorCode create_part (EntityHandle &part_out)
 
ErrorCode destroy_part (EntityHandle part)
 
ErrorCode collective_sync_partition ()
 
ErrorCode get_part_neighbor_ids (EntityHandle part, int neighbors_out[MAX_SHARING_PROCS], int &num_neighbors_out)
 
ErrorCode get_interface_sets (EntityHandle part, Range &iface_sets_out, int *adj_part_id=0)
 
ErrorCode get_owning_part (EntityHandle entity, int &owning_part_id_out, EntityHandle *owning_handle=0)
 
ErrorCode get_sharing_parts (EntityHandle entity, int part_ids_out[MAX_SHARING_PROCS], int &num_part_ids_out, EntityHandle remote_handles[MAX_SHARING_PROCS]=0)
 
ErrorCode filter_pstatus (Range &ents, const unsigned char pstatus_val, const unsigned char op, int to_proc=-1, Range *returned_ents=NULL)
 
ErrorCode get_iface_entities (int other_proc, int dim, Range &iface_ents)
 Get entities on interfaces shared with another proc. More...
 
Interface * get_moab () const
 
ErrorCode clean_shared_tags (std::vector< Range * > &exchange_ents)
 
ErrorCode pack_buffer (Range &orig_ents, const bool adjacencies, const bool tags, const bool store_remote_handles, const int to_proc, Buffer *buff, TupleList *entprocs=NULL, Range *allsent=NULL)
 public because we want to unit test these externally More...
 
ErrorCode unpack_buffer (unsigned char *buff_ptr, const bool store_remote_handles, const int from_proc, const int ind, std::vector< std::vector< EntityHandle > > &L1hloc, std::vector< std::vector< EntityHandle > > &L1hrem, std::vector< std::vector< int > > &L1p, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, std::vector< EntityHandle > &new_ents, const bool created_iface=false)
 
ErrorCode pack_entities (Range &entities, Buffer *buff, const bool store_remote_handles, const int to_proc, const bool is_iface, TupleList *entprocs=NULL, Range *allsent=NULL)
 
ErrorCode unpack_entities (unsigned char *&buff_ptr, const bool store_remote_handles, const int from_ind, const bool is_iface, std::vector< std::vector< EntityHandle > > &L1hloc, std::vector< std::vector< EntityHandle > > &L1hrem, std::vector< std::vector< int > > &L1p, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, std::vector< EntityHandle > &new_ents, const bool created_iface=false)
 unpack entities in buff_ptr More...
 
ErrorCode check_all_shared_handles (bool print_em=false)
 Call exchange_all_shared_handles, then compare the results with tag data on local shared entities. More...
 
ErrorCode pack_shared_handles (std::vector< std::vector< SharedEntityData > > &send_data)
 
ErrorCode check_local_shared ()
 
ErrorCode check_my_shared_handles (std::vector< std::vector< SharedEntityData > > &shents, const char *prefix=NULL)
 
void set_rank (unsigned int r)
 set rank for this pcomm; USED FOR TESTING ONLY! More...
 
void set_size (unsigned int r)
 set size for this pcomm; USED FOR TESTING ONLY! More...
 
int get_buffers (int to_proc, bool *is_new=NULL)
 get (and possibly allocate) buffers for messages to/from to_proc; returns index of to_proc in buffProcs vector; if is_new is non-NULL, sets it to whether a new buffer was allocated. PUBLIC ONLY FOR TESTING! More...
 
const std::vector< unsigned int > & buff_procs () const
 get buff processor vector More...
 
ErrorCode unpack_remote_handles (unsigned int from_proc, unsigned char *&buff_ptr, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p)
 
ErrorCode pack_remote_handles (std::vector< EntityHandle > &L1hloc, std::vector< EntityHandle > &L1hrem, std::vector< int > &procs, unsigned int to_proc, Buffer *buff)
 
ErrorCode create_interface_sets (std::map< std::vector< int >, std::vector< EntityHandle > > &proc_nvecs)
 
ErrorCode create_interface_sets (EntityHandle this_set, int resolve_dim, int shared_dim)
 
ErrorCode tag_shared_verts (TupleList &shared_ents, std::map< std::vector< int >, std::vector< EntityHandle > > &proc_nvecs, Range &proc_verts, unsigned int i_extra=1)
 
ErrorCode list_entities (const EntityHandle *ents, int num_ents)
 
ErrorCode list_entities (const Range &ents)
 
void set_send_request (int n_request)
 
void set_recv_request (int n_request)
 
void reset_all_buffers ()
 reset message buffers to their initial state More...
 
void set_debug_verbosity (int verb)
 set the verbosity level of output from this pcomm More...
 
int get_debug_verbosity ()
 get the verbosity level of output from this pcomm More...
 
ErrorCode gather_data (Range &gather_ents, Tag &tag_handle, Tag id_tag=0, EntityHandle gather_set=0, int root_proc_rank=0)
 
ErrorCode settle_intersection_points (Range &edges, Range &shared_edges_owned, std::vector< std::vector< EntityHandle > * > &extraNodesVec, double tolerance)
 
ErrorCode delete_entities (Range &to_delete)
 
ErrorCode correct_thin_ghost_layers ()
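
A usage sketch for the exchange_tags and reduce_tags calls above (not from the MOAB sources; assumes an initialized parallel mesh behind pcomm, and the tag name MY_DATA is illustrative; both calls are collective):

    #include "moab/Core.hpp"
    #include "moab/ParallelComm.hpp"
    #include <mpi.h>

    // Exchange, then sum-reduce, a dense integer tag over all shared entities.
    moab::ErrorCode sync_my_data( moab::Interface& mb, moab::ParallelComm& pcomm )
    {
        moab::Tag data_tag;
        int def_val = 0;
        moab::ErrorCode rval =
            mb.tag_get_handle( "MY_DATA", 1, moab::MB_TYPE_INTEGER, data_tag,
                               moab::MB_TAG_DENSE | moab::MB_TAG_CREAT, &def_val );
        if( moab::MB_SUCCESS != rval ) return rval;

        moab::Range empty;  // an empty range means all shared/ghosted entities participate
        rval = pcomm.exchange_tags( data_tag, empty );  // owners push values to ghosts
        if( moab::MB_SUCCESS != rval ) return rval;
        return pcomm.reduce_tags( data_tag, MPI_SUM, empty );  // reduce over sharing procs
    }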
 

Static Public Member Functions

static ParallelComm * get_pcomm (Interface *impl, const int index)
 get the indexed pcomm object from the interface More...
 
static ParallelComm * get_pcomm (Interface *impl, EntityHandle partitioning, const MPI_Comm *comm=0)
 Get ParallelComm instance associated with partition handle. Will create a ParallelComm instance if a) one does not already exist and b) a valid value for MPI_Comm is passed. More...
 
static ErrorCode get_all_pcomm (Interface *impl, std::vector< ParallelComm * > &list)
 
static ErrorCode exchange_ghost_cells (ParallelComm **pc, unsigned int num_procs, int ghost_dim, int bridge_dim, int num_layers, int addl_ents, bool store_remote_handles, EntityHandle *file_sets=NULL)
 Static version of exchange_ghost_cells, exchanging info through buffers rather than messages. More...
 
static ErrorCode resolve_shared_ents (ParallelComm **pc, const unsigned int np, EntityHandle this_set, const int to_dim)
 
static Tag pcomm_tag (Interface *impl, bool create_if_missing=true)
 return pcomm tag; static because a pcomm may not exist yet when we go looking for one on the interface More...
 
static ErrorCode check_all_shared_handles (ParallelComm **pcs, int num_pcs)
 

Static Public Attributes

static unsigned char PROC_SHARED
 
static unsigned char PROC_OWNER
 
static const unsigned int INITIAL_BUFF_SIZE = 1024
 

Private Member Functions

ErrorCode reduce_void (int tag_data_type, const MPI_Op mpi_op, int num_ents, void *old_vals, void *new_vals)
 
template<class T >
ErrorCode reduce (const MPI_Op mpi_op, int num_ents, void *old_vals, void *new_vals)
 
void print_debug_isend (int from, int to, unsigned char *buff, int tag, int size)
 
void print_debug_irecv (int to, int from, unsigned char *buff, int size, int tag, int incoming)
 
void print_debug_recd (MPI_Status status)
 
void print_debug_waitany (std::vector< MPI_Request > &reqs, int tag, int proc)
 
void initialize ()
 
ErrorCode set_sharing_data (EntityHandle ent, unsigned char pstatus, int old_nump, int new_nump, int *ps, EntityHandle *hs)
 
ErrorCode check_clean_iface (Range &allsent)
 
void define_mpe ()
 
ErrorCode get_sent_ents (const bool is_iface, const int bridge_dim, const int ghost_dim, const int num_layers, const int addl_ents, Range *sent_ents, Range &allsent, TupleList &entprocs)
 
ErrorCode set_pstatus_entities (Range &pstatus_ents, unsigned char pstatus_val, bool lower_dim_ents=false, bool verts_too=true, int operation=Interface::UNION)
 Set pstatus values on entities. More...
 
ErrorCode set_pstatus_entities (EntityHandle *pstatus_ents, int num_ents, unsigned char pstatus_val, bool lower_dim_ents=false, bool verts_too=true, int operation=Interface::UNION)
 Set pstatus values on entities (vector-based function) More...
 
int estimate_ents_buffer_size (Range &entities, const bool store_remote_handles)
 estimate size required to pack entities More...
 
int estimate_sets_buffer_size (Range &entities, const bool store_remote_handles)
 estimate size required to pack sets More...
 
ErrorCode send_buffer (const unsigned int to_proc, Buffer *send_buff, const int msg_tag, MPI_Request &send_req, MPI_Request &ack_recv_req, int *ack_buff, int &this_incoming, int next_mesg_tag=-1, Buffer *next_recv_buff=NULL, MPI_Request *next_recv_req=NULL, int *next_incoming=NULL)
 send the indicated buffer, possibly sending size first More...
 
ErrorCode recv_buffer (int mesg_tag_expected, const MPI_Status &mpi_status, Buffer *recv_buff, MPI_Request &recv_2nd_req, MPI_Request &ack_req, int &this_incoming, Buffer *send_buff, MPI_Request &send_req, MPI_Request &sent_ack_req, bool &done, Buffer *next_buff=NULL, int next_tag=-1, MPI_Request *next_req=NULL, int *next_incoming=NULL)
 process incoming message; if longer than the initial size, post recv for next part then send ack; if ack, send second part; else indicate that we're done and buffer is ready for processing More...
 
ErrorCode pack_entity_seq (const int nodes_per_entity, const bool store_remote_handles, const int to_proc, Range &these_ents, std::vector< EntityHandle > &entities, Buffer *buff)
 pack a range of entities with equal # verts per entity, along with the range on the sending proc More...
 
ErrorCode print_buffer (unsigned char *buff_ptr, int mesg_type, int from_proc, bool sent)
 
ErrorCode unpack_iface_entities (unsigned char *&buff_ptr, const int from_proc, const int ind, std::vector< EntityHandle > &recd_ents)
 for all the entities in the received buffer: for each, save the entity in this instance that matches the connectivity, or zero if none is found More...
 
ErrorCode pack_sets (Range &entities, Buffer *buff, const bool store_handles, const int to_proc)
 
ErrorCode unpack_sets (unsigned char *&buff_ptr, std::vector< EntityHandle > &entities, const bool store_handles, const int to_proc)
 
ErrorCode pack_adjacencies (Range &entities, Range::const_iterator &start_rit, Range &whole_range, unsigned char *&buff_ptr, int &count, const bool just_count, const bool store_handles, const int to_proc)
 
ErrorCode unpack_adjacencies (unsigned char *&buff_ptr, Range &entities, const bool store_handles, const int from_proc)
 
ErrorCode unpack_remote_handles (unsigned int from_proc, const unsigned char *buff_ptr, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p)
 
ErrorCode find_existing_entity (const bool is_iface, const int owner_p, const EntityHandle owner_h, const int num_ents, const EntityHandle *connect, const int num_connect, const EntityType this_type, std::vector< EntityHandle > &L2hloc, std::vector< EntityHandle > &L2hrem, std::vector< unsigned int > &L2p, EntityHandle &new_h)
 given connectivity and type, find an existing entity, if there is one More...
 
ErrorCode build_sharedhps_list (const EntityHandle entity, const unsigned char pstatus, const int sharedp, const std::set< unsigned int > &procs, unsigned int &num_ents, int *tmp_procs, EntityHandle *tmp_handles)
 
ErrorCode get_tag_send_list (const Range &all_entities, std::vector< Tag > &all_tags, std::vector< Range > &tag_ranges)
 Get list of tags for which to exchange data. More...
 
ErrorCode pack_tags (Range &entities, const std::vector< Tag > &src_tags, const std::vector< Tag > &dst_tags, const std::vector< Range > &tag_ranges, Buffer *buff, const bool store_handles, const int to_proc)
 Serialize entity tag data. More...
 
ErrorCode packed_tag_size (Tag source_tag, const Range &entities, int &count_out)
 Calculate buffer size required to pack tag data. More...
 
ErrorCode pack_tag (Tag source_tag, Tag destination_tag, const Range &entities, const std::vector< EntityHandle > &whole_range, Buffer *buff, const bool store_remote_handles, const int to_proc)
 Serialize tag data. More...
 
ErrorCode unpack_tags (unsigned char *&buff_ptr, std::vector< EntityHandle > &entities, const bool store_handles, const int to_proc, const MPI_Op *const mpi_op=NULL)
 
ErrorCode tag_shared_verts (TupleList &shared_verts, Range *skin_ents, std::map< std::vector< int >, std::vector< EntityHandle > > &proc_nvecs, Range &proc_verts)
 
ErrorCode get_proc_nvecs (int resolve_dim, int shared_dim, Range *skin_ents, std::map< std::vector< int >, std::vector< EntityHandle > > &proc_nvecs)
 
ErrorCode create_iface_pc_links ()
 
ErrorCode pack_range_map (Range &this_range, EntityHandle actual_start, HandleMap &handle_map)
 pack a range map with keys in this_range and values a contiguous series of handles starting at actual_start More...
 
bool is_iface_proc (EntityHandle this_set, int to_proc)
 returns true if the set is an interface shared with to_proc More...
 
ErrorCode update_iface_sets (Range &sent_ents, std::vector< EntityHandle > &remote_handles, int from_proc)
 for any remote_handles set to zero, remove corresponding sent_ents from iface_sets corresponding to from_proc More...
 
ErrorCode get_ghosted_entities (int bridge_dim, int ghost_dim, int to_proc, int num_layers, int addl_ents, Range &ghosted_ents)
 for specified bridge/ghost dimension, to_proc, and number of layers, get the entities to be ghosted, and info on additional procs needing to communicate with to_proc More...
 
ErrorCode add_verts (Range &sent_ents)
 add vertices adjacent to entities in this list More...
 
ErrorCode exchange_all_shared_handles (std::vector< std::vector< SharedEntityData > > &send_data, std::vector< std::vector< SharedEntityData > > &result)
 Every processor sends shared entity handle data to every other processor that it shares entities with. Passed back map is all received data, indexed by processor ID. This function is intended to be used for debugging. More...
 
ErrorCode get_remote_handles (const bool store_remote_handles, EntityHandle *from_vec, EntityHandle *to_vec_tmp, int num_ents, int to_proc, const std::vector< EntityHandle > &new_ents)
 replace handles in from_vec with corresponding handles on to_proc (by checking shared[p/h]_tag and shared[p/h]s_tag); if there is no remote handle and new_ents is non-null, substitute CREATE_HANDLE(MBMAXTYPE, index), where index is the handle's position in new_ents More...
 
ErrorCode get_remote_handles (const bool store_remote_handles, const Range &from_range, Range &to_range, int to_proc, const std::vector< EntityHandle > &new_ents)
 same as other version, except from_range and to_range should be different here More...
 
ErrorCode get_remote_handles (const bool store_remote_handles, const Range &from_range, EntityHandle *to_vec, int to_proc, const std::vector< EntityHandle > &new_ents)
 same as other version, except packs range into vector More...
 
ErrorCode get_local_handles (EntityHandle *from_vec, int num_ents, const Range &new_ents)
 goes through from_vec, and for any with type MBMAXTYPE, replaces with new_ents value at index corresponding to id of entity in from_vec More...
 
ErrorCode get_local_handles (const Range &remote_handles, Range &local_handles, const std::vector< EntityHandle > &new_ents)
 same as above except puts results in range More...
 
ErrorCode get_local_handles (EntityHandle *from_vec, int num_ents, const std::vector< EntityHandle > &new_ents)
 same as above except gets new_ents from vector More...
 
ErrorCode update_remote_data (Range &local_range, Range &remote_range, int other_proc, const unsigned char add_pstat)
 
ErrorCode update_remote_data (const EntityHandle new_h, const int *ps, const EntityHandle *hs, const int num_ps, const unsigned char add_pstat)
 
ErrorCode update_remote_data_old (const EntityHandle new_h, const int *ps, const EntityHandle *hs, const int num_ps, const unsigned char add_pstat)
 
ErrorCode tag_iface_entities ()
 Set pstatus tag interface bit on entities in sets passed in. More...
 
int add_pcomm (ParallelComm *pc)
 add a pc to the iface instance tag PARALLEL_COMM More...
 
void remove_pcomm (ParallelComm *pc)
 remove a pc from the iface instance tag PARALLEL_COMM More...
 
ErrorCode check_sent_ents (Range &allsent)
 check entities to make sure there are no zero-valued remote handles where they shouldn't be More...
 
ErrorCode assign_entities_part (std::vector< EntityHandle > &entities, const int proc)
 assign entities to the input processor part More...
 
ErrorCode remove_entities_part (Range &entities, const int proc)
 remove entities from the input processor part More...
 
void delete_all_buffers ()
 delete all message buffers, freeing their memory More...
 

Private Attributes

Interface * mbImpl
 MB interface associated with this ParallelComm. More...
 
ProcConfig procConfig
 Proc config object; keeps info on the parallel environment (rank, size, communicator). More...
 
SequenceManager * sequenceManager
 Sequence manager, to get more efficient access to entities. More...
 
Error * errorHandler
 Error handler. More...
 
std::vector< Buffer * > localOwnedBuffs
 proc-specific data buffers More...
 
std::vector< Buffer * > remoteOwnedBuffs
 
std::vector< MPI_Request > sendReqs
 request objects, may be used if store_remote_handles is used More...
 
std::vector< MPI_Request > recvReqs
 receive request objects More...
 
std::vector< MPI_Request > recvRemotehReqs
 
std::vector< unsigned int > buffProcs
 processor rank for each buffer index More...
 
Range partitionSets
 the partition and interface sets for this communication instance More...
 
Range interfaceSets
 
std::set< EntityHandle > sharedEnts
 all local entities shared with others, whether ghost or ghosted More...
 
Tag sharedpTag
 tags used to save sharing procs and handles More...
 
Tag sharedpsTag
 
Tag sharedhTag
 
Tag sharedhsTag
 
Tag pstatusTag
 
Tag ifaceSetsTag
 
Tag partitionTag
 
int globalPartCount
 Cache of global part count. More...
 
EntityHandle partitioningSet
 entity set containing all parts More...
 
std::ofstream myFile
 
int pcommID
 
int ackbuff
 
DebugOutput * myDebug
 used to set verbosity level and to report output More...
 
SharedSetData * sharedSetData
 Data about shared sets. More...
 

Friends

class ParallelMergeMesh
 

Detailed Description

Parallel communications in MOAB.

Author
Tim Tautges

This class implements methods to communicate mesh between processors.

Examples
ComputeTriDual.cpp and LaplacianSmoother.cpp.

Definition at line 54 of file ParallelComm.hpp.
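
A minimal usage sketch (not from the MOAB sources): read a pre-partitioned mesh, one part per rank, then ghost one layer of elements. The file name and read-option string are illustrative placeholders; adjust them to your mesh and build configuration.

    #include "moab/Core.hpp"
    #include "moab/ParallelComm.hpp"
    #include <mpi.h>

    int main( int argc, char** argv )
    {
        MPI_Init( &argc, &argv );
        {
            moab::Core mb;
            moab::ParallelComm pcomm( &mb, MPI_COMM_WORLD );
            // Read one part per rank; the option string is typical but application-specific
            moab::ErrorCode rval = mb.load_file(
                "mesh.h5m", 0,
                "PARALLEL=READ_PART;PARTITION=PARALLEL_PARTITION;PARALLEL_RESOLVE_SHARED_ENTS" );
            // Ghost one layer of 3D elements, bridged through vertices (dim 0)
            if( moab::MB_SUCCESS == rval ) rval = pcomm.exchange_ghost_cells( 3, 0, 1, 0, true );
        }
        MPI_Finalize();
        return 0;
    }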

Constructor & Destructor Documentation

◆ ParallelComm() [1/2]

moab::ParallelComm::ParallelComm ( Interface *  impl,
MPI_Comm  comm,
int *  pcomm_id_out = 0 
)

constructor

Definition at line 313 of file ParallelComm.cpp.

314  : mbImpl( impl ), procConfig( cm ), sharedpTag( 0 ), sharedpsTag( 0 ), sharedhTag( 0 ), sharedhsTag( 0 ),
316  myDebug( NULL )
317 {
318  initialize();
319  sharedSetData = new SharedSetData( *impl, pcommID, procConfig.proc_rank() );
320  if( id ) *id = pcommID;
321 }

References initialize(), pcommID, moab::ProcConfig::proc_rank(), procConfig, and sharedSetData.

Referenced by get_pcomm().
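
For illustration, a sketch of creating an instance and recording its ID so the same instance can be looked up again later (assumes MPI is already initialized; function name is hypothetical):

    #include "moab/Core.hpp"
    #include "moab/ParallelComm.hpp"
    #include <mpi.h>
    #include <cassert>

    void make_pcomm( moab::Interface* mb )
    {
        int pcomm_id = -1;
        moab::ParallelComm* pcomm = new moab::ParallelComm( mb, MPI_COMM_WORLD, &pcomm_id );
        // Elsewhere in the application, the ID recovers the same instance
        moab::ParallelComm* same = moab::ParallelComm::get_pcomm( mb, pcomm_id );
        assert( same == pcomm );
    }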

◆ ParallelComm() [2/2]

moab::ParallelComm::ParallelComm ( Interface *  impl,
std::vector< unsigned char > &  tmp_buff,
MPI_Comm  comm,
int *  pcomm_id_out = 0 
)

constructor taking packed buffer, for testing

Definition at line 323 of file ParallelComm.cpp.

324  : mbImpl( impl ), procConfig( cm ), sharedpTag( 0 ), sharedpsTag( 0 ), sharedhTag( 0 ), sharedhsTag( 0 ),
326  myDebug( NULL )
327 {
328  initialize();
329  sharedSetData = new SharedSetData( *impl, pcommID, procConfig.proc_rank() );
330  if( id ) *id = pcommID;
331 }

References initialize(), pcommID, moab::ProcConfig::proc_rank(), procConfig, and sharedSetData.

◆ ~ParallelComm()

moab::ParallelComm::~ParallelComm ( )

destructor

Definition at line 333 of file ParallelComm.cpp.

334 {
335  remove_pcomm( this );
336  delete_all_buffers();
337  delete myDebug;
338  delete sharedSetData;
339 }

References delete_all_buffers(), myDebug, remove_pcomm(), and sharedSetData.

Member Function Documentation

◆ add_pcomm()

int moab::ParallelComm::add_pcomm ( ParallelComm *  pc)
private

add a pc to the iface instance tag PARALLEL_COMM

Definition at line 374 of file ParallelComm.cpp.

375 {
376  // Add this pcomm to instance tag
377  std::vector< ParallelComm* > pc_array( MAX_SHARING_PROCS, (ParallelComm*)NULL );
378  Tag pc_tag = pcomm_tag( mbImpl, true );
379  assert( 0 != pc_tag );
380 
381  const EntityHandle root = 0;
382  ErrorCode result = mbImpl->tag_get_data( pc_tag, &root, 1, (void*)&pc_array[0] );
383  if( MB_SUCCESS != result && MB_TAG_NOT_FOUND != result ) return -1;
384  int index = 0;
385  while( index < MAX_SHARING_PROCS && pc_array[index] )
386  index++;
387  if( index == MAX_SHARING_PROCS )
388  {
389  index = -1;
390  assert( false );
391  }
392  else
393  {
394  pc_array[index] = pc;
395  mbImpl->tag_set_data( pc_tag, &root, 1, (void*)&pc_array[0] );
396  }
397  return index;
398 }

References ErrorCode, MAX_SHARING_PROCS, MB_SUCCESS, MB_TAG_NOT_FOUND, mbImpl, pcomm_tag(), moab::Interface::tag_get_data(), and moab::Interface::tag_set_data().

Referenced by initialize().
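
Because add_pcomm registers each instance in this root-set tag, every ParallelComm on an Interface can be enumerated; a sketch (function name is hypothetical):

    #include "moab/ParallelComm.hpp"
    #include <iostream>
    #include <vector>

    void list_pcomms( moab::Interface* mb )
    {
        std::vector< moab::ParallelComm* > all;
        if( moab::MB_SUCCESS == moab::ParallelComm::get_all_pcomm( mb, all ) )
            for( size_t i = 0; i < all.size(); i++ )
                std::cout << "pcomm " << all[i]->get_id() << " on rank " << all[i]->rank() << std::endl;
    }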

◆ add_verts()

ErrorCode moab::ParallelComm::add_verts ( Range &  sent_ents)
private

add vertices adjacent to entities in this list

Definition at line 7502 of file ParallelComm.cpp.

7503 {
7504  // Get the verts adj to these entities, since we'll have to send those too
7505 
7506  // First check sets
7507  std::pair< Range::const_iterator, Range::const_iterator > set_range = sent_ents.equal_range( MBENTITYSET );
7508  ErrorCode result = MB_SUCCESS, tmp_result;
7509  for( Range::const_iterator rit = set_range.first; rit != set_range.second; ++rit )
7510  {
7511  tmp_result = mbImpl->get_entities_by_type( *rit, MBVERTEX, sent_ents );MB_CHK_SET_ERR( tmp_result, "Failed to get contained verts" );
7512  }
7513 
7514  // Now non-sets
7515  Range tmp_ents;
7516  std::copy( sent_ents.begin(), set_range.first, range_inserter( tmp_ents ) );
7517  result = mbImpl->get_adjacencies( tmp_ents, 0, false, sent_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get vertices adj to ghosted ents" );
7518 
7519  // if polyhedra, need to add all faces from there
7520  Range polyhedra = sent_ents.subset_by_type( MBPOLYHEDRON );
7521  // get all faces adjacent to every polyhedra
7522  result = mbImpl->get_connectivity( polyhedra, sent_ents );MB_CHK_SET_ERR( result, "Failed to get polyhedra faces" );
7523  return result;
7524 }

References moab::Range::begin(), moab::Range::equal_range(), ErrorCode, moab::Interface::get_adjacencies(), moab::Interface::get_connectivity(), moab::Interface::get_entities_by_type(), MB_CHK_SET_ERR, MB_SUCCESS, MBENTITYSET, mbImpl, MBPOLYHEDRON, MBVERTEX, moab::Range::subset_by_type(), and moab::Interface::UNION.

Referenced by broadcast_entities(), exchange_owned_mesh(), get_ghosted_entities(), scatter_entities(), and send_entities().
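
The vertex closure computed here can be reproduced with the public Interface API; a sketch (function name is hypothetical, and the set and polyhedron handling above is omitted):

    #include "moab/Core.hpp"

    // Complete a range of elements with their adjacent vertices.
    moab::ErrorCode close_with_verts( moab::Interface& mb, moab::Range& ents )
    {
        moab::Range elems = ents;  // iterate a copy while accumulating into ents
        return mb.get_adjacencies( elems, 0, false, ents, moab::Interface::UNION );
    }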

◆ assign_entities_part()

ErrorCode moab::ParallelComm::assign_entities_part ( std::vector< EntityHandle > &  entities,
const int  proc 
)
private

assign entities to the input processor part

Definition at line 7297 of file ParallelComm.cpp.

7298 {
7299  EntityHandle part_set;
7300  ErrorCode result = get_part_handle( proc, part_set );MB_CHK_SET_ERR( result, "Failed to get part handle" );
7301 
7302  if( part_set > 0 )
7303  {
7304  result = mbImpl->add_entities( part_set, &entities[0], entities.size() );MB_CHK_SET_ERR( result, "Failed to add entities to part set" );
7305  }
7306 
7307  return MB_SUCCESS;
7308 }

References moab::Interface::add_entities(), entities, ErrorCode, get_part_handle(), MB_CHK_SET_ERR, MB_SUCCESS, and mbImpl.

Referenced by exchange_owned_mesh(), and recv_entities().

◆ assign_global_ids() [1/2]

ErrorCode moab::ParallelComm::assign_global_ids ( EntityHandle  this_set,
const int  dimension,
const int  start_id = 1,
const bool  largest_dim_only = true,
const bool  parallel = true,
const bool  owned_only = false 
)

assign a global id space, for largest-dimension or all entities (and in either case for vertices too)

Assign a global id space, for largest-dimension or all entities (and in either case for vertices too)

Parameters
owned_only: If true, do not get global IDs for non-owned entities from remote processors.
Examples
ComputeTriDual.cpp.

Definition at line 421 of file ParallelComm.cpp.

427 {
428  Range entities[4];
429  ErrorCode result;
430  std::vector< unsigned char > pstatus;
431  for( int dim = 0; dim <= dimension; dim++ )
432  {
433  if( dim == 0 || !largest_dim_only || dim == dimension )
434  {
435  result = mbImpl->get_entities_by_dimension( this_set, dim, entities[dim] );MB_CHK_SET_ERR( result, "Failed to get vertices in assign_global_ids" );
436  }
437 
438  // Need to filter out non-locally-owned entities!!!
439  pstatus.resize( entities[dim].size() );
440  result = mbImpl->tag_get_data( pstatus_tag(), entities[dim], &pstatus[0] );MB_CHK_SET_ERR( result, "Failed to get pstatus in assign_global_ids" );
441 
442  Range dum_range;
443  Range::iterator rit;
444  unsigned int i;
445  for( rit = entities[dim].begin(), i = 0; rit != entities[dim].end(); ++rit, i++ )
446  if( pstatus[i] & PSTATUS_NOT_OWNED ) dum_range.insert( *rit );
447  entities[dim] = subtract( entities[dim], dum_range );
448  }
449 
450  return assign_global_ids( entities, dimension, start_id, parallel, owned_only );
451 }

References dim, entities, ErrorCode, moab::Interface::get_entities_by_dimension(), moab::Range::insert(), MB_CHK_SET_ERR, mbImpl, PSTATUS_NOT_OWNED, pstatus_tag(), size(), moab::subtract(), and moab::Interface::tag_get_data().

Referenced by check_global_ids(), compute_dual_mesh(), create_fine_mesh(), moab::NCHelperDomain::create_mesh(), moab::NCHelperScrip::create_mesh(), main(), and resolve_shared_ents().
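
A sketch of typical use, numbering the whole mesh after shared entities have been resolved (assumes an initialized pcomm; function name is hypothetical):

    #include "moab/ParallelComm.hpp"

    // Collectively assign IDs starting at 1 to owned 3D elements (and vertices);
    // with owned_only false (the default), ghosts then receive IDs from owners.
    moab::ErrorCode number_mesh( moab::ParallelComm& pcomm )
    {
        return pcomm.assign_global_ids( 0 /* whole mesh */, 3 /* dimension */ );
    }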

◆ assign_global_ids() [2/2]

ErrorCode moab::ParallelComm::assign_global_ids ( Range  entities[],
const int  dimension,
const int  start_id,
const bool  parallel,
const bool  owned_only 
)

assign a global id space, for largest-dimension or all entities (and in either case for vertices too)

Assign a global id space, for largest-dimension or all entities (and in either case for vertices too)

Definition at line 455 of file ParallelComm.cpp.

460 {
461  int local_num_elements[4];
462  ErrorCode result;
463  for( int dim = 0; dim <= dimension; dim++ )
464  {
465  local_num_elements[dim] = entities[dim].size();
466  }
467 
468  // Communicate numbers
469  std::vector< int > num_elements( procConfig.proc_size() * 4 );
470 #ifdef MOAB_HAVE_MPI
471  if( procConfig.proc_size() > 1 && parallel )
472  {
473  int retval =
474  MPI_Allgather( local_num_elements, 4, MPI_INT, &num_elements[0], 4, MPI_INT, procConfig.proc_comm() );
475  if( 0 != retval ) return MB_FAILURE;
476  }
477  else
478 #endif
479  for( int dim = 0; dim < 4; dim++ )
480  num_elements[dim] = local_num_elements[dim];
481 
482  // My entities start at one greater than total_elems[d]
483  int total_elems[4] = { start_id, start_id, start_id, start_id };
484 
485  for( unsigned int proc = 0; proc < procConfig.proc_rank(); proc++ )
486  {
487  for( int dim = 0; dim < 4; dim++ )
488  total_elems[dim] += num_elements[4 * proc + dim];
489  }
490 
491  // Assign global ids now
492  Tag gid_tag = mbImpl->globalId_tag();
493 
494  for( int dim = 0; dim < 4; dim++ )
495  {
496  if( entities[dim].empty() ) continue;
497  num_elements.resize( entities[dim].size() );
498  int i = 0;
499  for( Range::iterator rit = entities[dim].begin(); rit != entities[dim].end(); ++rit )
500  num_elements[i++] = total_elems[dim]++;
501 
502  result = mbImpl->tag_set_data( gid_tag, entities[dim], &num_elements[0] );MB_CHK_SET_ERR( result, "Failed to set global id tag in assign_global_ids" );
503  }
504 
505  if( owned_only ) return MB_SUCCESS;
506 
507  // Exchange tags
508  for( int dim = 1; dim < 4; dim++ )
509  entities[0].merge( entities[dim] );
510 
511  return exchange_tags( gid_tag, entities[0] );
512 }

References dim, entities, ErrorCode, exchange_tags(), moab::Interface::globalId_tag(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), moab::ProcConfig::proc_size(), procConfig, size(), and moab::Interface::tag_set_data().

◆ augment_default_sets_with_ghosts()

ErrorCode moab::ParallelComm::augment_default_sets_with_ghosts ( EntityHandle  file_set)

Extend shared sets with ghost entities. After ghosting, ghost entities do not yet have information about the material, partition, Neumann, or Dirichlet sets they may belong to. This method assigns ghosted entities to those special entity sets. In some cases we may even have to create those sets, if they do not yet exist on the local processor.

The special entity sets each have a unique identifier, in the form of an integer tag on the set. The shared-sets data is not used, because we do not use the geometry sets: they are not uniquely identified.

Parameters
file_set: file set used per application

Definition at line 4766 of file ParallelComm.cpp.

4767 {
4768  // gather all default sets we are interested in, material, neumann, etc
4769  // we will skip geometry sets, because they are not uniquely identified with their tag value
4770  // maybe we will add another tag, like category
4771 
4772  if( procConfig.proc_size() < 2 ) return MB_SUCCESS; // no reason to stop by
4773  const char* const shared_set_tag_names[] = { MATERIAL_SET_TAG_NAME, DIRICHLET_SET_TAG_NAME, NEUMANN_SET_TAG_NAME,
4774  PARALLEL_PARTITION_TAG_NAME };
4775 
4776  int num_tags = sizeof( shared_set_tag_names ) / sizeof( shared_set_tag_names[0] );
4777 
4778  Range* rangeSets = new Range[num_tags];
4779  Tag* tags = new Tag[num_tags + 1]; // one extra for global id tag, which is an int, so far
4780 
4781  int my_rank = rank();
4782  int** tagVals = new int*[num_tags];
4783  for( int i = 0; i < num_tags; i++ )
4784  tagVals[i] = NULL;
4785  ErrorCode rval;
4786 
4787  // for each tag, we keep a local map, from the value to the actual set with that value
4788  // we assume that the tag values are unique, for a given set, otherwise we
4789  // do not know to which set to add the entity
4790 
4791  typedef std::map< int, EntityHandle > MVal;
4792  typedef std::map< int, EntityHandle >::iterator itMVal;
4793  MVal* localMaps = new MVal[num_tags];
4794 
4795  for( int i = 0; i < num_tags; i++ )
4796  {
4797 
4798  rval = mbImpl->tag_get_handle( shared_set_tag_names[i], 1, MB_TYPE_INTEGER, tags[i], MB_TAG_ANY );
4799  if( MB_SUCCESS != rval ) continue;
4800  rval = mbImpl->get_entities_by_type_and_tag( file_set, MBENTITYSET, &( tags[i] ), 0, 1, rangeSets[i],
4801  Interface::UNION );MB_CHK_SET_ERR( rval, "can't get sets with a tag" );
4802 
4803  if( rangeSets[i].size() > 0 )
4804  {
4805  tagVals[i] = new int[rangeSets[i].size()];
4806  // fill up with the tag values
4807  rval = mbImpl->tag_get_data( tags[i], rangeSets[i], tagVals[i] );MB_CHK_SET_ERR( rval, "can't get set tag values" );
4808  // now for inverse mapping:
4809  for( int j = 0; j < (int)rangeSets[i].size(); j++ )
4810  {
4811  localMaps[i][tagVals[i][j]] = rangeSets[i][j];
4812  }
4813  }
4814  }
4815  // get the global id tag too
4816  tags[num_tags] = mbImpl->globalId_tag();
4817 
4818  TupleList remoteEnts;
4819  // processor to send to, type of tag (0-mat,) tag value, remote handle
4820  // 1-diri
4821  // 2-neum
4822  // 3-part
4823  //
4824  int initialSize = (int)sharedEnts.size(); // estimate that on average, each shared ent
4825  // will be sent to one processor, for one tag
4826  // we will actually send only entities that are owned locally, and from those
4827  // only those that do have a special tag (material, neumann, etc)
4828  // if we exceed the capacity, we resize the tuple
4829  remoteEnts.initialize( 3, 0, 1, 0, initialSize );
4830  remoteEnts.enableWriteAccess();
4831 
4832  // now, for each owned entity, get the remote handle(s) and Proc(s), and verify if it
4833  // belongs to one of the sets; if yes, create a tuple and append it
4834 
4835  std::set< EntityHandle > own_and_sha;
4836  int ir = 0, jr = 0;
4837  for( std::set< EntityHandle >::iterator vit = sharedEnts.begin(); vit != sharedEnts.end(); ++vit )
4838  {
4839  // ghosted eh
4840  EntityHandle geh = *vit;
4841  if( own_and_sha.find( geh ) != own_and_sha.end() ) // already encountered
4842  continue;
4843  int procs[MAX_SHARING_PROCS];
4844  EntityHandle handles[MAX_SHARING_PROCS];
4845  int nprocs;
4846  unsigned char pstat;
4847  rval = get_sharing_data( geh, procs, handles, pstat, nprocs );
4848  if( rval != MB_SUCCESS )
4849  {
4850  for( int i = 0; i < num_tags; i++ )
4851  delete[] tagVals[i];
4852  delete[] tagVals;
4853 
4854  MB_CHK_SET_ERR( rval, "Failed to get sharing data" );
4855  }
4856  if( pstat & PSTATUS_NOT_OWNED ) continue; // we will send info only for entities that we own
4857  own_and_sha.insert( geh );
4858  for( int i = 0; i < num_tags; i++ )
4859  {
4860  for( int j = 0; j < (int)rangeSets[i].size(); j++ )
4861  {
4862  EntityHandle specialSet = rangeSets[i][j]; // this set has tag i, value tagVals[i][j];
4863  if( mbImpl->contains_entities( specialSet, &geh, 1 ) )
4864  {
4865  // this ghosted entity is in a special set, so form the tuple
4866  // to send to the processors that do not own this
4867  for( int k = 0; k < nprocs; k++ )
4868  {
4869  if( procs[k] != my_rank )
4870  {
4871  if( remoteEnts.get_n() >= remoteEnts.get_max() - 1 )
4872  {
4873  // resize, so we do not overflow
4874  int oldSize = remoteEnts.get_max();
4875  // increase with 50% the capacity
4876  remoteEnts.resize( oldSize + oldSize / 2 + 1 );
4877  }
4878  remoteEnts.vi_wr[ir++] = procs[k]; // send to proc
4879  remoteEnts.vi_wr[ir++] = i; // for the tags [i] (0-3)
4880  remoteEnts.vi_wr[ir++] = tagVals[i][j]; // actual value of the tag
4881  remoteEnts.vul_wr[jr++] = handles[k];
4882  remoteEnts.inc_n();
4883  }
4884  }
4885  }
4886  }
4887  }
4888  // if the local entity has a global id, send it too, so we avoid
4889  // another "exchange_tags" for global id
4890  int gid;
4891  rval = mbImpl->tag_get_data( tags[num_tags], &geh, 1, &gid );MB_CHK_SET_ERR( rval, "Failed to get global id" );
4892  if( gid != 0 )
4893  {
4894  for( int k = 0; k < nprocs; k++ )
4895  {
4896  if( procs[k] != my_rank )
4897  {
4898  if( remoteEnts.get_n() >= remoteEnts.get_max() - 1 )
4899  {
4900  // resize, so we do not overflow
4901  int oldSize = remoteEnts.get_max();
4902  // increase with 50% the capacity
4903  remoteEnts.resize( oldSize + oldSize / 2 + 1 );
4904  }
4905  remoteEnts.vi_wr[ir++] = procs[k]; // send to proc
4906  remoteEnts.vi_wr[ir++] = num_tags; // for the tags [j] (4)
4907  remoteEnts.vi_wr[ir++] = gid; // actual value of the tag
4908  remoteEnts.vul_wr[jr++] = handles[k];
4909  remoteEnts.inc_n();
4910  }
4911  }
4912  }
4913  }
4914 
4915 #ifndef NDEBUG
4916  if( my_rank == 1 && 1 == get_debug_verbosity() ) remoteEnts.print( " on rank 1, before augment routing" );
4917  MPI_Barrier( procConfig.proc_comm() );
4918  int sentEnts = remoteEnts.get_n();
4919  assert( ( sentEnts == jr ) && ( 3 * sentEnts == ir ) );
4920 #endif
4921  // exchange the info now, and send to
4922  gs_data::crystal_data* cd = this->procConfig.crystal_router();
4923  // All communication happens here; no other mpi calls
4924  // Also, this is a collective call
4925  rval = cd->gs_transfer( 1, remoteEnts, 0 );MB_CHK_SET_ERR( rval, "Error in tuple transfer" );
4926 #ifndef NDEBUG
4927  if( my_rank == 0 && 1 == get_debug_verbosity() ) remoteEnts.print( " on rank 0, after augment routing" );
4928  MPI_Barrier( procConfig.proc_comm() );
4929 #endif
4930 
4931  // now process the data received from other processors
4932  int received = remoteEnts.get_n();
4933  for( int i = 0; i < received; i++ )
4934  {
4935  // int from = ents_to_delete.vi_rd[i];
4936  EntityHandle geh = (EntityHandle)remoteEnts.vul_rd[i];
4937  int from_proc = remoteEnts.vi_rd[3 * i];
4938  if( my_rank == from_proc )
4939  std::cout << " unexpected receive from my rank " << my_rank << " during augmenting with ghosts\n ";
4940  int tag_type = remoteEnts.vi_rd[3 * i + 1];
4941  assert( ( 0 <= tag_type ) && ( tag_type <= num_tags ) );
4942  int value = remoteEnts.vi_rd[3 * i + 2];
4943  if( tag_type == num_tags )
4944  {
4945  // it is global id
4946  rval = mbImpl->tag_set_data( tags[num_tags], &geh, 1, &value );MB_CHK_SET_ERR( rval, "Error in setting gid tag" );
4947  }
4948  else
4949  {
4950  // now, based on value and tag type, see if we have that value in the map
4951  MVal& lmap = localMaps[tag_type];
4952  itMVal itm = lmap.find( value );
4953  if( itm == lmap.end() )
4954  {
4955  // the value was not found yet in the local map, so we have to create the set
4956  EntityHandle newSet;
4957  rval = mbImpl->create_meshset( MESHSET_SET, newSet );MB_CHK_SET_ERR( rval, "can't create new set" );
4958  lmap[value] = newSet;
4959  // set the tag value
4960  rval = mbImpl->tag_set_data( tags[tag_type], &newSet, 1, &value );MB_CHK_SET_ERR( rval, "can't set tag for new set" );
4961 
4962  // we also need to add the new created set to the file set, if not null
4963  if( file_set )
4964  {
4965  rval = mbImpl->add_entities( file_set, &newSet, 1 );MB_CHK_SET_ERR( rval, "can't add new set to the file set" );
4966  }
4967  }
4968  // add the entity to the set pointed to by the map
4969  rval = mbImpl->add_entities( lmap[value], &geh, 1 );MB_CHK_SET_ERR( rval, "can't add ghost ent to the set" );
4970  }
4971  }
4972 
4973  for( int i = 0; i < num_tags; i++ )
4974  delete[] tagVals[i];
4975  delete[] tagVals;
4976  delete[] rangeSets;
4977  delete[] tags;
4978  delete[] localMaps;
4979  return MB_SUCCESS;
4980 }

References moab::Interface::add_entities(), moab::Interface::contains_entities(), moab::Interface::create_meshset(), moab::ProcConfig::crystal_router(), DIRICHLET_SET_TAG_NAME, moab::TupleList::enableWriteAccess(), ErrorCode, get_debug_verbosity(), moab::Interface::get_entities_by_type_and_tag(), moab::TupleList::get_max(), moab::TupleList::get_n(), get_sharing_data(), moab::Interface::globalId_tag(), moab::TupleList::inc_n(), moab::TupleList::initialize(), MATERIAL_SET_TAG_NAME, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, MB_TAG_ANY, MB_TYPE_INTEGER, MBENTITYSET, mbImpl, MESHSET_SET, NEUMANN_SET_TAG_NAME, PARALLEL_PARTITION_TAG_NAME, moab::TupleList::print(), moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_size(), procConfig, PSTATUS_NOT_OWNED, rank(), moab::TupleList::resize(), sharedEnts, moab::Range::size(), size(), moab::Interface::tag_get_data(), moab::Interface::tag_get_handle(), moab::Interface::tag_set_data(), moab::Interface::UNION, moab::TupleList::vi_rd, moab::TupleList::vi_wr, moab::TupleList::vul_rd, and moab::TupleList::vul_wr.

Referenced by moab::ReadParallel::load_file().
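
A sketch of the intended call sequence: ghost first, then augment, so the new ghosts are routed into the special sets they occupy on their owners (function name is hypothetical):

    #include "moab/ParallelComm.hpp"

    moab::ErrorCode ghost_and_augment( moab::ParallelComm& pcomm, moab::EntityHandle file_set )
    {
        // One layer of 3D ghost cells, bridged through vertices
        moab::ErrorCode rval = pcomm.exchange_ghost_cells( 3, 0, 1, 0, true );
        if( moab::MB_SUCCESS != rval ) return rval;
        return pcomm.augment_default_sets_with_ghosts( file_set );
    }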

◆ broadcast_entities()

ErrorCode moab::ParallelComm::broadcast_entities ( const int  from_proc,
Range &  entities,
const bool  adjacencies = false,
const bool  tags = true 
)

Broadcast all entities resident on from_proc to other processors This function assumes remote handles are not being stored, since (usually) every processor will know about the whole mesh.

Parameters
from_proc: Processor having the mesh to be broadcast
entities: On return, the entities sent or received in this call
adjacencies: If true, adjacencies are sent for equiv entities (currently unsupported)
tags: If true, all non-default-valued tags are sent for sent entities

Definition at line 536 of file ParallelComm.cpp.

540 {
541 #ifndef MOAB_HAVE_MPI
542  return MB_FAILURE;
543 #else
544 
545  ErrorCode result = MB_SUCCESS;
546  int success;
547  int buff_size;
548 
549  Buffer buff( INITIAL_BUFF_SIZE );
550  buff.reset_ptr( sizeof( int ) );
551  if( (int)procConfig.proc_rank() == from_proc )
552  {
553  result = add_verts( entities );MB_CHK_SET_ERR( result, "Failed to add adj vertices" );
554 
555  buff.reset_ptr( sizeof( int ) );
556  result = pack_buffer( entities, adjacencies, tags, false, -1, &buff );MB_CHK_SET_ERR( result, "Failed to compute buffer size in broadcast_entities" );
557  buff.set_stored_size();
558  buff_size = buff.buff_ptr - buff.mem_ptr;
559  }
560 
561  success = MPI_Bcast( &buff_size, 1, MPI_INT, from_proc, procConfig.proc_comm() );
562  if( MPI_SUCCESS != success )
563  {
564  MB_SET_ERR( MB_FAILURE, "MPI_Bcast of buffer size failed" );
565  }
566 
567  if( !buff_size ) // No data
568  return MB_SUCCESS;
569 
570  if( (int)procConfig.proc_rank() != from_proc ) buff.reserve( buff_size );
571 
572  size_t offset = 0;
573  while( buff_size )
574  {
575  int sz = std::min( buff_size, MAX_BCAST_SIZE );
576  success = MPI_Bcast( buff.mem_ptr + offset, sz, MPI_UNSIGNED_CHAR, from_proc, procConfig.proc_comm() );
577  if( MPI_SUCCESS != success )
578  {
579  MB_SET_ERR( MB_FAILURE, "MPI_Bcast of buffer failed" );
580  }
581 
582  offset += sz;
583  buff_size -= sz;
584  }
585 
586  if( (int)procConfig.proc_rank() != from_proc )
587  {
588  std::vector< std::vector< EntityHandle > > dum1a, dum1b;
589  std::vector< std::vector< int > > dum1p;
590  std::vector< EntityHandle > dum2, dum4;
591  std::vector< unsigned int > dum3;
592  buff.reset_ptr( sizeof( int ) );
593  result = unpack_buffer( buff.buff_ptr, false, from_proc, -1, dum1a, dum1b, dum1p, dum2, dum2, dum3, dum4 );MB_CHK_SET_ERR( result, "Failed to unpack buffer in broadcast_entities" );
594  std::copy( dum4.begin(), dum4.end(), range_inserter( entities ) );
595  }
596 
597  return MB_SUCCESS;
598 #endif
599 }

References add_verts(), moab::ParallelComm::Buffer::buff_ptr, entities, ErrorCode, INITIAL_BUFF_SIZE, moab::MAX_BCAST_SIZE, MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, moab::ParallelComm::Buffer::mem_ptr, pack_buffer(), moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, moab::ParallelComm::Buffer::reserve(), moab::ParallelComm::Buffer::reset_ptr(), moab::ParallelComm::Buffer::set_stored_size(), and unpack_buffer().

Referenced by moab::ReadParallel::load_file().
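
A sketch of the root-reads-then-broadcasts pattern this function supports (the file name is illustrative; the call is collective over the communicator):

    #include "moab/Core.hpp"
    #include "moab/ParallelComm.hpp"

    moab::ErrorCode read_and_bcast( moab::Interface& mb, moab::ParallelComm& pcomm )
    {
        moab::Range ents;
        if( 0 == pcomm.rank() )
        {
            moab::ErrorCode rval = mb.load_file( "small_mesh.h5m" );
            if( moab::MB_SUCCESS != rval ) return rval;
            rval = mb.get_entities_by_handle( 0, ents );
            if( moab::MB_SUCCESS != rval ) return rval;
        }
        return pcomm.broadcast_entities( 0, ents );  // receivers get 'ents' filled
    }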

◆ buff_procs()

const std::vector< unsigned int > & moab::ParallelComm::buff_procs ( ) const
inline

get buff processor vector

Definition at line 1569 of file ParallelComm.hpp.

1570 {
1571  return buffProcs;
1572 }

References buffProcs.

◆ build_sharedhps_list()

ErrorCode moab::ParallelComm::build_sharedhps_list ( const EntityHandle  entity,
const unsigned char  pstatus,
const int  sharedp,
const std::set< unsigned int > &  procs,
unsigned int &  num_ents,
int *  tmp_procs,
EntityHandle *  tmp_handles 
)
private

Definition at line 1748 of file ParallelComm.cpp.

1759 {
1760  num_ents = 0;
1761  unsigned char pstat;
1762  ErrorCode result = get_sharing_data( entity, tmp_procs, tmp_handles, pstat, num_ents );MB_CHK_SET_ERR( result, "Failed to get sharing data" );
1763  assert( pstat == pstatus );
1764 
1765  // Build shared proc/handle lists
1766  // Start with multi-shared, since if it is the owner will be first
1767  if( pstatus & PSTATUS_MULTISHARED )
1768  {
1769  }
1770  else if( pstatus & PSTATUS_NOT_OWNED )
1771  {
1772  // If not multishared and not owned, other sharing proc is owner, put that
1773  // one first
1774  assert( "If not owned, I should be shared too" && pstatus & PSTATUS_SHARED && 1 == num_ents );
1775  tmp_procs[1] = procConfig.proc_rank();
1776  tmp_handles[1] = entity;
1777  num_ents = 2;
1778  }
1779  else if( pstatus & PSTATUS_SHARED )
1780  {
1781  // If not multishared and owned, I'm owner
1782  assert( "shared and owned, should be only 1 sharing proc" && 1 == num_ents );
1783  tmp_procs[1] = tmp_procs[0];
1784  tmp_procs[0] = procConfig.proc_rank();
1785  tmp_handles[1] = tmp_handles[0];
1786  tmp_handles[0] = entity;
1787  num_ents = 2;
1788  }
1789  else
1790  {
1791  // Not shared yet, just add owner (me)
1792  tmp_procs[0] = procConfig.proc_rank();
1793  tmp_handles[0] = entity;
1794  num_ents = 1;
1795  }
1796 
1797 #ifndef NDEBUG
1798  int tmp_ps = num_ents;
1799 #endif
1800 
1801  // Now add others, with zero handle for now
1802  for( std::set< unsigned int >::iterator sit = procs.begin(); sit != procs.end(); ++sit )
1803  {
1804 #ifndef NDEBUG
1805  if( tmp_ps && std::find( tmp_procs, tmp_procs + tmp_ps, *sit ) != tmp_procs + tmp_ps )
1806  {
1807  std::cerr << "Trouble with something already in shared list on proc " << procConfig.proc_rank()
1808  << ". Entity:" << std::endl;
1809  list_entities( &entity, 1 );
1810  std::cerr << "pstatus = " << (int)pstatus << ", sharedp = " << sharedp << std::endl;
1811  std::cerr << "tmp_ps = ";
1812  for( int i = 0; i < tmp_ps; i++ )
1813  std::cerr << tmp_procs[i] << " ";
1814  std::cerr << std::endl;
1815  std::cerr << "procs = ";
1816  for( std::set< unsigned int >::iterator sit2 = procs.begin(); sit2 != procs.end(); ++sit2 )
1817  std::cerr << *sit2 << " ";
1818  assert( false );
1819  }
1820 #endif
1821  tmp_procs[num_ents] = *sit;
1822  tmp_handles[num_ents] = 0;
1823  num_ents++;
1824  }
1825 
1826  // Put -1 after procs and 0 after handles
1827  if( MAX_SHARING_PROCS > num_ents )
1828  {
1829  tmp_procs[num_ents] = -1;
1830  tmp_handles[num_ents] = 0;
1831  }
1832 
1833  return MB_SUCCESS;
1834 }

References ErrorCode, get_sharing_data(), list_entities(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, moab::ProcConfig::proc_rank(), procConfig, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, and PSTATUS_SHARED.

Referenced by pack_entities().

◆ check_all_shared_handles() [1/2]

ErrorCode moab::ParallelComm::check_all_shared_handles ( bool  print_em = false)

Call exchange_all_shared_handles, then compare the results with tag data on local shared entities.

Definition at line 8541 of file ParallelComm.cpp.

8542 {
8543  // Get all shared ent data from other procs
8544  std::vector< std::vector< SharedEntityData > > shents( buffProcs.size() ), send_data( buffProcs.size() );
8545 
8546  ErrorCode result;
8547  bool done = false;
8548 
8549  while( !done )
8550  {
8551  result = check_local_shared();
8552  if( MB_SUCCESS != result )
8553  {
8554  done = true;
8555  continue;
8556  }
8557 
8558  result = pack_shared_handles( send_data );
8559  if( MB_SUCCESS != result )
8560  {
8561  done = true;
8562  continue;
8563  }
8564 
8565  result = exchange_all_shared_handles( send_data, shents );
8566  if( MB_SUCCESS != result )
8567  {
8568  done = true;
8569  continue;
8570  }
8571 
8572  if( !shents.empty() ) result = check_my_shared_handles( shents );
8573  done = true;
8574  }
8575 
8576  if( MB_SUCCESS != result && print_em )
8577  {
8578 #ifdef MOAB_HAVE_HDF5
8579  std::ostringstream ent_str;
8580  ent_str << "mesh." << procConfig.proc_rank() << ".h5m";
8581  mbImpl->write_mesh( ent_str.str().c_str() );
8582 #endif
8583  }
8584 
8585  return result;
8586 }

References buffProcs, check_local_shared(), check_my_shared_handles(), ErrorCode, exchange_all_shared_handles(), MB_SUCCESS, mbImpl, pack_shared_handles(), moab::ProcConfig::proc_rank(), procConfig, and moab::Interface::write_mesh().

Referenced by exchange_ghost_cells(), main(), resolve_shared_ents(), moab::ScdInterface::tag_shared_vertices(), and test_intx_in_parallel_elem_based().
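
The check is collective, so every rank in the communicator must call it. A small sketch (the helper name is ours, not part of MOAB):

    #include <iostream>
    #include "moab/ParallelComm.hpp"
    using namespace moab;

    // Collective sanity check, e.g. after ghost exchange; with print_em = true
    // a failing rank also writes mesh.<rank>.h5m when HDF5 support is built in.
    void verify_shared_handles( ParallelComm& pcomm )
    {
        ErrorCode rval = pcomm.check_all_shared_handles( true );
        if( MB_SUCCESS != rval )
            std::cerr << "Shared-handle mismatch on rank " << pcomm.rank() << std::endl;
    }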

◆ check_all_shared_handles() [2/2]

ErrorCode moab::ParallelComm::check_all_shared_handles ( ParallelComm **  pcs,
int  num_pcs 
)
static

Definition at line 8715 of file ParallelComm.cpp.

8716 {
8717  std::vector< std::vector< std::vector< SharedEntityData > > > shents, send_data;
8718  ErrorCode result = MB_SUCCESS, tmp_result;
8719 
8720  // Get all shared ent data from each proc to all other procs
8721  send_data.resize( num_pcs );
8722  for( int p = 0; p < num_pcs; p++ )
8723  {
8724  tmp_result = pcs[p]->pack_shared_handles( send_data[p] );
8725  if( MB_SUCCESS != tmp_result ) result = tmp_result;
8726  }
8727  if( MB_SUCCESS != result ) return result;
8728 
8729  // Move the data sorted by sending proc to data sorted by receiving proc
8730  shents.resize( num_pcs );
8731  for( int p = 0; p < num_pcs; p++ )
8732  shents[p].resize( pcs[p]->buffProcs.size() );
8733 
8734  for( int p = 0; p < num_pcs; p++ )
8735  {
8736  for( unsigned int idx_p = 0; idx_p < pcs[p]->buffProcs.size(); idx_p++ )
8737  {
8738  // Move send_data[p][to_p] to shents[to_p][idx_p]
8739  int to_p = pcs[p]->buffProcs[idx_p];
8740  int top_idx_p = pcs[to_p]->get_buffers( p );
8741  assert( -1 != top_idx_p );
8742  shents[to_p][top_idx_p] = send_data[p][idx_p];
8743  }
8744  }
8745 
8746  for( int p = 0; p < num_pcs; p++ )
8747  {
8748  std::ostringstream ostr;
8749  ostr << "Processor " << p << " bad entities:";
8750  tmp_result = pcs[p]->check_my_shared_handles( shents[p], ostr.str().c_str() );
8751  if( MB_SUCCESS != tmp_result ) result = tmp_result;
8752  }
8753 
8754  return result;
8755 }

References buffProcs, check_my_shared_handles(), ErrorCode, get_buffers(), MB_SUCCESS, and pack_shared_handles().

◆ check_clean_iface()

ErrorCode moab::ParallelComm::check_clean_iface ( Range allsent)
private

Definition at line 6256 of file ParallelComm.cpp.

6257 {
6258  // allsent is all entities I think are on interface; go over them, looking
6259  // for zero-valued handles, and fix any I find
6260 
6261  // Keep lists of entities for which the sharing data changed, grouped
6262  // by set of sharing procs.
6263  typedef std::map< ProcList, Range > procmap_t;
6264  procmap_t old_procs, new_procs;
6265 
6266  ErrorCode result = MB_SUCCESS;
6267  Range::iterator rit;
6268  Range::reverse_iterator rvit;
6269  unsigned char pstatus;
6270  int nump;
6271  ProcList sharedp;
6272  EntityHandle sharedh[MAX_SHARING_PROCS];
6273  for( rvit = allsent.rbegin(); rvit != allsent.rend(); ++rvit )
6274  {
6275  result = get_sharing_data( *rvit, sharedp.procs, sharedh, pstatus, nump );MB_CHK_SET_ERR( result, "Failed to get sharing data" );
6276  assert( "Should be shared with at least one other proc" &&
6277  ( nump > 1 || sharedp.procs[0] != (int)procConfig.proc_rank() ) );
6278  assert( nump == MAX_SHARING_PROCS || sharedp.procs[nump] == -1 );
6279 
6280  // Look for first null handle in list
6281  int idx = std::find( sharedh, sharedh + nump, (EntityHandle)0 ) - sharedh;
6282  if( idx == nump ) continue; // All handles are valid
6283 
6284  ProcList old_list( sharedp );
6285  std::sort( old_list.procs, old_list.procs + nump );
6286  old_procs[old_list].insert( *rvit );
6287 
6288  // Remove null handles and corresponding proc ranks from lists
6289  int new_nump = idx;
6290  bool removed_owner = !idx;
6291  for( ++idx; idx < nump; ++idx )
6292  {
6293  if( sharedh[idx] )
6294  {
6295  sharedh[new_nump] = sharedh[idx];
6296  sharedp.procs[new_nump] = sharedp.procs[idx];
6297  ++new_nump;
6298  }
6299  }
6300  sharedp.procs[new_nump] = -1;
6301 
6302  if( removed_owner && new_nump > 1 )
6303  {
6304  // The proc that we choose as the entity owner isn't sharing the
6305  // entity (doesn't have a copy of it). We need to pick a different
6306  // owner. Choose the proc with lowest rank.
6307  idx = std::min_element( sharedp.procs, sharedp.procs + new_nump ) - sharedp.procs;
6308  std::swap( sharedp.procs[0], sharedp.procs[idx] );
6309  std::swap( sharedh[0], sharedh[idx] );
6310  if( sharedp.procs[0] == (int)proc_config().proc_rank() ) pstatus &= ~PSTATUS_NOT_OWNED;
6311  }
6312 
6313  result = set_sharing_data( *rvit, pstatus, nump, new_nump, sharedp.procs, sharedh );MB_CHK_SET_ERR( result, "Failed to set sharing data in check_clean_iface" );
6314 
6315  if( new_nump > 1 )
6316  {
6317  if( new_nump == 2 )
6318  {
6319  if( sharedp.procs[1] != (int)proc_config().proc_rank() )
6320  {
6321  assert( sharedp.procs[0] == (int)proc_config().proc_rank() );
6322  sharedp.procs[0] = sharedp.procs[1];
6323  }
6324  sharedp.procs[1] = -1;
6325  }
6326  else
6327  {
6328  std::sort( sharedp.procs, sharedp.procs + new_nump );
6329  }
6330  new_procs[sharedp].insert( *rvit );
6331  }
6332  }
6333 
6334  if( old_procs.empty() )
6335  {
6336  assert( new_procs.empty() );
6337  return MB_SUCCESS;
6338  }
6339 
6340  // Update interface sets
6341  procmap_t::iterator pmit;
6342  // std::vector<unsigned char> pstatus_list;
6343  rit = interface_sets().begin();
6344  while( rit != interface_sets().end() )
6345  {
6346  result = get_sharing_data( *rit, sharedp.procs, sharedh, pstatus, nump );MB_CHK_SET_ERR( result, "Failed to get sharing data for interface set" );
6347  assert( nump != 2 );
6348  std::sort( sharedp.procs, sharedp.procs + nump );
6349  assert( nump == MAX_SHARING_PROCS || sharedp.procs[nump] == -1 );
6350 
6351  pmit = old_procs.find( sharedp );
6352  if( pmit != old_procs.end() )
6353  {
6354  result = mbImpl->remove_entities( *rit, pmit->second );MB_CHK_SET_ERR( result, "Failed to remove entities from interface set" );
6355  }
6356 
6357  pmit = new_procs.find( sharedp );
6358  if( pmit == new_procs.end() )
6359  {
6360  int count;
6361  result = mbImpl->get_number_entities_by_handle( *rit, count );MB_CHK_SET_ERR( result, "Failed to get number of entities in interface set" );
6362  if( !count )
6363  {
6364  result = mbImpl->delete_entities( &*rit, 1 );MB_CHK_SET_ERR( result, "Failed to delete entities from interface set" );
6365  rit = interface_sets().erase( rit );
6366  }
6367  else
6368  {
6369  ++rit;
6370  }
6371  }
6372  else
6373  {
6374  result = mbImpl->add_entities( *rit, pmit->second );MB_CHK_SET_ERR( result, "Failed to add entities to interface set" );
6375 
6376  // Remove those that we've processed so that we know which ones
6377  // are new.
6378  new_procs.erase( pmit );
6379  ++rit;
6380  }
6381  }
6382 
6383  // Create interface sets for new proc id combinations
6384  std::fill( sharedh, sharedh + MAX_SHARING_PROCS, 0 );
6385  for( pmit = new_procs.begin(); pmit != new_procs.end(); ++pmit )
6386  {
6387  EntityHandle new_set;
6388  result = mbImpl->create_meshset( MESHSET_SET, new_set );MB_CHK_SET_ERR( result, "Failed to create interface set" );
6389  interfaceSets.insert( new_set );
6390 
6391  // Add entities
6392  result = mbImpl->add_entities( new_set, pmit->second );MB_CHK_SET_ERR( result, "Failed to add entities to interface set" );
6393  // Tag set with the proc rank(s)
6394  assert( pmit->first.procs[0] >= 0 );
6395  pstatus = PSTATUS_SHARED | PSTATUS_INTERFACE;
6396  if( pmit->first.procs[1] == -1 )
6397  {
6398  int other = pmit->first.procs[0];
6399  assert( other != (int)procConfig.proc_rank() );
6400  result = mbImpl->tag_set_data( sharedp_tag(), &new_set, 1, pmit->first.procs );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
6401  sharedh[0] = 0;
6402  result = mbImpl->tag_set_data( sharedh_tag(), &new_set, 1, sharedh );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
6403  if( other < (int)proc_config().proc_rank() ) pstatus |= PSTATUS_NOT_OWNED;
6404  }
6405  else
6406  {
6407  result = mbImpl->tag_set_data( sharedps_tag(), &new_set, 1, pmit->first.procs );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
6408  result = mbImpl->tag_set_data( sharedhs_tag(), &new_set, 1, sharedh );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
6409  pstatus |= PSTATUS_MULTISHARED;
6410  if( pmit->first.procs[0] < (int)proc_config().proc_rank() ) pstatus |= PSTATUS_NOT_OWNED;
6411  }
6412 
6413  result = mbImpl->tag_set_data( pstatus_tag(), &new_set, 1, &pstatus );MB_CHK_SET_ERR( result, "Failed to tag interface set with pstatus" );
6414 
6415  // Set pstatus on all interface entities in set
6416  result = mbImpl->tag_clear_data( pstatus_tag(), pmit->second, &pstatus );MB_CHK_SET_ERR( result, "Failed to tag interface entities with pstatus" );
6417  }
6418 
6419  return MB_SUCCESS;
6420 }

References moab::Interface::add_entities(), moab::Range::begin(), moab::Interface::create_meshset(), moab::Interface::delete_entities(), moab::Range::erase(), ErrorCode, moab::Interface::get_number_entities_by_handle(), get_sharing_data(), moab::Range::insert(), interface_sets(), interfaceSets, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, MESHSET_SET, proc_config(), moab::ProcConfig::proc_rank(), procConfig, moab::ProcList::procs, PSTATUS_INTERFACE, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, pstatus_tag(), moab::Range::rbegin(), moab::Interface::remove_entities(), moab::Range::rend(), set_sharing_data(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), moab::Interface::tag_clear_data(), and moab::Interface::tag_set_data().

Referenced by exchange_ghost_cells().

◆ check_global_ids()

ErrorCode moab::ParallelComm::check_global_ids ( EntityHandle  this_set,
const int  dimension,
const int  start_id = 1,
const bool  largest_dim_only = true,
const bool  parallel = true,
const bool  owned_only = false 
)

check for global ids; based only on tag handle being there or not; if it's not there, create them for the specified dimensions

Parameters
owned_only: If true, do not get global IDs for non-owned entities from remote processors.

Definition at line 5532 of file ParallelComm.cpp.

5538 {
5539  // Global id tag
5540  Tag gid_tag = mbImpl->globalId_tag();
5541  int def_val = -1;
5542  Range dum_range;
5543 
5544  void* tag_ptr = &def_val;
5545  ErrorCode result = mbImpl->get_entities_by_type_and_tag( this_set, MBVERTEX, &gid_tag, &tag_ptr, 1, dum_range );MB_CHK_SET_ERR( result, "Failed to get entities by MBVERTEX type and gid tag" );
5546 
5547  if( !dum_range.empty() )
5548  {
5549  // Just created it, so we need global ids
5550  result = assign_global_ids( this_set, dimension, start_id, largest_dim_only, parallel, owned_only );MB_CHK_SET_ERR( result, "Failed assigning global ids" );
5551  }
5552 
5553  return MB_SUCCESS;
5554 }

References assign_global_ids(), moab::Range::empty(), ErrorCode, moab::Interface::get_entities_by_type_and_tag(), moab::Interface::globalId_tag(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, and MBVERTEX.

Referenced by moab::ReadParallel::load_file().
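
A typical use mirrors what ReadParallel does after a parallel read. A sketch, where fileset stands for the set that was just populated:

    #include "moab/ParallelComm.hpp"
    using namespace moab;

    // Assign GLOBAL_ID data for 3D entities (and vertices) only if the tag
    // data is still at its default; the remaining defaults are start_id = 1,
    // largest_dim_only = true, parallel = true, owned_only = false.
    ErrorCode ensure_global_ids( ParallelComm& pcomm, EntityHandle fileset )
    {
        return pcomm.check_global_ids( fileset, 3 );
    }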

◆ check_local_shared()

ErrorCode moab::ParallelComm::check_local_shared ( )

Definition at line 8588 of file ParallelComm.cpp.

8589 {
8590  // Do some checks on shared entities to make sure things look
8591  // consistent
8592 
8593  // Check that non-vertex shared entities are shared by same procs as all
8594  // their vertices
8595  // std::pair<Range::const_iterator,Range::const_iterator> vert_it =
8596  // sharedEnts.equal_range(MBVERTEX);
8597  std::vector< EntityHandle > dum_connect;
8598  const EntityHandle* connect;
8599  int num_connect;
8600  int tmp_procs[MAX_SHARING_PROCS];
8601  EntityHandle tmp_hs[MAX_SHARING_PROCS];
8602  std::set< int > tmp_set, vset;
8603  int num_ps;
8604  ErrorCode result;
8605  unsigned char pstat;
8606  std::vector< EntityHandle > bad_ents;
8607  std::vector< std::string > errors;
8608 
8609  std::set< EntityHandle >::iterator vit;
8610  for( vit = sharedEnts.begin(); vit != sharedEnts.end(); ++vit )
8611  {
8612  // Get sharing procs for this ent
8613  result = get_sharing_data( *vit, tmp_procs, tmp_hs, pstat, num_ps );
8614  if( MB_SUCCESS != result )
8615  {
8616  bad_ents.push_back( *vit );
8617  errors.push_back( std::string( "Failure getting sharing data." ) );
8618  continue;
8619  }
8620 
8621  bool bad = false;
8622  // Entity must be shared
8623  if( !( pstat & PSTATUS_SHARED ) )
8624  errors.push_back( std::string( "Entity should be shared but isn't." ) ), bad = true;
8625 
8626  // If entity is not owned this must not be first proc
8627  if( pstat & PSTATUS_NOT_OWNED && tmp_procs[0] == (int)procConfig.proc_rank() )
8628  errors.push_back( std::string( "Entity not owned but is first proc." ) ), bad = true;
8629 
8630  // If entity is owned and multishared, this must be first proc
8631  if( !( pstat & PSTATUS_NOT_OWNED ) && pstat & PSTATUS_MULTISHARED &&
8632  ( tmp_procs[0] != (int)procConfig.proc_rank() || tmp_hs[0] != *vit ) )
8633  errors.push_back( std::string( "Entity owned and multishared but not first proc or not first handle." ) ),
8634  bad = true;
8635 
8636  if( bad )
8637  {
8638  bad_ents.push_back( *vit );
8639  continue;
8640  }
8641 
8642  EntityType type = mbImpl->type_from_handle( *vit );
8643  if( type == MBVERTEX || type == MBENTITYSET ) continue;
8644 
8645  // Copy element's procs to vset and save size
8646  int orig_ps = num_ps;
8647  vset.clear();
8648  std::copy( tmp_procs, tmp_procs + num_ps, std::inserter( vset, vset.begin() ) );
8649 
8650  // Get vertices for this ent and intersection of sharing procs
8651  result = mbImpl->get_connectivity( *vit, connect, num_connect, false, &dum_connect );
8652  if( MB_SUCCESS != result )
8653  {
8654  bad_ents.push_back( *vit );
8655  errors.push_back( std::string( "Failed to get connectivity." ) );
8656  continue;
8657  }
8658 
8659  for( int i = 0; i < num_connect; i++ )
8660  {
8661  result = get_sharing_data( connect[i], tmp_procs, NULL, pstat, num_ps );
8662  if( MB_SUCCESS != result )
8663  {
8664  bad_ents.push_back( *vit );
8665  continue;
8666  }
8667  if( !num_ps )
8668  {
8669  vset.clear();
8670  break;
8671  }
8672  std::sort( tmp_procs, tmp_procs + num_ps );
8673  tmp_set.clear();
8674  std::set_intersection( tmp_procs, tmp_procs + num_ps, vset.begin(), vset.end(),
8675  std::inserter( tmp_set, tmp_set.end() ) );
8676  vset.swap( tmp_set );
8677  if( vset.empty() ) break;
8678  }
8679 
8680  // Intersect them; should be the same size as orig_ps
8681  tmp_set.clear();
8682  std::set_intersection( tmp_procs, tmp_procs + num_ps, vset.begin(), vset.end(),
8683  std::inserter( tmp_set, tmp_set.end() ) );
8684  if( orig_ps != (int)tmp_set.size() )
8685  {
8686  errors.push_back( std::string( "Vertex proc set not same size as entity proc set." ) );
8687  bad_ents.push_back( *vit );
8688  for( int i = 0; i < num_connect; i++ )
8689  {
8690  bad_ents.push_back( connect[i] );
8691  errors.push_back( std::string( "vertex in connect" ) );
8692  }
8693  }
8694  }
8695 
8696  if( !bad_ents.empty() )
8697  {
8698  std::cout << "Found bad entities in check_local_shared, proc rank " << procConfig.proc_rank() << ","
8699  << std::endl;
8700  std::vector< std::string >::iterator sit;
8701  std::vector< EntityHandle >::iterator rit;
8702  for( rit = bad_ents.begin(), sit = errors.begin(); rit != bad_ents.end(); ++rit, ++sit )
8703  {
8704  list_entities( &( *rit ), 1 );
8705  std::cout << "Reason: " << *sit << std::endl;
8706  }
8707  return MB_FAILURE;
8708  }
8709 
8710  // To do: check interface sets
8711 
8712  return MB_SUCCESS;
8713 }

References ErrorCode, moab::Interface::get_connectivity(), get_sharing_data(), list_entities(), MAX_SHARING_PROCS, MB_SUCCESS, MBENTITYSET, mbImpl, MBVERTEX, moab::ProcConfig::proc_rank(), procConfig, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, sharedEnts, and moab::Interface::type_from_handle().

Referenced by check_all_shared_handles().

◆ check_my_shared_handles()

ErrorCode moab::ParallelComm::check_my_shared_handles ( std::vector< std::vector< SharedEntityData > > &  shents,
const char *  prefix = NULL 
)

Definition at line 8757 of file ParallelComm.cpp.

8759 {
8760  // Now check against what I think data should be
8761  // Get all shared entities
8762  ErrorCode result;
8763  Range all_shared;
8764  std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( all_shared ) );
8765  std::vector< EntityHandle > dum_vec;
8766  all_shared.erase( all_shared.upper_bound( MBPOLYHEDRON ), all_shared.end() );
8767 
8768  Range bad_ents, local_shared;
8769  std::vector< SharedEntityData >::iterator vit;
8770  unsigned char tmp_pstat;
8771  for( unsigned int i = 0; i < shents.size(); i++ )
8772  {
8773  int other_proc = buffProcs[i];
8774  result = get_shared_entities( other_proc, local_shared );
8775  if( MB_SUCCESS != result ) return result;
8776  for( vit = shents[i].begin(); vit != shents[i].end(); ++vit )
8777  {
8778  EntityHandle localh = vit->local, remoteh = vit->remote, dumh;
8779  local_shared.erase( localh );
8780  result = get_remote_handles( true, &localh, &dumh, 1, other_proc, dum_vec );
8781  if( MB_SUCCESS != result || dumh != remoteh ) bad_ents.insert( localh );
8782  result = get_pstatus( localh, tmp_pstat );
8783  if( MB_SUCCESS != result || ( !( tmp_pstat & PSTATUS_NOT_OWNED ) && (unsigned)vit->owner != rank() ) ||
8784  ( tmp_pstat & PSTATUS_NOT_OWNED && (unsigned)vit->owner == rank() ) )
8785  bad_ents.insert( localh );
8786  }
8787 
8788  if( !local_shared.empty() ) bad_ents.merge( local_shared );
8789  }
8790 
8791  if( !bad_ents.empty() )
8792  {
8793  if( prefix ) std::cout << prefix << std::endl;
8794  list_entities( bad_ents );
8795  return MB_FAILURE;
8796  }
8797  else
8798  return MB_SUCCESS;
8799 }

References buffProcs, moab::Range::empty(), moab::Range::end(), moab::Range::erase(), ErrorCode, get_pstatus(), get_remote_handles(), get_shared_entities(), moab::Range::insert(), list_entities(), MB_SUCCESS, MBPOLYHEDRON, moab::Range::merge(), PSTATUS_NOT_OWNED, rank(), sharedEnts, and moab::Range::upper_bound().

Referenced by check_all_shared_handles().

◆ check_sent_ents()

ErrorCode moab::ParallelComm::check_sent_ents ( Range allsent)
private

check entities to make sure there are no zero-valued remote handles where they shouldn't be

Definition at line 7323 of file ParallelComm.cpp.

7324 {
7325  // Check entities to make sure there are no zero-valued remote handles
7326  // where they shouldn't be
7327  std::vector< unsigned char > pstat( allsent.size() );
7328  ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), allsent, &pstat[0] );MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" );
7329  std::vector< EntityHandle > handles( allsent.size() );
7330  result = mbImpl->tag_get_data( sharedh_tag(), allsent, &handles[0] );MB_CHK_SET_ERR( result, "Failed to get sharedh tag data" );
7331  std::vector< int > procs( allsent.size() );
7332  result = mbImpl->tag_get_data( sharedp_tag(), allsent, &procs[0] );MB_CHK_SET_ERR( result, "Failed to get sharedp tag data" );
7333 
7334  Range bad_entities;
7335 
7336  Range::iterator rit;
7337  unsigned int i;
7338  EntityHandle dum_hs[MAX_SHARING_PROCS];
7339  int dum_ps[MAX_SHARING_PROCS];
7340 
7341  for( rit = allsent.begin(), i = 0; rit != allsent.end(); ++rit, i++ )
7342  {
7343  if( -1 != procs[i] && 0 == handles[i] )
7344  bad_entities.insert( *rit );
7345  else
7346  {
7347  // Might be multi-shared...
7348  result = mbImpl->tag_get_data( sharedps_tag(), &( *rit ), 1, dum_ps );
7349  if( MB_TAG_NOT_FOUND == result )
7350  continue;
7351  else if( MB_SUCCESS != result )
7352  MB_SET_ERR( result, "Failed to get sharedps tag data" );
7353  result = mbImpl->tag_get_data( sharedhs_tag(), &( *rit ), 1, dum_hs );MB_CHK_SET_ERR( result, "Failed to get sharedhs tag data" );
7354 
7355  // Find first non-set proc
7356  int* ns_proc = std::find( dum_ps, dum_ps + MAX_SHARING_PROCS, -1 );
7357  int num_procs = ns_proc - dum_ps;
7358  assert( num_procs <= MAX_SHARING_PROCS );
7359  // Now look for zero handles in active part of dum_hs
7360  EntityHandle* ns_handle = std::find( dum_hs, dum_hs + num_procs, 0 );
7361  int num_handles = ns_handle - dum_hs;
7362  assert( num_handles <= num_procs );
7363  if( num_handles != num_procs ) bad_entities.insert( *rit );
7364  }
7365  }
7366 
7367  return MB_SUCCESS;
7368 }

References moab::Range::begin(), moab::Range::end(), ErrorCode, moab::Range::insert(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, MB_TAG_NOT_FOUND, mbImpl, pstatus_tag(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), moab::Range::size(), and moab::Interface::tag_get_data().

Referenced by exchange_ghost_cells(), and exchange_owned_mesh().

◆ clean_shared_tags()

ErrorCode moab::ParallelComm::clean_shared_tags ( std::vector< Range * > &  exchange_ents)

Definition at line 8842 of file ParallelComm.cpp.

8843 {
8844  for( unsigned int i = 0; i < exchange_ents.size(); i++ )
8845  {
8846  Range* ents = exchange_ents[i];
8847  int num_ents = ents->size();
8848  Range::iterator it = ents->begin();
8849 
8850  for( int n = 0; n < num_ents; n++ )
8851  {
8852  int sharing_proc;
8853  ErrorCode result = mbImpl->tag_get_data( sharedp_tag(), &( *ents->begin() ), 1, &sharing_proc );
8854  if( result != MB_TAG_NOT_FOUND && sharing_proc == -1 )
8855  {
8856  result = mbImpl->tag_delete_data( sharedp_tag(), &( *it ), 1 );MB_CHK_SET_ERR( result, "Failed to delete sharedp tag data" );
8857  result = mbImpl->tag_delete_data( sharedh_tag(), &( *it ), 1 );MB_CHK_SET_ERR( result, "Failed to delete sharedh tag data" );
8858  result = mbImpl->tag_delete_data( pstatus_tag(), &( *it ), 1 );MB_CHK_SET_ERR( result, "Failed to delete pstatus tag data" );
8859  }
8860  ++it;
8861  }
8862  }
8863 
8864  return MB_SUCCESS;
8865 }

References moab::Range::begin(), ErrorCode, MB_CHK_SET_ERR, MB_SUCCESS, MB_TAG_NOT_FOUND, mbImpl, pstatus_tag(), sharedh_tag(), sharedp_tag(), moab::Range::size(), moab::Interface::tag_delete_data(), and moab::Interface::tag_get_data().

◆ collective_sync_partition()

ErrorCode moab::ParallelComm::collective_sync_partition ( )

Definition at line 8268 of file ParallelComm.cpp.

8269 {
8270  int count = partition_sets().size();
8271  globalPartCount = 0;
8272  int err = MPI_Allreduce( &count, &globalPartCount, 1, MPI_INT, MPI_SUM, proc_config().proc_comm() );
8273  return err ? MB_FAILURE : MB_SUCCESS;
8274 }

References globalPartCount, MB_SUCCESS, partition_sets(), proc_config(), and moab::Range::size().
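
Since the total is computed with MPI_Allreduce, the call is collective and every rank must make it. A sketch (it assumes the cached total is then read back through get_global_part_count()):

    // Collective: sum the local partition-set counts over the communicator
    ErrorCode rval = pcomm.collective_sync_partition();
    int nparts = 0;
    if( MB_SUCCESS == rval ) rval = pcomm.get_global_part_count( nparts );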

◆ comm()

◆ correct_thin_ghost_layers()

ErrorCode moab::ParallelComm::correct_thin_ghost_layers ( )

Definition at line 9346 of file ParallelComm.cpp.

9347 {
9348 
9349  // Get all shared ent data from other procs
9350  std::vector< std::vector< SharedEntityData > > shents( buffProcs.size() ), send_data( buffProcs.size() );
9351 
9352  // will work only on multi-shared tags sharedps_tag(), sharedhs_tag();
9353 
9354  /*
9355  * domain0 | domain1 | domain2 | domain3
9356  * vertices from domain 1 and 2 are visible from both 0 and 3, but
9357  * domain 0 might not have info about multi-sharing from domain 3
9358  * so we will force that domain 0 vertices owned by 1 and 2 have information
9359  * about the domain 3 sharing
9360  *
9361  * SharedEntityData will have :
9362  * struct SharedEntityData {
9363  EntityHandle local; // this is same meaning, for the proc we sent to, it is local
9364  EntityHandle remote; // this will be the far away handle that will need to be added
9365  EntityID owner; // this will be the remote proc
9366  };
9367  // so we need to add data like this:
9368  a multishared entity owned by proc x will have data like
9369  multishared procs: proc x, a, b, c
9370  multishared handles: h1, h2, h3, h4
9371  we will need to send data from proc x like this:
9372  to proc a we will send
9373  (h2, h3, b), (h2, h4, c)
9374  to proc b we will send
9375  (h3, h2, a), (h3, h4, c)
9376  to proc c we will send
9377  (h4, h2, a), (h4, h3, b)
9378  *
9379  */
9380 
9381  ErrorCode result = MB_SUCCESS;
9382  int ent_procs[MAX_SHARING_PROCS + 1];
9383  EntityHandle handles[MAX_SHARING_PROCS + 1];
9384  int num_sharing;
9385  SharedEntityData tmp;
9386 
9387  for( std::set< EntityHandle >::iterator i = sharedEnts.begin(); i != sharedEnts.end(); ++i )
9388  {
9389 
9390  unsigned char pstat;
9391  result = get_sharing_data( *i, ent_procs, handles, pstat, num_sharing );MB_CHK_SET_ERR( result, "can't get sharing data" );
9392  if( !( pstat & PSTATUS_MULTISHARED ) ||
9393  num_sharing <= 2 ) // if not multishared, skip, it should have no problems
9394  continue;
9395  // we should skip the ones that are not owned locally
9396  // the owned ones will have the most multi-shared info, because the info comes from other
9397  // remote processors
9398  if( pstat & PSTATUS_NOT_OWNED ) continue;
9399  for( int j = 1; j < num_sharing; j++ )
9400  {
9401  // we will send to proc
9402  int send_to_proc = ent_procs[j]; //
9403  tmp.local = handles[j];
9404  int ind = get_buffers( send_to_proc );
9405  assert( -1 != ind ); // THIS SHOULD NEVER HAPPEN
9406  for( int k = 1; k < num_sharing; k++ )
9407  {
9408  // do not send to self proc
9409  if( j == k ) continue;
9410  tmp.remote = handles[k]; // this will be the handle of entity on proc
9411  tmp.owner = ent_procs[k];
9412  send_data[ind].push_back( tmp );
9413  }
9414  }
9415  }
9416 
9417  result = exchange_all_shared_handles( send_data, shents );MB_CHK_ERR( result );
9418 
9419  // loop over all shents and add if vertex type, add if missing
9420  for( size_t i = 0; i < shents.size(); i++ )
9421  {
9422  std::vector< SharedEntityData >& shEnts = shents[i];
9423  for( size_t j = 0; j < shEnts.size(); j++ )
9424  {
9425  tmp = shEnts[j];
9426  // basically, check the shared data for tmp.local entity
9427  // it should have inside the tmp.owner and tmp.remote
9428  EntityHandle eh = tmp.local;
9429  unsigned char pstat;
9430  result = get_sharing_data( eh, ent_procs, handles, pstat, num_sharing );MB_CHK_SET_ERR( result, "can't get sharing data" );
9431  // see if the proc tmp.owner is in the list of ent_procs; if not, we have to increase
9432  // handles, and ent_procs; and set
9433 
9434  int proc_remote = tmp.owner; //
9435  if( std::find( ent_procs, ent_procs + num_sharing, proc_remote ) == ent_procs + num_sharing )
9436  {
9437  // so we did not find on proc
9438 #ifndef NDEBUG
9439  std::cout << "THIN GHOST: we did not find on proc " << rank() << " for shared ent " << eh
9440  << " the proc " << proc_remote << "\n";
9441 #endif
9442  // increase num_sharing, and set the multi-shared tags
9443  if( num_sharing >= MAX_SHARING_PROCS ) return MB_FAILURE;
9444  handles[num_sharing] = tmp.remote;
9445  handles[num_sharing + 1] = 0; // end of list
9446  ent_procs[num_sharing] = tmp.owner;
9447  ent_procs[num_sharing + 1] = -1; // this should be already set
9448  result = mbImpl->tag_set_data( sharedps_tag(), &eh, 1, ent_procs );MB_CHK_SET_ERR( result, "Failed to set sharedps tag data" );
9449  result = mbImpl->tag_set_data( sharedhs_tag(), &eh, 1, handles );MB_CHK_SET_ERR( result, "Failed to set sharedhs tag data" );
9450  if( 2 == num_sharing ) // it means the sharedp and sharedh tags were set with a
9451  // value non default
9452  {
9453  // so entity eh was simple shared before, we need to set those dense tags back
9454  // to default
9455  // values
9456  EntityHandle zero = 0;
9457  int no_proc = -1;
9458  result = mbImpl->tag_set_data( sharedp_tag(), &eh, 1, &no_proc );MB_CHK_SET_ERR( result, "Failed to set sharedp tag data" );
9459  result = mbImpl->tag_set_data( sharedh_tag(), &eh, 1, &zero );MB_CHK_SET_ERR( result, "Failed to set sharedh tag data" );
9460  // also, add multishared pstatus tag
9461  // also add multishared status to pstatus
9462  pstat = pstat | PSTATUS_MULTISHARED;
9463  result = mbImpl->tag_set_data( pstatus_tag(), &eh, 1, &pstat );MB_CHK_SET_ERR( result, "Failed to set pstatus tag data" );
9464  }
9465  }
9466  }
9467  }
9468  return MB_SUCCESS;
9469 }

References buffProcs, ErrorCode, exchange_all_shared_handles(), get_buffers(), get_sharing_data(), moab::ParallelComm::SharedEntityData::local, MAX_SHARING_PROCS, MB_CHK_ERR, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, moab::ParallelComm::SharedEntityData::owner, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, pstatus_tag(), rank(), moab::ParallelComm::SharedEntityData::remote, sharedEnts, sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), and moab::Interface::tag_set_data().

Referenced by moab::ReadParallel::load_file().
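
The pass is collective and is meant to run after ghosts have been exchanged. A sketch of the usual ordering (the dimensions are illustrative, for a 3D mesh in fileset):

    // Resolve sharing, exchange one vertex-bridged ghost layer, then repair
    // multi-shared information on subdomains that are only one layer thick
    ErrorCode rval = pcomm.resolve_shared_ents( fileset, 3, 2 );
    if( MB_SUCCESS == rval ) rval = pcomm.exchange_ghost_cells( 3, 0, 1, 0, true );
    if( MB_SUCCESS == rval ) rval = pcomm.correct_thin_ghost_layers();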

◆ create_iface_pc_links()

ErrorCode moab::ParallelComm::create_iface_pc_links ( )
private

Definition at line 5093 of file ParallelComm.cpp.

5094 {
5095  // Now that we've resolved the entities in the iface sets,
5096  // set parent/child links between the iface sets
5097 
5098  // First tag all entities in the iface sets
5099  Tag tmp_iface_tag;
5100  EntityHandle tmp_iface_set = 0;
5101  ErrorCode result = mbImpl->tag_get_handle( "__tmp_iface", 1, MB_TYPE_HANDLE, tmp_iface_tag,
5102  MB_TAG_DENSE | MB_TAG_CREAT, &tmp_iface_set );MB_CHK_SET_ERR( result, "Failed to create temporary interface set tag" );
5103 
5104  Range iface_ents;
5105  std::vector< EntityHandle > tag_vals;
5106  Range::iterator rit;
5107 
5108  for( rit = interfaceSets.begin(); rit != interfaceSets.end(); ++rit )
5109  {
5110  // tag entities with interface set
5111  iface_ents.clear();
5112  result = mbImpl->get_entities_by_handle( *rit, iface_ents );MB_CHK_SET_ERR( result, "Failed to get entities in interface set" );
5113 
5114  if( iface_ents.empty() ) continue;
5115 
5116  tag_vals.resize( iface_ents.size() );
5117  std::fill( tag_vals.begin(), tag_vals.end(), *rit );
5118  result = mbImpl->tag_set_data( tmp_iface_tag, iface_ents, &tag_vals[0] );MB_CHK_SET_ERR( result, "Failed to tag iface entities with interface set" );
5119  }
5120 
5121  // Now go back through interface sets and add parent/child links
5122  Range tmp_ents2;
5123  for( int d = 2; d >= 0; d-- )
5124  {
5125  for( rit = interfaceSets.begin(); rit != interfaceSets.end(); ++rit )
5126  {
5127  // Get entities on this interface
5128  iface_ents.clear();
5129  result = mbImpl->get_entities_by_handle( *rit, iface_ents, true );MB_CHK_SET_ERR( result, "Failed to get entities by handle" );
5130  if( iface_ents.empty() || mbImpl->dimension_from_handle( *iface_ents.rbegin() ) != d ) continue;
5131 
5132  // Get higher-dimensional entities and their interface sets
5133  result = mbImpl->get_adjacencies( &( *iface_ents.begin() ), 1, d + 1, false, tmp_ents2 );MB_CHK_SET_ERR( result, "Failed to get adjacencies for interface sets" );
5134  tag_vals.resize( tmp_ents2.size() );
5135  result = mbImpl->tag_get_data( tmp_iface_tag, tmp_ents2, &tag_vals[0] );MB_CHK_SET_ERR( result, "Failed to get tmp iface tag for interface sets" );
5136 
5137  // Go through and for any on interface make it a parent
5138  EntityHandle last_set = 0;
5139  for( unsigned int i = 0; i < tag_vals.size(); i++ )
5140  {
5141  if( tag_vals[i] && tag_vals[i] != last_set )
5142  {
5143  result = mbImpl->add_parent_child( tag_vals[i], *rit );MB_CHK_SET_ERR( result, "Failed to add parent/child link for interface set" );
5144  last_set = tag_vals[i];
5145  }
5146  }
5147  }
5148  }
5149 
5150  // Delete the temporary tag
5151  result = mbImpl->tag_delete( tmp_iface_tag );MB_CHK_SET_ERR( result, "Failed to delete tmp iface tag" );
5152 
5153  return MB_SUCCESS;
5154 }

References moab::Interface::add_parent_child(), moab::Range::begin(), moab::Range::clear(), moab::Interface::dimension_from_handle(), moab::Range::empty(), moab::Range::end(), ErrorCode, moab::Interface::get_adjacencies(), moab::Interface::get_entities_by_handle(), interfaceSets, MB_CHK_SET_ERR, MB_SUCCESS, MB_TAG_CREAT, MB_TAG_DENSE, MB_TYPE_HANDLE, mbImpl, moab::Range::rbegin(), moab::Range::size(), moab::Interface::tag_delete(), moab::Interface::tag_get_data(), moab::Interface::tag_get_handle(), and moab::Interface::tag_set_data().

Referenced by resolve_shared_ents(), and moab::ParallelMergeMesh::TagSharedElements().

◆ create_interface_sets() [1/2]

ErrorCode moab::ParallelComm::create_interface_sets ( EntityHandle  this_set,
int  resolve_dim,
int  shared_dim 
)

Definition at line 4981 of file ParallelComm.cpp.

4982 {
4983  std::map< std::vector< int >, std::vector< EntityHandle > > proc_nvecs;
4984 
4985  // Build up the list of shared entities
4986  int procs[MAX_SHARING_PROCS];
4987  EntityHandle handles[MAX_SHARING_PROCS];
4988  ErrorCode result;
4989  int nprocs;
4990  unsigned char pstat;
4991  for( std::set< EntityHandle >::iterator vit = sharedEnts.begin(); vit != sharedEnts.end(); ++vit )
4992  {
4993  if( shared_dim != -1 && mbImpl->dimension_from_handle( *vit ) > shared_dim ) continue;
4994  result = get_sharing_data( *vit, procs, handles, pstat, nprocs );MB_CHK_SET_ERR( result, "Failed to get sharing data" );
4995  std::sort( procs, procs + nprocs );
4996  std::vector< int > tmp_procs( procs, procs + nprocs );
4997  assert( tmp_procs.size() != 2 );
4998  proc_nvecs[tmp_procs].push_back( *vit );
4999  }
5000 
5001  Skinner skinner( mbImpl );
5002  Range skin_ents[4];
5003  result = mbImpl->get_entities_by_dimension( this_set, resolve_dim, skin_ents[resolve_dim] );MB_CHK_SET_ERR( result, "Failed to get skin entities by dimension" );
5004  result =
5005  skinner.find_skin( this_set, skin_ents[resolve_dim], false, skin_ents[resolve_dim - 1], 0, true, true, true );MB_CHK_SET_ERR( result, "Failed to find skin" );
5006  if( shared_dim > 1 )
5007  {
5008  result = mbImpl->get_adjacencies( skin_ents[resolve_dim - 1], resolve_dim - 2, true, skin_ents[resolve_dim - 2],
5009  Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get skin adjacencies" );
5010  }
5011 
5012  result = get_proc_nvecs( resolve_dim, shared_dim, skin_ents, proc_nvecs );
5013 
5014  return create_interface_sets( proc_nvecs );
5015 }

References create_interface_sets(), moab::Interface::dimension_from_handle(), ErrorCode, moab::Skinner::find_skin(), moab::Interface::get_adjacencies(), moab::Interface::get_entities_by_dimension(), get_proc_nvecs(), get_sharing_data(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, mbImpl, sharedEnts, and moab::Interface::UNION.

◆ create_interface_sets() [2/2]

ErrorCode moab::ParallelComm::create_interface_sets ( std::map< std::vector< int >, std::vector< EntityHandle > > &  proc_nvecs)

Definition at line 5017 of file ParallelComm.cpp.

5018 {
5019  if( proc_nvecs.empty() ) return MB_SUCCESS;
5020 
5021  int proc_ids[MAX_SHARING_PROCS];
5022  EntityHandle proc_handles[MAX_SHARING_PROCS];
5023  Tag shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag;
5024  ErrorCode result = get_shared_proc_tags( shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag );MB_CHK_SET_ERR( result, "Failed to get shared proc tags in create_interface_sets" );
5025  Range::iterator rit;
5026 
5027  // Create interface sets, tag them, and tag their contents with iface set tag
5028  std::vector< unsigned char > pstatus;
5029  for( std::map< std::vector< int >, std::vector< EntityHandle > >::iterator vit = proc_nvecs.begin();
5030  vit != proc_nvecs.end(); ++vit )
5031  {
5032  // Create the set
5033  EntityHandle new_set;
5034  result = mbImpl->create_meshset( MESHSET_SET, new_set );MB_CHK_SET_ERR( result, "Failed to create interface set" );
5035  interfaceSets.insert( new_set );
5036 
5037  // Add entities
5038  assert( !vit->second.empty() );
5039  result = mbImpl->add_entities( new_set, &( vit->second )[0], ( vit->second ).size() );MB_CHK_SET_ERR( result, "Failed to add entities to interface set" );
5040  // Tag set with the proc rank(s)
5041  if( vit->first.size() == 1 )
5042  {
5043  assert( ( vit->first )[0] != (int)procConfig.proc_rank() );
5044  result = mbImpl->tag_set_data( shp_tag, &new_set, 1, &( vit->first )[0] );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
5045  proc_handles[0] = 0;
5046  result = mbImpl->tag_set_data( shh_tag, &new_set, 1, proc_handles );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
5047  }
5048  else
5049  {
5050  // Pad tag data out to MAX_SHARING_PROCS with -1
5051  if( vit->first.size() > MAX_SHARING_PROCS )
5052  {
5053  std::cerr << "Exceeded MAX_SHARING_PROCS for " << CN::EntityTypeName( TYPE_FROM_HANDLE( new_set ) )
5054  << ' ' << ID_FROM_HANDLE( new_set ) << " on process " << proc_config().proc_rank()
5055  << std::endl;
5056  std::cerr.flush();
5057  MPI_Abort( proc_config().proc_comm(), 66 );
5058  }
5059  // assert(vit->first.size() <= MAX_SHARING_PROCS);
5060  std::copy( vit->first.begin(), vit->first.end(), proc_ids );
5061  std::fill( proc_ids + vit->first.size(), proc_ids + MAX_SHARING_PROCS, -1 );
5062  result = mbImpl->tag_set_data( shps_tag, &new_set, 1, proc_ids );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
5063  unsigned int ind = std::find( proc_ids, proc_ids + vit->first.size(), procConfig.proc_rank() ) - proc_ids;
5064  assert( ind < vit->first.size() );
5065  std::fill( proc_handles, proc_handles + MAX_SHARING_PROCS, 0 );
5066  proc_handles[ind] = new_set;
5067  result = mbImpl->tag_set_data( shhs_tag, &new_set, 1, proc_handles );MB_CHK_SET_ERR( result, "Failed to tag interface set with procs" );
5068  }
5069 
5070  // Get the owning proc, then set the pstatus tag on iface set
5071  int min_proc = ( vit->first )[0];
5072  unsigned char pval = ( PSTATUS_SHARED | PSTATUS_INTERFACE );
5073  if( min_proc < (int)procConfig.proc_rank() ) pval |= PSTATUS_NOT_OWNED;
5074  if( vit->first.size() > 1 ) pval |= PSTATUS_MULTISHARED;
5075  result = mbImpl->tag_set_data( pstat_tag, &new_set, 1, &pval );MB_CHK_SET_ERR( result, "Failed to tag interface set with pstatus" );
5076 
5077  // Tag the vertices with the same thing
5078  pstatus.clear();
5079  std::vector< EntityHandle > verts;
5080  for( std::vector< EntityHandle >::iterator v2it = ( vit->second ).begin(); v2it != ( vit->second ).end();
5081  ++v2it )
5082  if( mbImpl->type_from_handle( *v2it ) == MBVERTEX ) verts.push_back( *v2it );
5083  pstatus.resize( verts.size(), pval );
5084  if( !verts.empty() )
5085  {
5086  result = mbImpl->tag_set_data( pstat_tag, &verts[0], verts.size(), &pstatus[0] );MB_CHK_SET_ERR( result, "Failed to tag interface set vertices with pstatus" );
5087  }
5088  }
5089 
5090  return MB_SUCCESS;
5091 }

References moab::Interface::add_entities(), moab::Interface::create_meshset(), moab::CN::EntityTypeName(), ErrorCode, moab::GeomUtil::first(), get_shared_proc_tags(), moab::ID_FROM_HANDLE(), moab::Range::insert(), interfaceSets, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, MBVERTEX, MESHSET_SET, proc_config(), moab::ProcConfig::proc_rank(), procConfig, PSTATUS_INTERFACE, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, moab::Interface::tag_set_data(), moab::Interface::type_from_handle(), and moab::TYPE_FROM_HANDLE().

Referenced by create_interface_sets(), exchange_owned_meshs(), resolve_shared_ents(), moab::ScdInterface::tag_shared_vertices(), and moab::ParallelMergeMesh::TagSharedElements().
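
A sketch of the set-based overload, assuming sharing data has already been resolved for the entities of this_set:

    // Build interface sets for a 3D mesh (resolve_dim = 3), grouping shared
    // entities of dimension 2 and below (faces, edges, vertices)
    ErrorCode rval = pcomm.create_interface_sets( fileset, 3, 2 );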

◆ create_part()

ErrorCode moab::ParallelComm::create_part ( EntityHandle part_out)

Definition at line 8210 of file ParallelComm.cpp.

8211 {
8212  // Mark as invalid so we know that it needs to be updated
8213  globalPartCount = -1;
8214 
8215  // Create set representing part
8216  ErrorCode rval = mbImpl->create_meshset( MESHSET_SET, set_out );
8217  if( MB_SUCCESS != rval ) return rval;
8218 
8219  // Set tag on set
8220  int val = proc_config().proc_rank();
8221  rval = mbImpl->tag_set_data( part_tag(), &set_out, 1, &val );
8222 
8223  if( MB_SUCCESS != rval )
8224  {
8225  mbImpl->delete_entities( &set_out, 1 );
8226  return rval;
8227  }
8228 
8229  if( get_partitioning() )
8230  {
8231  rval = mbImpl->add_entities( get_partitioning(), &set_out, 1 );
8232  if( MB_SUCCESS != rval )
8233  {
8234  mbImpl->delete_entities( &set_out, 1 );
8235  return rval;
8236  }
8237  }
8238 
8239  moab::Range& pSets = this->partition_sets();
8240  if( pSets.index( set_out ) < 0 )
8241  {
8242  pSets.insert( set_out );
8243  }
8244 
8245  return MB_SUCCESS;
8246 }

References moab::Interface::add_entities(), moab::Interface::create_meshset(), moab::Interface::delete_entities(), ErrorCode, get_partitioning(), globalPartCount, moab::Range::index(), moab::Range::insert(), MB_SUCCESS, mbImpl, MESHSET_SET, part_tag(), partition_sets(), proc_config(), moab::ProcConfig::proc_rank(), and moab::Interface::tag_set_data().

◆ define_mpe()

void moab::ParallelComm::define_mpe ( )
private

Definition at line 4228 of file ParallelComm.cpp.

4229 {
4230 #ifdef MOAB_HAVE_MPE
4231  if( myDebug->get_verbosity() == 2 )
4232  {
4233  // Define mpe states used for logging
4234  int success;
4235  MPE_Log_get_state_eventIDs( &IFACE_START, &IFACE_END );
4236  MPE_Log_get_state_eventIDs( &GHOST_START, &GHOST_END );
4237  MPE_Log_get_state_eventIDs( &SHAREDV_START, &SHAREDV_END );
4238  MPE_Log_get_state_eventIDs( &RESOLVE_START, &RESOLVE_END );
4239  MPE_Log_get_state_eventIDs( &ENTITIES_START, &ENTITIES_END );
4240  MPE_Log_get_state_eventIDs( &RHANDLES_START, &RHANDLES_END );
4241  MPE_Log_get_state_eventIDs( &OWNED_START, &OWNED_END );
4242  success = MPE_Describe_state( IFACE_START, IFACE_END, "Resolve interface ents", "green" );
4243  assert( MPE_LOG_OK == success );
4244  success = MPE_Describe_state( GHOST_START, GHOST_END, "Exchange ghost ents", "red" );
4245  assert( MPE_LOG_OK == success );
4246  success = MPE_Describe_state( SHAREDV_START, SHAREDV_END, "Resolve interface vertices", "blue" );
4247  assert( MPE_LOG_OK == success );
4248  success = MPE_Describe_state( RESOLVE_START, RESOLVE_END, "Resolve shared ents", "purple" );
4249  assert( MPE_LOG_OK == success );
4250  success = MPE_Describe_state( ENTITIES_START, ENTITIES_END, "Exchange shared ents", "yellow" );
4251  assert( MPE_LOG_OK == success );
4252  success = MPE_Describe_state( RHANDLES_START, RHANDLES_END, "Remote handles", "cyan" );
4253  assert( MPE_LOG_OK == success );
4254  success = MPE_Describe_state( OWNED_START, OWNED_END, "Exchange owned ents", "black" );
4255  assert( MPE_LOG_OK == success );
4256  }
4257 #endif
4258 }

References moab::DebugOutput::get_verbosity(), MPE_Describe_state, MPE_LOG_OK, and myDebug.

Referenced by resolve_shared_ents().

◆ delete_all_buffers()

void moab::ParallelComm::delete_all_buffers ( )
inlineprivate

reset message buffers to their initial state

delete all buffers, freeing up any memory held by them

Definition at line 1557 of file ParallelComm.hpp.

1558 {
1559  std::vector< Buffer* >::iterator vit;
1560  for( vit = localOwnedBuffs.begin(); vit != localOwnedBuffs.end(); ++vit )
1561  delete( *vit );
1562  localOwnedBuffs.clear();
1563 
1564  for( vit = remoteOwnedBuffs.begin(); vit != remoteOwnedBuffs.end(); ++vit )
1565  delete( *vit );
1566  remoteOwnedBuffs.clear();
1567 }

References localOwnedBuffs, and remoteOwnedBuffs.

Referenced by ~ParallelComm().

◆ delete_entities()

ErrorCode moab::ParallelComm::delete_entities ( Range to_delete)

Definition at line 9258 of file ParallelComm.cpp.

9259 {
9260  // Will not look at shared sets yet, but maybe we should
9261  // First, see if any of the entities to delete is shared; then inform the other processors
9262  // about their fate (to be deleted), using a crystal router transfer
9263  ErrorCode rval = MB_SUCCESS;
9264  unsigned char pstat;
9265  EntityHandle tmp_handles[MAX_SHARING_PROCS];
9266  int tmp_procs[MAX_SHARING_PROCS];
9267  unsigned int num_ps;
9268  TupleList ents_to_delete;
9269  ents_to_delete.initialize( 1, 0, 1, 0, to_delete.size() * ( MAX_SHARING_PROCS + 1 ) ); // A little bit of overkill
9270  ents_to_delete.enableWriteAccess();
9271  unsigned int i = 0;
9272  for( Range::iterator it = to_delete.begin(); it != to_delete.end(); ++it )
9273  {
9274  EntityHandle eh = *it; // Entity to be deleted
9275 
9276  rval = get_sharing_data( eh, tmp_procs, tmp_handles, pstat, num_ps );
9277  if( rval != MB_SUCCESS || num_ps == 0 ) continue;
9278  // Add to the tuple list the information to be sent (to the remote procs)
9279  for( unsigned int p = 0; p < num_ps; p++ )
9280  {
9281  ents_to_delete.vi_wr[i] = tmp_procs[p];
9282  ents_to_delete.vul_wr[i] = (unsigned long)tmp_handles[p];
9283  i++;
9284  ents_to_delete.inc_n();
9285  }
9286  }
9287 
9288  gs_data::crystal_data* cd = this->procConfig.crystal_router();
9289  // All communication happens here; no other mpi calls
9290  // Also, this is a collective call
9291  rval = cd->gs_transfer( 1, ents_to_delete, 0 );MB_CHK_SET_ERR( rval, "Error in tuple transfer" );
9292 
9293  // Add to the range of ents to delete the new ones that were sent from other procs
9294  unsigned int received = ents_to_delete.get_n();
9295  for( i = 0; i < received; i++ )
9296  {
9297  // int from = ents_to_delete.vi_rd[i];
9298  unsigned long valrec = ents_to_delete.vul_rd[i];
9299  to_delete.insert( (EntityHandle)valrec );
9300  }
9301  rval = mbImpl->delete_entities( to_delete );MB_CHK_SET_ERR( rval, "Error in deleting actual entities" );
9302 
9303  std::set< EntityHandle > good_ents;
9304  for( std::set< EntityHandle >::iterator sst = sharedEnts.begin(); sst != sharedEnts.end(); sst++ )
9305  {
9306  EntityHandle eh = *sst;
9307  int index = to_delete.index( eh );
9308  if( -1 == index ) good_ents.insert( eh );
9309  }
9310  sharedEnts = good_ents;
9311 
9312  // What about shared sets? Who is updating them?
9313  return MB_SUCCESS;
9314 }

References moab::Range::begin(), moab::ProcConfig::crystal_router(), moab::Interface::delete_entities(), moab::TupleList::enableWriteAccess(), moab::Range::end(), ErrorCode, moab::TupleList::get_n(), get_sharing_data(), moab::TupleList::inc_n(), moab::Range::index(), moab::TupleList::initialize(), moab::Range::insert(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, procConfig, sharedEnts, moab::Range::size(), moab::TupleList::vi_wr, moab::TupleList::vul_rd, and moab::TupleList::vul_wr.

Referenced by moab::NCHelperScrip::create_mesh().
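
Deletion is collective: the fate of shared copies travels through the crystal router, so each rank must participate, passing its own (possibly empty) range. A sketch:

    #include "moab/Core.hpp"
    #include "moab/ParallelComm.hpp"
    using namespace moab;

    // Delete the local 3D elements everywhere they are shared or ghosted
    ErrorCode delete_local_cells( Core& mb, ParallelComm& pcomm )
    {
        Range elems;
        ErrorCode rval = mb.get_entities_by_dimension( 0, 3, elems );
        if( MB_SUCCESS != rval ) return rval;
        return pcomm.delete_entities( elems );  // collective call
    }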

◆ destroy_part()

ErrorCode moab::ParallelComm::destroy_part ( EntityHandle  part)

Definition at line 8248 of file ParallelComm.cpp.

8249 {
8250  // Mark as invalid so we know that it needs to be updated
8251  globalPartCount = -1;
8252 
8253  ErrorCode rval;
8254  if( get_partitioning() )
8255  {
8256  rval = mbImpl->remove_entities( get_partitioning(), &part_id, 1 );
8257  if( MB_SUCCESS != rval ) return rval;
8258  }
8259 
8260  moab::Range& pSets = this->partition_sets();
8261  if( pSets.index( part_id ) >= 0 )
8262  {
8263  pSets.erase( part_id );
8264  }
8265  return mbImpl->delete_entities( &part_id, 1 );
8266 }

References moab::Interface::delete_entities(), moab::Range::erase(), ErrorCode, get_partitioning(), globalPartCount, moab::Range::index(), MB_SUCCESS, mbImpl, partition_sets(), and moab::Interface::remove_entities().
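
create_part() and destroy_part() are commonly paired; both invalidate the cached global part count until the next collective_sync_partition(). A sketch:

    // Create an empty part set tagged with this rank, then tear it down again
    EntityHandle part = 0;
    ErrorCode rval = pcomm.create_part( part );
    if( MB_SUCCESS == rval ) rval = pcomm.destroy_part( part );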

◆ estimate_ents_buffer_size()

int moab::ParallelComm::estimate_ents_buffer_size ( Range entities,
const bool  store_remote_handles 
)
private

estimate size required to pack entities

Definition at line 1503 of file ParallelComm.cpp.

1504 {
1505  int buff_size = 0;
1506  std::vector< EntityHandle > dum_connect_vec;
1507  const EntityHandle* connect;
1508  int num_connect;
1509 
1510  int num_verts = entities.num_of_type( MBVERTEX );
1511  // # verts + coords + handles
1512  buff_size += 2 * sizeof( int ) + 3 * sizeof( double ) * num_verts;
1513  if( store_remote_handles ) buff_size += sizeof( EntityHandle ) * num_verts;
1514 
1515  // Do a rough count by looking at first entity of each type
1516  for( EntityType t = MBEDGE; t < MBENTITYSET; t++ )
1517  {
1518  const Range::iterator rit = entities.lower_bound( t );
1519  if( TYPE_FROM_HANDLE( *rit ) != t ) continue;
1520 
1521  ErrorCode result = mbImpl->get_connectivity( *rit, connect, num_connect, false, &dum_connect_vec );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get connectivity to estimate buffer size", -1 );
1522 
1523  // Number, type, nodes per entity
1524  buff_size += 3 * sizeof( int );
1525  int num_ents = entities.num_of_type( t );
1526  // Connectivity, handle for each ent
1527  buff_size += ( num_connect + 1 ) * sizeof( EntityHandle ) * num_ents;
1528  }
1529 
1530  // Extra entity type at end, passed as int
1531  buff_size += sizeof( int );
1532 
1533  return buff_size;
1534 }

References entities, ErrorCode, moab::Interface::get_connectivity(), MB_CHK_SET_ERR_RET_VAL, MBEDGE, MBENTITYSET, mbImpl, MBVERTEX, t, and moab::TYPE_FROM_HANDLE().

Referenced by pack_entities().

◆ estimate_sets_buffer_size()

int moab::ParallelComm::estimate_sets_buffer_size ( Range entities,
const bool  store_remote_handles 
)
private

estimate size required to pack sets

Definition at line 1536 of file ParallelComm.cpp.

1537 {
1538  // Number of sets
1539  int buff_size = sizeof( int );
1540 
1541  // Do a rough count by looking at first entity of each type
1542  Range::iterator rit = entities.lower_bound( MBENTITYSET );
1543  ErrorCode result;
1544 
1545  for( ; rit != entities.end(); ++rit )
1546  {
1547  unsigned int options;
1548  result = mbImpl->get_meshset_options( *rit, options );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get meshset options", -1 );
1549 
1550  buff_size += sizeof( int );
1551 
1552  Range set_range;
1553  if( options & MESHSET_SET )
1554  {
1555  // Range-based set; count the subranges
1556  result = mbImpl->get_entities_by_handle( *rit, set_range );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get set entities", -1 );
1557 
1558  // Set range
1559  buff_size += RANGE_SIZE( set_range );
1560  }
1561  else if( options & MESHSET_ORDERED )
1562  {
1563  // Just get the number of entities in the set
1564  int num_ents;
1565  result = mbImpl->get_number_entities_by_handle( *rit, num_ents );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get number entities in ordered set", -1 );
1566 
1567  // Set vec
1568  buff_size += sizeof( EntityHandle ) * num_ents + sizeof( int );
1569  }
1570 
1571  // Get numbers of parents/children
1572  int num_par, num_ch;
1573  result = mbImpl->num_child_meshsets( *rit, &num_ch );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get num children", -1 );
1574  result = mbImpl->num_parent_meshsets( *rit, &num_par );MB_CHK_SET_ERR_RET_VAL( result, "Failed to get num parents", -1 );
1575 
1576  buff_size += ( num_ch + num_par ) * sizeof( EntityHandle ) + 2 * sizeof( int );
1577  }
1578 
1579  return buff_size;
1580 }

References entities, ErrorCode, moab::Interface::get_entities_by_handle(), moab::Interface::get_meshset_options(), moab::Interface::get_number_entities_by_handle(), MB_CHK_SET_ERR_RET_VAL, MBENTITYSET, mbImpl, MESHSET_SET, moab::Interface::num_child_meshsets(), moab::Interface::num_parent_meshsets(), and moab::RANGE_SIZE().

Referenced by pack_sets().

◆ exchange_all_shared_handles()

ErrorCode moab::ParallelComm::exchange_all_shared_handles ( std::vector< std::vector< SharedEntityData > > &  send_data,
std::vector< std::vector< SharedEntityData > > &  result 
)
private

Every processor sends shared entity handle data to every other processor that it shares entities with. The map passed back contains all received data, indexed by processor ID. This function is intended to be used for debugging.

Definition at line 8474 of file ParallelComm.cpp.

8476 {
8477  int ierr;
8478  const int tag = 0;
8479  const MPI_Comm cm = procConfig.proc_comm();
8480  const int num_proc = buffProcs.size();
8481  const std::vector< int > procs( buffProcs.begin(), buffProcs.end() );
8482  std::vector< MPI_Request > recv_req( buffProcs.size(), MPI_REQUEST_NULL );
8483  std::vector< MPI_Request > send_req( buffProcs.size(), MPI_REQUEST_NULL );
8484 
8485  // Set up to receive sizes
8486  std::vector< int > sizes_send( num_proc ), sizes_recv( num_proc );
8487  for( int i = 0; i < num_proc; i++ )
8488  {
8489  ierr = MPI_Irecv( &sizes_recv[i], 1, MPI_INT, procs[i], tag, cm, &recv_req[i] );
8490  if( ierr ) return MB_FILE_WRITE_ERROR;
8491  }
8492 
8493  // Send sizes
8494  assert( num_proc == (int)send_data.size() );
8495 
8496  result.resize( num_proc );
8497  for( int i = 0; i < num_proc; i++ )
8498  {
8499  sizes_send[i] = send_data[i].size();
8500  ierr = MPI_Isend( &sizes_send[i], 1, MPI_INT, buffProcs[i], tag, cm, &send_req[i] );
8501  if( ierr ) return MB_FILE_WRITE_ERROR;
8502  }
8503 
8504  // Receive sizes
8505  std::vector< MPI_Status > stat( num_proc );
8506  ierr = MPI_Waitall( num_proc, &recv_req[0], &stat[0] );
8507  if( ierr ) return MB_FILE_WRITE_ERROR;
8508 
8509  // Wait until all sizes are sent (clean up pending req's)
8510  ierr = MPI_Waitall( num_proc, &send_req[0], &stat[0] );
8511  if( ierr ) return MB_FILE_WRITE_ERROR;
8512 
8513  // Set up to receive data
8514  for( int i = 0; i < num_proc; i++ )
8515  {
8516  result[i].resize( sizes_recv[i] );
8517  ierr = MPI_Irecv( (void*)( &( result[i][0] ) ), sizeof( SharedEntityData ) * sizes_recv[i], MPI_UNSIGNED_CHAR,
8518  buffProcs[i], tag, cm, &recv_req[i] );
8519  if( ierr ) return MB_FILE_WRITE_ERROR;
8520  }
8521 
8522  // Send data
8523  for( int i = 0; i < num_proc; i++ )
8524  {
8525  ierr = MPI_Isend( (void*)( &( send_data[i][0] ) ), sizeof( SharedEntityData ) * sizes_send[i],
8526  MPI_UNSIGNED_CHAR, buffProcs[i], tag, cm, &send_req[i] );
8527  if( ierr ) return MB_FILE_WRITE_ERROR;
8528  }
8529 
8530  // Receive data
8531  ierr = MPI_Waitall( num_proc, &recv_req[0], &stat[0] );
8532  if( ierr ) return MB_FILE_WRITE_ERROR;
8533 
8534  // Wait until everything is sent to release send buffers
8535  ierr = MPI_Waitall( num_proc, &send_req[0], &stat[0] );
8536  if( ierr ) return MB_FILE_WRITE_ERROR;
8537 
8538  return MB_SUCCESS;
8539 }

References buffProcs, MB_FILE_WRITE_ERROR, MB_SUCCESS, moab::ProcConfig::proc_comm(), and procConfig.

Referenced by check_all_shared_handles(), and correct_thin_ghost_layers().

◆ exchange_ghost_cells() [1/2]

ErrorCode moab::ParallelComm::exchange_ghost_cells ( int  ghost_dim,
int  bridge_dim,
int  num_layers,
int  addl_ents,
bool  store_remote_handles,
bool  wait_all = true,
EntityHandle file_set = NULL 
)

Exchange ghost cells with neighboring procs. Neighboring processors are those sharing an interface with this processor. All entities of dimension ghost_dim within num_layers of the interface, measured going through bridge_dim, are exchanged. See MeshTopoUtil::get_bridge_adjacencies for a description of bridge adjacencies. If wait_all is false and store_remote_handles is true, MPI_Request objects are available in the sendReqs[2*MAX_SHARING_PROCS] member array, with inactive requests marked as MPI_REQUEST_NULL. If store_remote_handles or wait_all is false, this function returns after all entities have been received and processed.

Parameters
ghost_dim: Dimension of ghost entities to be exchanged
bridge_dim: Dimension of entities used to measure layers from interface
num_layers: Number of layers of ghosts requested
addl_ents: Dimension of additional adjacent entities to exchange with ghosts, 0 if none
store_remote_handles: If true, send message with new entity handles to source processor
wait_all: If true, function does not return until all send buffers are cleared.
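
A minimal usage sketch, assuming mb is an initialized moab::Interface* whose mesh was loaded in parallel and already had shared entities resolved, and that a ParallelComm instance was previously created at index 0:

#include "moab/ParallelComm.hpp"

moab::ParallelComm* pcomm = moab::ParallelComm::get_pcomm( mb, 0 );
// One layer of 3D ghost elements, bridging through shared vertices,
// no additional adjacent entities, storing remote handles:
moab::ErrorCode rval = pcomm->exchange_ghost_cells( 3 /*ghost_dim*/, 0 /*bridge_dim*/,
                                                    1 /*num_layers*/, 0 /*addl_ents*/,
                                                    true /*store_remote_handles*/ );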

Definition at line 5687 of file ParallelComm.cpp.

5694 {
5695 #ifdef MOAB_HAVE_MPE
5696  if( myDebug->get_verbosity() == 2 )
5697  {
5698  if( !num_layers )
5699  MPE_Log_event( IFACE_START, procConfig.proc_rank(), "Starting interface exchange." );
5700  else
5701  MPE_Log_event( GHOST_START, procConfig.proc_rank(), "Starting ghost exchange." );
5702  }
5703 #endif
5704 
5705  myDebug->tprintf( 1, "Entering exchange_ghost_cells with num_layers = %d\n", num_layers );
5706  if( myDebug->get_verbosity() == 4 )
5707  {
5708  msgs.clear();
5709  msgs.reserve( MAX_SHARING_PROCS );
5710  }
5711 
5712  // If we're only finding out about existing ents, we have to be storing
5713  // remote handles too
5714  assert( num_layers > 0 || store_remote_handles );
5715 
5716  const bool is_iface = !num_layers;
5717 
5718  // Get the b-dimensional interface(s) with with_proc, where b = bridge_dim
5719 
5720  int success;
5721  ErrorCode result = MB_SUCCESS;
5722  int incoming1 = 0, incoming2 = 0;
5723 
5723 
5724  reset_all_buffers();
5725 
5726  // When this function is called, buffProcs should already have any
5727  // communicating procs
5728 
5729  //===========================================
5730  // Post ghost irecv's for ghost entities from all communicating procs
5731  //===========================================
5732 #ifdef MOAB_HAVE_MPE
5733  if( myDebug->get_verbosity() == 2 )
5734  {
5735  MPE_Log_event( ENTITIES_START, procConfig.proc_rank(), "Starting entity exchange." );
5736  }
5737 #endif
5738 
5739  // Index reqs the same as buffer/sharing procs indices
5740  std::vector< MPI_Request > recv_ent_reqs( 3 * buffProcs.size(), MPI_REQUEST_NULL ),
5741  recv_remoteh_reqs( 3 * buffProcs.size(), MPI_REQUEST_NULL );
5742  std::vector< unsigned int >::iterator proc_it;
5743  int ind, p;
5744  sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
5745  for( ind = 0, proc_it = buffProcs.begin(); proc_it != buffProcs.end(); ++proc_it, ind++ )
5746  {
5747  incoming1++;
5748  PRINT_DEBUG_IRECV( procConfig.proc_rank(), buffProcs[ind], remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE,
5749  MB_MESG_ENTS_SIZE, incoming1 );
5750  success = MPI_Irecv( remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, buffProcs[ind],
5751  MB_MESG_ENTS_SIZE, procConfig.proc_comm(), &recv_ent_reqs[3 * ind] );
5752  if( success != MPI_SUCCESS )
5753  {
5754  MB_SET_ERR( MB_FAILURE, "Failed to post irecv in ghost exchange" );
5755  }
5756  }
5757 
5758  //===========================================
5759  // Get entities to be sent to neighbors
5760  //===========================================
5761  Range sent_ents[MAX_SHARING_PROCS], allsent, tmp_range;
5762  TupleList entprocs;
5763  int dum_ack_buff;
5764  result = get_sent_ents( is_iface, bridge_dim, ghost_dim, num_layers, addl_ents, sent_ents, allsent, entprocs );MB_CHK_SET_ERR( result, "get_sent_ents failed" );
5765 
5766  // augment file set with the entities to be sent
5767  // we might have created new entities if addl_ents>0, edges and/or faces
5768  if( addl_ents > 0 && file_set && !allsent.empty() )
5769  {
5770  result = mbImpl->add_entities( *file_set, allsent );MB_CHK_SET_ERR( result, "Failed to add new sub-entities to set" );
5771  }
5772  myDebug->tprintf( 1, "allsent ents compactness (size) = %f (%lu)\n", allsent.compactness(),
5773  (unsigned long)allsent.size() );
5774 
5775  //===========================================
5776  // Pack and send ents from this proc to others
5777  //===========================================
5778  for( p = 0, proc_it = buffProcs.begin(); proc_it != buffProcs.end(); ++proc_it, p++ )
5779  {
5780  myDebug->tprintf( 1, "Sent ents compactness (size) = %f (%lu)\n", sent_ents[p].compactness(),
5781  (unsigned long)sent_ents[p].size() );
5782 
5783  // Reserve space on front for size and for initial buff size
5784  localOwnedBuffs[p]->reset_buffer( sizeof( int ) );
5785 
5786  // Entities
5787  result = pack_entities( sent_ents[p], localOwnedBuffs[p], store_remote_handles, buffProcs[p], is_iface,
5788  &entprocs, &allsent );MB_CHK_SET_ERR( result, "Packing entities failed" );
5789 
5790  if( myDebug->get_verbosity() == 4 )
5791  {
5792  msgs.resize( msgs.size() + 1 );
5793  msgs.back() = new Buffer( *localOwnedBuffs[p] );
5794  }
5795 
5796  // Send the buffer (size stored in front in send_buffer)
5797  result = send_buffer( *proc_it, localOwnedBuffs[p], MB_MESG_ENTS_SIZE, sendReqs[3 * p],
5798  recv_ent_reqs[3 * p + 2], &dum_ack_buff, incoming1, MB_MESG_REMOTEH_SIZE,
5799  ( !is_iface && store_remote_handles ? // this used for ghosting only
5800  localOwnedBuffs[p]
5801  : NULL ),
5802  &recv_remoteh_reqs[3 * p], &incoming2 );MB_CHK_SET_ERR( result, "Failed to Isend in ghost exchange" );
5803  }
5804 
5805  entprocs.reset();
5806 
5807  //===========================================
5808  // Receive/unpack new entities
5809  //===========================================
5810  // Number of incoming messages for ghosts is the number of procs we
5811  // communicate with; for iface, it's the number of those with lower rank
5812  MPI_Status status;
5813  std::vector< std::vector< EntityHandle > > recd_ents( buffProcs.size() );
5814  std::vector< std::vector< EntityHandle > > L1hloc( buffProcs.size() ), L1hrem( buffProcs.size() );
5815  std::vector< std::vector< int > > L1p( buffProcs.size() );
5816  std::vector< EntityHandle > L2hloc, L2hrem;
5817  std::vector< unsigned int > L2p;
5818  std::vector< EntityHandle > new_ents;
5819 
5820  while( incoming1 )
5821  {
5822  // Wait for all recvs of ghost ents before proceeding to sending remote handles,
5823  // b/c some procs may have sent to a 3rd proc ents owned by me;
5824  PRINT_DEBUG_WAITANY( recv_ent_reqs, MB_MESG_ENTS_SIZE, procConfig.proc_rank() );
5825 
5826  success = MPI_Waitany( 3 * buffProcs.size(), &recv_ent_reqs[0], &ind, &status );
5827  if( MPI_SUCCESS != success )
5828  {
5829  MB_SET_ERR( MB_FAILURE, "Failed in waitany in ghost exchange" );
5830  }
5831 
5832  PRINT_DEBUG_RECD( status );
5833 
5834  // OK, received something; decrement incoming counter
5835  incoming1--;
5836  bool done = false;
5837 
5838  // In case ind is for ack, we need index of one before it
5839  unsigned int base_ind = 3 * ( ind / 3 );
5840  result = recv_buffer( MB_MESG_ENTS_SIZE, status, remoteOwnedBuffs[ind / 3], recv_ent_reqs[base_ind + 1],
5841  recv_ent_reqs[base_ind + 2], incoming1, localOwnedBuffs[ind / 3], sendReqs[base_ind + 1],
5842  sendReqs[base_ind + 2], done,
5843  ( !is_iface && store_remote_handles ? localOwnedBuffs[ind / 3] : NULL ),
5844  MB_MESG_REMOTEH_SIZE, // maybe base_ind+1?
5845  &recv_remoteh_reqs[base_ind + 1], &incoming2 );MB_CHK_SET_ERR( result, "Failed to receive buffer" );
5846 
5847  if( done )
5848  {
5849  if( myDebug->get_verbosity() == 4 )
5850  {
5851  msgs.resize( msgs.size() + 1 );
5852  msgs.back() = new Buffer( *remoteOwnedBuffs[ind / 3] );
5853  }
5854 
5855  // Message completely received - process buffer that was sent
5856  remoteOwnedBuffs[ind / 3]->reset_ptr( sizeof( int ) );
5857  result = unpack_entities( remoteOwnedBuffs[ind / 3]->buff_ptr, store_remote_handles, ind / 3, is_iface,
5858  L1hloc, L1hrem, L1p, L2hloc, L2hrem, L2p, new_ents );
5859  if( MB_SUCCESS != result )
5860  {
5861  std::cout << "Failed to unpack entities. Buffer contents:" << std::endl;
5862  print_buffer( remoteOwnedBuffs[ind / 3]->mem_ptr, MB_MESG_ENTS_SIZE, buffProcs[ind / 3], false );
5863  return result;
5864  }
5865 
5866  if( recv_ent_reqs.size() != 3 * buffProcs.size() )
5867  {
5868  // Post irecv's for remote handles from new proc; shouldn't be iface,
5869  // since we know about all procs we share with
5870  assert( !is_iface );
5871  recv_remoteh_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
5872  for( unsigned int i = recv_ent_reqs.size(); i < 3 * buffProcs.size(); i += 3 )
5873  {
5874  localOwnedBuffs[i / 3]->reset_buffer();
5875  incoming2++;
5876  PRINT_DEBUG_IRECV( procConfig.proc_rank(), buffProcs[i / 3], localOwnedBuffs[i / 3]->mem_ptr,
5877  INITIAL_BUFF_SIZE, MB_MESG_REMOTEH_SIZE, incoming2 );
5878  success = MPI_Irecv( localOwnedBuffs[i / 3]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR,
5879  buffProcs[i / 3], MB_MESG_REMOTEH_SIZE, procConfig.proc_comm(),
5880  &recv_remoteh_reqs[i] );
5881  if( success != MPI_SUCCESS )
5882  {
5883  MB_SET_ERR( MB_FAILURE, "Failed to post irecv for remote handles in ghost exchange" );
5884  }
5885  }
5886  recv_ent_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
5887  sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
5888  }
5889  }
5890  }
5891 
5892  // Add requests for any new addl procs
5893  if( recv_ent_reqs.size() != 3 * buffProcs.size() )
5894  {
5895  // Shouldn't get here...
5896  MB_SET_ERR( MB_FAILURE, "Requests length doesn't match proc count in ghost exchange" );
5897  }
5898 
5899 #ifdef MOAB_HAVE_MPE
5900  if( myDebug->get_verbosity() == 2 )
5901  {
5902  MPE_Log_event( ENTITIES_END, procConfig.proc_rank(), "Ending entity exchange." );
5903  }
5904 #endif
5905 
5906  if( is_iface )
5907  {
5908  // Need to check over entities I sent and make sure I received
5909  // handles for them from all expected procs; if not, need to clean
5910  // them up
5911  result = check_clean_iface( allsent );
5912  if( MB_SUCCESS != result ) std::cout << "Failed check." << std::endl;
5913 
5914  // Now set the shared/interface tag on non-vertex entities on interface
5915  result = tag_iface_entities();MB_CHK_SET_ERR( result, "Failed to tag iface entities" );
5916 
5917 #ifndef NDEBUG
5918  result = check_sent_ents( allsent );
5919  if( MB_SUCCESS != result ) std::cout << "Failed check." << std::endl;
5920  result = check_all_shared_handles( true );
5921  if( MB_SUCCESS != result ) std::cout << "Failed check." << std::endl;
5922 #endif
5923 
5924 #ifdef MOAB_HAVE_MPE
5925  if( myDebug->get_verbosity() == 2 )
5926  {
5927  MPE_Log_event( IFACE_END, procConfig.proc_rank(), "Ending interface exchange." );
5928  }
5929 #endif
5930 
5931  //===========================================
5932  // Wait if requested
5933  //===========================================
5934  if( wait_all )
5935  {
5936  if( myDebug->get_verbosity() == 5 )
5937  {
5938  success = MPI_Barrier( procConfig.proc_comm() );
5939  }
5940  else
5941  {
5942  MPI_Status mult_status[3 * MAX_SHARING_PROCS];
5943  success = MPI_Waitall( 3 * buffProcs.size(), &recv_ent_reqs[0], mult_status );
5944  if( MPI_SUCCESS != success )
5945  {
5946  MB_SET_ERR( MB_FAILURE, "Failed in waitall in ghost exchange" );
5947  }
5948  success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], mult_status );
5949  if( MPI_SUCCESS != success )
5950  {
5951  MB_SET_ERR( MB_FAILURE, "Failed in waitall in ghost exchange" );
5952  }
5953  /*success = MPI_Waitall(3*buffProcs.size(), &recv_remoteh_reqs[0], mult_status);
5954  if (MPI_SUCCESS != success) {
5955  MB_SET_ERR(MB_FAILURE, "Failed in waitall in ghost exchange");
5956  }*/
5957  }
5958  }
5959 
5960  myDebug->tprintf( 1, "Total number of shared entities = %lu.\n", (unsigned long)sharedEnts.size() );
5961  myDebug->tprintf( 1, "Exiting exchange_ghost_cells for is_iface==true \n" );
5962 
5963  return MB_SUCCESS;
5964  }
5965 
5966  // we still need to wait on sendReqs, if they are not fulfilled yet
5967  if( wait_all )
5968  {
5969  if( myDebug->get_verbosity() == 5 )
5970  {
5971  success = MPI_Barrier( procConfig.proc_comm() );
5972  }
5973  else
5974  {
5975  MPI_Status mult_status[3 * MAX_SHARING_PROCS];
5976  success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], mult_status );
5977  if( MPI_SUCCESS != success )
5978  {
5979  MB_SET_ERR( MB_FAILURE, "Failed in waitall in ghost exchange" );
5980  }
5981  }
5982  }
5983  //===========================================
5984  // Send local handles for new ghosts to owner, then add
5985  // those to ghost list for that owner
5986  //===========================================
5987  for( p = 0, proc_it = buffProcs.begin(); proc_it != buffProcs.end(); ++proc_it, p++ )
5988  {
5989 
5990  // Reserve space on front for size and for initial buff size
5991  remoteOwnedBuffs[p]->reset_buffer( sizeof( int ) );
5992 
5993  result = pack_remote_handles( L1hloc[p], L1hrem[p], L1p[p], *proc_it, remoteOwnedBuffs[p] );MB_CHK_SET_ERR( result, "Failed to pack remote handles" );
5994  remoteOwnedBuffs[p]->set_stored_size();
5995 
5996  if( myDebug->get_verbosity() == 4 )
5997  {
5998  msgs.resize( msgs.size() + 1 );
5999  msgs.back() = new Buffer( *remoteOwnedBuffs[p] );
6000  }
6001  result = send_buffer( buffProcs[p], remoteOwnedBuffs[p], MB_MESG_REMOTEH_SIZE, sendReqs[3 * p],
6002  recv_remoteh_reqs[3 * p + 2], &dum_ack_buff, incoming2 );MB_CHK_SET_ERR( result, "Failed to send remote handles" );
6003  }
6004 
6005  //===========================================
6006  // Process remote handles of my ghosteds
6007  //===========================================
6008  while( incoming2 )
6009  {
6010  PRINT_DEBUG_WAITANY( recv_remoteh_reqs, MB_MESG_REMOTEH_SIZE, procConfig.proc_rank() );
6011  success = MPI_Waitany( 3 * buffProcs.size(), &recv_remoteh_reqs[0], &ind, &status );
6012  if( MPI_SUCCESS != success )
6013  {
6014  MB_SET_ERR( MB_FAILURE, "Failed in waitany in ghost exchange" );
6015  }
6016 
6017  // OK, received something; decrement incoming counter
6018  incoming2--;
6019 
6020  PRINT_DEBUG_RECD( status );
6021 
6022  bool done = false;
6023  unsigned int base_ind = 3 * ( ind / 3 );
6024  result = recv_buffer( MB_MESG_REMOTEH_SIZE, status, localOwnedBuffs[ind / 3], recv_remoteh_reqs[base_ind + 1],
6025  recv_remoteh_reqs[base_ind + 2], incoming2, remoteOwnedBuffs[ind / 3],
6026  sendReqs[base_ind + 1], sendReqs[base_ind + 2], done );MB_CHK_SET_ERR( result, "Failed to receive remote handles" );
6027  if( done )
6028  {
6029  // Incoming remote handles
6030  if( myDebug->get_verbosity() == 4 )
6031  {
6032  msgs.resize( msgs.size() + 1 );
6033  msgs.back() = new Buffer( *localOwnedBuffs[ind / 3] );
6034  }
6035  localOwnedBuffs[ind / 3]->reset_ptr( sizeof( int ) );
6036  result =
6037  unpack_remote_handles( buffProcs[ind / 3], localOwnedBuffs[ind / 3]->buff_ptr, L2hloc, L2hrem, L2p );MB_CHK_SET_ERR( result, "Failed to unpack remote handles" );
6038  }
6039  }
6040 
6041 #ifdef MOAB_HAVE_MPE
6042  if( myDebug->get_verbosity() == 2 )
6043  {
6044  MPE_Log_event( RHANDLES_END, procConfig.proc_rank(), "Ending remote handles." );
6045  MPE_Log_event( GHOST_END, procConfig.proc_rank(), "Ending ghost exchange (still doing checks)." );
6046  }
6047 #endif
6048 
6049  //===========================================
6050  // Wait if requested
6051  //===========================================
6052  if( wait_all )
6053  {
6054  if( myDebug->get_verbosity() == 5 )
6055  {
6056  success = MPI_Barrier( procConfig.proc_comm() );
6057  }
6058  else
6059  {
6060  MPI_Status mult_status[3 * MAX_SHARING_PROCS];
6061  success = MPI_Waitall( 3 * buffProcs.size(), &recv_remoteh_reqs[0], mult_status );
6062  if( MPI_SUCCESS == success ) success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], mult_status );
6063  }
6064  if( MPI_SUCCESS != success )
6065  {
6066  MB_SET_ERR( MB_FAILURE, "Failed in waitall in ghost exchange" );
6067  }
6068  }
6069 
6070 #ifndef NDEBUG
6071  result = check_sent_ents( allsent );MB_CHK_SET_ERR( result, "Failed check on shared entities" );
6072  result = check_all_shared_handles( true );MB_CHK_SET_ERR( result, "Failed check on all shared handles" );
6073 #endif
6074 
6075  if( file_set && !new_ents.empty() )
6076  {
6077  result = mbImpl->add_entities( *file_set, &new_ents[0], new_ents.size() );MB_CHK_SET_ERR( result, "Failed to add new entities to set" );
6078  }
6079 
6080  myDebug->tprintf( 1, "Total number of shared entities = %lu.\n", (unsigned long)sharedEnts.size() );
6081  myDebug->tprintf( 1, "Exiting exchange_ghost_cells for is_iface==false \n" );
6082 
6083  return MB_SUCCESS;
6084 }

References moab::Interface::add_entities(), buffProcs, check_all_shared_handles(), check_clean_iface(), check_sent_ents(), moab::Range::compactness(), moab::Range::empty(), ErrorCode, get_sent_ents(), moab::DebugOutput::get_verbosity(), INITIAL_BUFF_SIZE, localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, moab::MB_MESG_ENTS_SIZE, moab::MB_MESG_REMOTEH_SIZE, MB_SET_ERR, MB_SUCCESS, mbImpl, MPE_Log_event, moab::msgs, myDebug, pack_entities(), pack_remote_handles(), print_buffer(), PRINT_DEBUG_IRECV, PRINT_DEBUG_RECD, PRINT_DEBUG_WAITANY, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, recv_buffer(), remoteOwnedBuffs, moab::TupleList::reset(), reset_all_buffers(), send_buffer(), sendReqs, sharedEnts, moab::Range::size(), size(), tag_iface_entities(), moab::DebugOutput::tprintf(), unpack_entities(), and unpack_remote_handles().

Referenced by moab::NestedRefine::exchange_ghosts(), moab::ReadParallel::load_file(), main(), resolve_shared_ents(), and moab::ParallelMergeMesh::TagSharedElements().

◆ exchange_ghost_cells() [2/2]

ErrorCode moab::ParallelComm::exchange_ghost_cells ( ParallelComm **  pc,
unsigned int  num_procs,
int  ghost_dim,
int  bridge_dim,
int  num_layers,
int  addl_ents,
bool  store_remote_handles,
EntityHandle file_sets = NULL 
)
static

Static version of exchange_ghost_cells, exchanging info through buffers rather than messages.
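
A hedged sketch of how a serial test driver might call this variant; pc0 and pc1 are assumed pre-built ParallelComm instances standing in for two ranks:

moab::ParallelComm* pcs[2] = { pc0, pc1 };
moab::ErrorCode rval =
    moab::ParallelComm::exchange_ghost_cells( pcs, 2 /*num_procs*/, 3 /*ghost_dim*/,
                                              0 /*bridge_dim*/, 1 /*num_layers*/,
                                              0 /*addl_ents*/, true /*store_remote_handles*/ );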

Definition at line 6588 of file ParallelComm.cpp.

6596 {
6597  // Static version of function, exchanging info through buffers rather
6598  // than through messages
6599 
6600  // If we're only finding out about existing ents, we have to be storing
6601  // remote handles too
6602  assert( num_layers > 0 || store_remote_handles );
6603 
6604  const bool is_iface = !num_layers;
6605 
6606  unsigned int ind;
6607  ParallelComm* pc;
6608  ErrorCode result = MB_SUCCESS;
6609 
6610  std::vector< Error* > ehs( num_procs );
6611  for( unsigned int i = 0; i < num_procs; i++ )
6612  {
6613  result = pcs[i]->get_moab()->query_interface( ehs[i] );
6614  assert( MB_SUCCESS == result );
6615  }
6616 
6617  // When this function is called, buffProcs should already have any
6618  // communicating procs
6619 
6620  //===========================================
6621  // Get entities to be sent to neighbors
6622  //===========================================
6623 
6624  // Done in a separate loop over procs because sometimes later procs
6625  // need to add info to earlier procs' messages
6626  Range sent_ents[MAX_SHARING_PROCS][MAX_SHARING_PROCS], allsent[MAX_SHARING_PROCS];
6627 
6628  //===========================================
6629  // Get entities to be sent to neighbors
6630  //===========================================
6631  TupleList entprocs[MAX_SHARING_PROCS];
6632  for( unsigned int p = 0; p < num_procs; p++ )
6633  {
6634  pc = pcs[p];
6635  result = pc->get_sent_ents( is_iface, bridge_dim, ghost_dim, num_layers, addl_ents, sent_ents[p], allsent[p],
6636  entprocs[p] );MB_CHK_SET_ERR( result, "p = " << p << ", get_sent_ents failed" );
6637 
6638  //===========================================
6639  // Pack entities into buffers
6640  //===========================================
6641  for( ind = 0; ind < pc->buffProcs.size(); ind++ )
6642  {
6643  // Entities
6644  pc->localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
6645  result = pc->pack_entities( sent_ents[p][ind], pc->localOwnedBuffs[ind], store_remote_handles,
6646  pc->buffProcs[ind], is_iface, &entprocs[p], &allsent[p] );MB_CHK_SET_ERR( result, "p = " << p << ", packing entities failed" );
6647  }
6648 
6649  entprocs[p].reset();
6650  }
6651 
6652  //===========================================
6653  // Receive/unpack new entities
6654  //===========================================
6655  // Number of incoming messages for ghosts is the number of procs we
6656  // communicate with; for iface, it's the number of those with lower rank
6657  std::vector< std::vector< EntityHandle > > L1hloc[MAX_SHARING_PROCS], L1hrem[MAX_SHARING_PROCS];
6658  std::vector< std::vector< int > > L1p[MAX_SHARING_PROCS];
6659  std::vector< EntityHandle > L2hloc[MAX_SHARING_PROCS], L2hrem[MAX_SHARING_PROCS];
6660  std::vector< unsigned int > L2p[MAX_SHARING_PROCS];
6661  std::vector< EntityHandle > new_ents[MAX_SHARING_PROCS];
6662 
6663  for( unsigned int p = 0; p < num_procs; p++ )
6664  {
6665  L1hloc[p].resize( pcs[p]->buffProcs.size() );
6666  L1hrem[p].resize( pcs[p]->buffProcs.size() );
6667  L1p[p].resize( pcs[p]->buffProcs.size() );
6668  }
6669 
6670  for( unsigned int p = 0; p < num_procs; p++ )
6671  {
6672  pc = pcs[p];
6673 
6674  for( ind = 0; ind < pc->buffProcs.size(); ind++ )
6675  {
6676  // Incoming ghost entities; unpack; returns entities received
6677  // both from sending proc and from owning proc (which may be different)
6678 
6679  // Buffer could be empty, which means there isn't any message to
6680  // unpack (due to this comm proc getting added as a result of indirect
6681  // communication); just skip this unpack
6682  if( pc->localOwnedBuffs[ind]->get_stored_size() == 0 ) continue;
6683 
6684  unsigned int to_p = pc->buffProcs[ind];
6685  pc->localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
6686  result = pcs[to_p]->unpack_entities( pc->localOwnedBuffs[ind]->buff_ptr, store_remote_handles, ind,
6687  is_iface, L1hloc[to_p], L1hrem[to_p], L1p[to_p], L2hloc[to_p],
6688  L2hrem[to_p], L2p[to_p], new_ents[to_p] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to unpack entities" );
6689  }
6690  }
6691 
6692  if( is_iface )
6693  {
6694  // Need to check over entities I sent and make sure I received
6695  // handles for them from all expected procs; if not, need to clean
6696  // them up
6697  for( unsigned int p = 0; p < num_procs; p++ )
6698  {
6699  result = pcs[p]->check_clean_iface( allsent[p] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to check on shared entities" );
6700  }
6701 
6702 #ifndef NDEBUG
6703  for( unsigned int p = 0; p < num_procs; p++ )
6704  {
6705  result = pcs[p]->check_sent_ents( allsent[p] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to check on shared entities" );
6706  }
6707  result = check_all_shared_handles( pcs, num_procs );MB_CHK_SET_ERR( result, "Failed to check on all shared handles" );
6708 #endif
6709  return MB_SUCCESS;
6710  }
6711 
6712  //===========================================
6713  // Send local handles for new ghosts to owner, then add
6714  // those to ghost list for that owner
6715  //===========================================
6716  std::vector< unsigned int >::iterator proc_it;
6717  for( unsigned int p = 0; p < num_procs; p++ )
6718  {
6719  pc = pcs[p];
6720 
6721  for( ind = 0, proc_it = pc->buffProcs.begin(); proc_it != pc->buffProcs.end(); ++proc_it, ind++ )
6722  {
6723  // Skip if iface layer and higher-rank proc
6724  pc->localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
6725  result = pc->pack_remote_handles( L1hloc[p][ind], L1hrem[p][ind], L1p[p][ind], *proc_it,
6726  pc->localOwnedBuffs[ind] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to pack remote handles" );
6727  }
6728  }
6729 
6730  //===========================================
6731  // Process remote handles of my ghosteds
6732  //===========================================
6733  for( unsigned int p = 0; p < num_procs; p++ )
6734  {
6735  pc = pcs[p];
6736 
6737  for( ind = 0, proc_it = pc->buffProcs.begin(); proc_it != pc->buffProcs.end(); ++proc_it, ind++ )
6738  {
6739  // Incoming remote handles
6740  unsigned int to_p = pc->buffProcs[ind];
6741  pc->localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
6742  result = pcs[to_p]->unpack_remote_handles( p, pc->localOwnedBuffs[ind]->buff_ptr, L2hloc[to_p],
6743  L2hrem[to_p], L2p[to_p] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to unpack remote handles" );
6744  }
6745  }
6746 
6747 #ifndef NDEBUG
6748  for( unsigned int p = 0; p < num_procs; p++ )
6749  {
6750  result = pcs[p]->check_sent_ents( allsent[p] );MB_CHK_SET_ERR( result, "p = " << p << ", failed to check on shared entities" );
6751  }
6752 
6753  result = ParallelComm::check_all_shared_handles( pcs, num_procs );MB_CHK_SET_ERR( result, "Failed to check on all shared handles" );
6754 #endif
6755 
6756  if( file_sets )
6757  {
6758  for( unsigned int p = 0; p < num_procs; p++ )
6759  {
6760  if( new_ents[p].empty() ) continue;
6761  result = pcs[p]->get_moab()->add_entities( file_sets[p], &new_ents[p][0], new_ents[p].size() );MB_CHK_SET_ERR( result, "p = " << p << ", failed to add new entities to set" );
6762  }
6763  }
6764 
6765  return MB_SUCCESS;
6766 }

References moab::Interface::add_entities(), buffProcs, check_all_shared_handles(), check_clean_iface(), check_sent_ents(), ErrorCode, get_moab(), get_sent_ents(), localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, pack_entities(), pack_remote_handles(), moab::Interface::query_interface(), moab::TupleList::reset(), size(), unpack_entities(), and unpack_remote_handles().

◆ exchange_owned_mesh()

ErrorCode moab::ParallelComm::exchange_owned_mesh ( std::vector< unsigned int > &  exchange_procs,
std::vector< Range * > &  exchange_ents,
std::vector< MPI_Request > &  recv_ent_reqs,
std::vector< MPI_Request > &  recv_remoteh_reqs,
const bool  recv_posted,
bool  store_remote_handles,
bool  wait_all,
bool  migrate = false 
)

Exchange owned mesh for input mesh entities and sets. This function is called twice by exchange_owned_meshs, exchanging entities before sets.

Parameters
migrate: whether the ownership of the entities is changed (i.e., entities are migrated) or not
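
A hedged sketch of a direct call for the entity pass, with no receives pre-posted; procs, ents, and pcomm are assumed to be prepared as in the exchange_owned_meshs sketch below:

std::vector< MPI_Request > ent_reqs, handle_reqs;
moab::ErrorCode rval = pcomm->exchange_owned_mesh( procs, ents, ent_reqs, handle_reqs,
                                                   false /*recv_posted*/,
                                                   true /*store_remote_handles*/,
                                                   true /*wait_all*/ );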

Definition at line 6912 of file ParallelComm.cpp.

6920 {
6921 #ifdef MOAB_HAVE_MPE
6922  if( myDebug->get_verbosity() == 2 )
6923  {
6924  MPE_Log_event( OWNED_START, procConfig.proc_rank(), "Starting owned ents exchange." );
6925  }
6926 #endif
6927 
6928  myDebug->tprintf( 1, "Entering exchange_owned_mesh\n" );
6929  if( myDebug->get_verbosity() == 4 )
6930  {
6931  msgs.clear();
6932  msgs.reserve( MAX_SHARING_PROCS );
6933  }
6934  unsigned int i;
6935  int ind, success;
6936  ErrorCode result = MB_SUCCESS;
6937  int incoming1 = 0, incoming2 = 0;
6938 
6939  // Set buffProcs with communicating procs
6940  unsigned int n_proc = exchange_procs.size();
6941  for( i = 0; i < n_proc; i++ )
6942  {
6943  ind = get_buffers( exchange_procs[i] );
6944  result = add_verts( *exchange_ents[i] );MB_CHK_SET_ERR( result, "Failed to add verts" );
6945 
6946  // Filter out entities already shared with destination
6947  Range tmp_range;
6948  result = filter_pstatus( *exchange_ents[i], PSTATUS_SHARED, PSTATUS_AND, buffProcs[ind], &tmp_range );MB_CHK_SET_ERR( result, "Failed to filter on owner" );
6949  if( !tmp_range.empty() )
6950  {
6951  *exchange_ents[i] = subtract( *exchange_ents[i], tmp_range );
6952  }
6953  }
6954 
6955  //===========================================
6956  // Post ghost irecv's for entities from all communicating procs
6957  //===========================================
6958 #ifdef MOAB_HAVE_MPE
6959  if( myDebug->get_verbosity() == 2 )
6960  {
6961  MPE_Log_event( ENTITIES_START, procConfig.proc_rank(), "Starting entity exchange." );
6962  }
6963 #endif
6964 
6965  // Index reqs the same as buffer/sharing procs indices
6966  if( !recv_posted )
6967  {
6968  reset_all_buffers();
6969  recv_ent_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
6970  recv_remoteh_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
6971  sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
6972 
6973  for( i = 0; i < n_proc; i++ )
6974  {
6975  ind = get_buffers( exchange_procs[i] );
6976  incoming1++;
6977  PRINT_DEBUG_IRECV( procConfig.proc_rank(), buffProcs[ind], remoteOwnedBuffs[ind]->mem_ptr,
6978  INITIAL_BUFF_SIZE, MB_MESG_ENTS_SIZE, incoming1 );
6979  success = MPI_Irecv( remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, buffProcs[ind],
6980  MB_MESG_ENTS_SIZE, procConfig.proc_comm(), &recv_ent_reqs[3 * ind] );
6981  if( success != MPI_SUCCESS )
6982  {
6983  MB_SET_ERR( MB_FAILURE, "Failed to post irecv in owned entity exchange" );
6984  }
6985  }
6986  }
6987  else
6988  incoming1 += n_proc;
6989 
6990  //===========================================
6991  // Get entities to be sent to neighbors
6992  // Need to get procs each entity is sent to
6993  //===========================================
6994  Range allsent, tmp_range;
6995  int dum_ack_buff;
6996  int npairs = 0;
6997  TupleList entprocs;
6998  for( i = 0; i < n_proc; i++ )
6999  {
7000  int n_ents = exchange_ents[i]->size();
7001  if( n_ents > 0 )
7002  {
7003  npairs += n_ents; // Get the total # of proc/handle pairs
7004  allsent.merge( *exchange_ents[i] );
7005  }
7006  }
7007 
7008  // Allocate a TupleList of that size
7009  entprocs.initialize( 1, 0, 1, 0, npairs );
7010  entprocs.enableWriteAccess();
7011 
7012  // Put the proc/handle pairs in the list
7013  for( i = 0; i < n_proc; i++ )
7014  {
7015  for( Range::iterator rit = exchange_ents[i]->begin(); rit != exchange_ents[i]->end(); ++rit )
7016  {
7017  entprocs.vi_wr[entprocs.get_n()] = exchange_procs[i];
7018  entprocs.vul_wr[entprocs.get_n()] = *rit;
7019  entprocs.inc_n();
7020  }
7021  }
7022 
7023  // Sort by handle
7024  moab::TupleList::buffer sort_buffer;
7025  sort_buffer.buffer_init( npairs );
7026  entprocs.sort( 1, &sort_buffer );
7027  sort_buffer.reset();
7028 
7029  myDebug->tprintf( 1, "allsent ents compactness (size) = %f (%lu)\n", allsent.compactness(),
7030  (unsigned long)allsent.size() );
7031 
7032  //===========================================
7033  // Pack and send ents from this proc to others
7034  //===========================================
7035  for( i = 0; i < n_proc; i++ )
7036  {
7037  ind = get_buffers( exchange_procs[i] );
7038  myDebug->tprintf( 1, "Sent ents compactness (size) = %f (%lu)\n", exchange_ents[i]->compactness(),
7039  (unsigned long)exchange_ents[i]->size() );
7040  // Reserve space on front for size and for initial buff size
7041  localOwnedBuffs[ind]->reset_buffer( sizeof( int ) );
7042  result = pack_buffer( *exchange_ents[i], false, true, store_remote_handles, buffProcs[ind],
7043  localOwnedBuffs[ind], &entprocs, &allsent );
7044 
7045  if( myDebug->get_verbosity() == 4 )
7046  {
7047  msgs.resize( msgs.size() + 1 );
7048  msgs.back() = new Buffer( *localOwnedBuffs[ind] );
7049  }
7050 
7051  // Send the buffer (size stored in front in send_buffer)
7052  result = send_buffer( exchange_procs[i], localOwnedBuffs[ind], MB_MESG_ENTS_SIZE, sendReqs[3 * ind],
7053  recv_ent_reqs[3 * ind + 2], &dum_ack_buff, incoming1, MB_MESG_REMOTEH_SIZE,
7054  ( store_remote_handles ? localOwnedBuffs[ind] : NULL ), &recv_remoteh_reqs[3 * ind],
7055  &incoming2 );MB_CHK_SET_ERR( result, "Failed to Isend in ghost exchange" );
7056  }
7057 
7058  entprocs.reset();
7059 
7060  //===========================================
7061  // Receive/unpack new entities
7062  //===========================================
7063  // Number of incoming messages is the number of procs we communicate with
7064  MPI_Status status;
7065  std::vector< std::vector< EntityHandle > > recd_ents( buffProcs.size() );
7066  std::vector< std::vector< EntityHandle > > L1hloc( buffProcs.size() ), L1hrem( buffProcs.size() );
7067  std::vector< std::vector< int > > L1p( buffProcs.size() );
7068  std::vector< EntityHandle > L2hloc, L2hrem;
7069  std::vector< unsigned int > L2p;
7070  std::vector< EntityHandle > new_ents;
7071 
7072  while( incoming1 )
7073  {
7074  // Wait for all recvs of ents before proceeding to sending remote handles,
7075  // b/c some procs may have sent to a 3rd proc ents owned by me;
7076  PRINT_DEBUG_WAITANY( recv_ent_reqs, MB_MESG_ENTS_SIZE, procConfig.proc_rank() );
7077 
7078  success = MPI_Waitany( 3 * buffProcs.size(), &recv_ent_reqs[0], &ind, &status );
7079  if( MPI_SUCCESS != success )
7080  {
7081  MB_SET_ERR( MB_FAILURE, "Failed in waitany in owned entity exchange" );
7082  }
7083 
7084  PRINT_DEBUG_RECD( status );
7085 
7086  // OK, received something; decrement incoming counter
7087  incoming1--;
7088  bool done = false;
7089 
7090  // In case ind is for ack, we need index of one before it
7091  unsigned int base_ind = 3 * ( ind / 3 );
7092  result = recv_buffer( MB_MESG_ENTS_SIZE, status, remoteOwnedBuffs[ind / 3], recv_ent_reqs[base_ind + 1],
7093  recv_ent_reqs[base_ind + 2], incoming1, localOwnedBuffs[ind / 3], sendReqs[base_ind + 1],
7094  sendReqs[base_ind + 2], done, ( store_remote_handles ? localOwnedBuffs[ind / 3] : NULL ),
7095  MB_MESG_REMOTEH_SIZE, &recv_remoteh_reqs[base_ind + 1], &incoming2 );MB_CHK_SET_ERR( result, "Failed to receive buffer" );
7096 
7097  if( done )
7098  {
7099  if( myDebug->get_verbosity() == 4 )
7100  {
7101  msgs.resize( msgs.size() + 1 );
7102  msgs.back() = new Buffer( *remoteOwnedBuffs[ind / 3] );
7103  }
7104 
7105  // Message completely received - process buffer that was sent
7106  remoteOwnedBuffs[ind / 3]->reset_ptr( sizeof( int ) );
7107  result = unpack_buffer( remoteOwnedBuffs[ind / 3]->buff_ptr, store_remote_handles, buffProcs[ind / 3],
7108  ind / 3, L1hloc, L1hrem, L1p, L2hloc, L2hrem, L2p, new_ents, true );
7109  if( MB_SUCCESS != result )
7110  {
7111  std::cout << "Failed to unpack entities. Buffer contents:" << std::endl;
7112  print_buffer( remoteOwnedBuffs[ind / 3]->mem_ptr, MB_MESG_ENTS_SIZE, buffProcs[ind / 3], false );
7113  return result;
7114  }
7115 
7116  if( recv_ent_reqs.size() != 3 * buffProcs.size() )
7117  {
7118  // Post irecv's for remote handles from new proc
7119  recv_remoteh_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
7120  for( i = recv_ent_reqs.size(); i < 3 * buffProcs.size(); i += 3 )
7121  {
7122  localOwnedBuffs[i / 3]->reset_buffer();
7123  incoming2++;
7124  PRINT_DEBUG_IRECV( procConfig.proc_rank(), buffProcs[i / 3], localOwnedBuffs[i / 3]->mem_ptr,
7125  INITIAL_BUFF_SIZE, MB_MESG_REMOTEH_SIZE, incoming2 );
7126  success = MPI_Irecv( localOwnedBuffs[i / 3]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR,
7127  buffProcs[i / 3], MB_MESG_REMOTEH_SIZE, procConfig.proc_comm(),
7128  &recv_remoteh_reqs[i] );
7129  if( success != MPI_SUCCESS )
7130  {
7131  MB_SET_ERR( MB_FAILURE, "Failed to post irecv for remote handles in ghost exchange" );
7132  }
7133  }
7134  recv_ent_reqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
7135  sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
7136  }
7137  }
7138  }
7139 
7140  // Assign and remove newly created elements from/to receive processor
7141  result = assign_entities_part( new_ents, procConfig.proc_rank() );MB_CHK_SET_ERR( result, "Failed to assign entities to part" );
7142  if( migrate )
7143  {
7144  result = remove_entities_part( allsent, procConfig.proc_rank() );MB_CHK_SET_ERR( result, "Failed to remove entities to part" );
7145  }
7146 
7147  // Add requests for any new addl procs
7148  if( recv_ent_reqs.size() != 3 * buffProcs.size() )
7149  {
7150  // Shouldn't get here...
7151  MB_SET_ERR( MB_FAILURE, "Requests length doesn't match proc count in entity exchange" );
7152  }
7153 
7154 #ifdef MOAB_HAVE_MPE
7155  if( myDebug->get_verbosity() == 2 )
7156  {
7157  MPE_Log_event( ENTITIES_END, procConfig.proc_rank(), "Ending entity exchange." );
7158  }
7159 #endif
7160 
7161  // we still need to wait on sendReqs, if they are not fulfilled yet
7162  if( wait_all )
7163  {
7164  if( myDebug->get_verbosity() == 5 )
7165  {
7166  success = MPI_Barrier( procConfig.proc_comm() );
7167  }
7168  else
7169  {
7170  MPI_Status mult_status[3 * MAX_SHARING_PROCS];
7171  success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], mult_status );
7172  if( MPI_SUCCESS != success )
7173  {
7174  MB_SET_ERR( MB_FAILURE, "Failed in waitall in exchange owned mesh" );
7175  }
7176  }
7177  }
7178 
7179  //===========================================
7180  // Send local handles for new entity to owner
7181  //===========================================
7182  for( i = 0; i < n_proc; i++ )
7183  {
7184  ind = get_buffers( exchange_procs[i] );
7185  // Reserve space on front for size and for initial buff size
7186  remoteOwnedBuffs[ind]->reset_buffer( sizeof( int ) );
7187 
7188  result = pack_remote_handles( L1hloc[ind], L1hrem[ind], L1p[ind], buffProcs[ind], remoteOwnedBuffs[ind] );MB_CHK_SET_ERR( result, "Failed to pack remote handles" );
7189  remoteOwnedBuffs[ind]->set_stored_size();
7190 
7191  if( myDebug->get_verbosity() == 4 )
7192  {
7193  msgs.resize( msgs.size() + 1 );
7194  msgs.back() = new Buffer( *remoteOwnedBuffs[ind] );
7195  }
7196  result = send_buffer( buffProcs[ind], remoteOwnedBuffs[ind], MB_MESG_REMOTEH_SIZE, sendReqs[3 * ind],
7197  recv_remoteh_reqs[3 * ind + 2], &dum_ack_buff, incoming2 );MB_CHK_SET_ERR( result, "Failed to send remote handles" );
7198  }
7199 
7200  //===========================================
7201  // Process remote handles of my ghosteds
7202  //===========================================
7203  while( incoming2 )
7204  {
7205  PRINT_DEBUG_WAITANY( recv_remoteh_reqs, MB_MESG_REMOTEH_SIZE, procConfig.proc_rank() );
7206  success = MPI_Waitany( 3 * buffProcs.size(), &recv_remoteh_reqs[0], &ind, &status );
7207  if( MPI_SUCCESS != success )
7208  {
7209  MB_SET_ERR( MB_FAILURE, "Failed in waitany in owned entity exchange" );
7210  }
7211 
7212  // OK, received something; decrement incoming counter
7213  incoming2--;
7214 
7215  PRINT_DEBUG_RECD( status );
7216 
7217  bool done = false;
7218  unsigned int base_ind = 3 * ( ind / 3 );
7219  result = recv_buffer( MB_MESG_REMOTEH_SIZE, status, localOwnedBuffs[ind / 3], recv_remoteh_reqs[base_ind + 1],
7220  recv_remoteh_reqs[base_ind + 2], incoming2, remoteOwnedBuffs[ind / 3],
7221  sendReqs[base_ind + 1], sendReqs[base_ind + 2], done );MB_CHK_SET_ERR( result, "Failed to receive remote handles" );
7222 
7223  if( done )
7224  {
7225  // Incoming remote handles
7226  if( myDebug->get_verbosity() == 4 )
7227  {
7228  msgs.resize( msgs.size() + 1 );
7229  msgs.back() = new Buffer( *localOwnedBuffs[ind / 3] );
7230  }
7231 
7232  localOwnedBuffs[ind / 3]->reset_ptr( sizeof( int ) );
7233  result =
7234  unpack_remote_handles( buffProcs[ind / 3], localOwnedBuffs[ind / 3]->buff_ptr, L2hloc, L2hrem, L2p );MB_CHK_SET_ERR( result, "Failed to unpack remote handles" );
7235  }
7236  }
7237 
7238 #ifdef MOAB_HAVE_MPE
7239  if( myDebug->get_verbosity() == 2 )
7240  {
7241  MPE_Log_event( RHANDLES_END, procConfig.proc_rank(), "Ending remote handles." );
7242  MPE_Log_event( OWNED_END, procConfig.proc_rank(), "Ending ghost exchange (still doing checks)." );
7243  }
7244 #endif
7245 
7246  //===========================================
7247  // Wait if requested
7248  //===========================================
7249  if( wait_all )
7250  {
7251  if( myDebug->get_verbosity() == 5 )
7252  {
7253  success = MPI_Barrier( procConfig.proc_comm() );
7254  }
7255  else
7256  {
7257  MPI_Status mult_status[3 * MAX_SHARING_PROCS];
7258  success = MPI_Waitall( 3 * buffProcs.size(), &recv_remoteh_reqs[0], mult_status );
7259  if( MPI_SUCCESS == success ) success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], mult_status );
7260  }
7261  if( MPI_SUCCESS != success )
7262  {
7263  MB_SET_ERR( MB_FAILURE, "Failed in waitall in owned entity exchange" );
7264  }
7265  }
7266 
7267 #ifndef NDEBUG
7268  result = check_sent_ents( allsent );MB_CHK_SET_ERR( result, "Failed check on shared entities" );
7269 #endif
7270  myDebug->tprintf( 1, "Exiting exchange_owned_mesh\n" );
7271 
7272  return MB_SUCCESS;
7273 }

References add_verts(), assign_entities_part(), buffProcs, check_sent_ents(), moab::Range::compactness(), moab::Range::empty(), moab::TupleList::enableWriteAccess(), ErrorCode, filter_pstatus(), get_buffers(), moab::TupleList::get_n(), moab::DebugOutput::get_verbosity(), moab::TupleList::inc_n(), INITIAL_BUFF_SIZE, moab::TupleList::initialize(), localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, moab::MB_MESG_ENTS_SIZE, moab::MB_MESG_REMOTEH_SIZE, MB_SET_ERR, MB_SUCCESS, moab::Range::merge(), MPE_Log_event, moab::msgs, myDebug, pack_buffer(), pack_remote_handles(), print_buffer(), PRINT_DEBUG_IRECV, PRINT_DEBUG_RECD, PRINT_DEBUG_WAITANY, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, PSTATUS_AND, PSTATUS_SHARED, recv_buffer(), remoteOwnedBuffs, remove_entities_part(), moab::TupleList::buffer::reset(), moab::TupleList::reset(), reset_all_buffers(), send_buffer(), sendReqs, moab::Range::size(), size(), moab::TupleList::sort(), moab::subtract(), moab::DebugOutput::tprintf(), unpack_buffer(), unpack_remote_handles(), moab::TupleList::vi_wr, and moab::TupleList::vul_wr.

Referenced by exchange_owned_meshs().

◆ exchange_owned_meshs()

ErrorCode moab::ParallelComm::exchange_owned_meshs ( std::vector< unsigned int > &  exchange_procs,
std::vector< Range * > &  exchange_ents,
std::vector< MPI_Request > &  recv_ent_reqs,
std::vector< MPI_Request > &  recv_remoteh_reqs,
bool  store_remote_handles,
bool  wait_all = true,
bool  migrate = false,
int  dim = 0 
)

Exchange owned mesh for input mesh entities and sets. This function should be called collectively over the communicator for this ParallelComm.

Parameters
exchange_procs: processors to exchange entities with
exchange_ents: exchanged entities for each processor
migrate: whether the ownership of the entities is changed (i.e., entities are migrated) or not
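
A hedged migration sketch; dest_rank and the Range of owned entities to ship are assumptions, and the call must be made on every rank of the communicator:

std::vector< unsigned int > procs( 1, dest_rank );
moab::Range to_send;  // owned entities bound for dest_rank, filled elsewhere
std::vector< moab::Range* > ents( 1, &to_send );
std::vector< MPI_Request > ent_reqs, handle_reqs;
moab::ErrorCode rval = pcomm->exchange_owned_meshs( procs, ents, ent_reqs, handle_reqs,
                                                    true /*store_remote_handles*/,
                                                    true /*wait_all*/, true /*migrate*/ );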

Definition at line 6842 of file ParallelComm.cpp.

6850 {
6851  // Filter out entities already shared with destination
6852  // Exchange twice for entities and sets
6853  ErrorCode result;
6854  std::vector< unsigned int > exchange_procs_sets;
6855  std::vector< Range* > exchange_sets;
6856  int n_proc = exchange_procs.size();
6857  for( int i = 0; i < n_proc; i++ )
6858  {
6859  Range set_range = exchange_ents[i]->subset_by_type( MBENTITYSET );
6860  *exchange_ents[i] = subtract( *exchange_ents[i], set_range );
6861  Range* tmp_range = new Range( set_range );
6862  exchange_sets.push_back( tmp_range );
6863  exchange_procs_sets.push_back( exchange_procs[i] );
6864  }
6865 
6866  if( dim == 2 )
6867  {
6868  // Exchange entities first
6869  result = exchange_owned_mesh( exchange_procs, exchange_ents, recvReqs, recvRemotehReqs, true,
6870  store_remote_handles, wait_all, migrate );MB_CHK_SET_ERR( result, "Failed to exchange owned mesh entities" );
6871 
6872  // Exchange sets
6873  result = exchange_owned_mesh( exchange_procs_sets, exchange_sets, recvReqs, recvRemotehReqs, false,
6874  store_remote_handles, wait_all, migrate );
6875  }
6876  else
6877  {
6878  // Exchange entities first
6879  result = exchange_owned_mesh( exchange_procs, exchange_ents, recv_ent_reqs, recv_remoteh_reqs, false,
6880  store_remote_handles, wait_all, migrate );MB_CHK_SET_ERR( result, "Failed to exchange owned mesh entities" );
6881 
6882  // Exchange sets
6883  result = exchange_owned_mesh( exchange_procs_sets, exchange_sets, recv_ent_reqs, recv_remoteh_reqs, false,
6884  store_remote_handles, wait_all, migrate );MB_CHK_SET_ERR( result, "Failed to exchange owned mesh sets" );
6885  }
6886 
6887  for( int i = 0; i < n_proc; i++ )
6888  delete exchange_sets[i];
6889 
6890  // Build up the list of shared entities
6891  std::map< std::vector< int >, std::vector< EntityHandle > > proc_nvecs;
6892  int procs[MAX_SHARING_PROCS];
6893  EntityHandle handles[MAX_SHARING_PROCS];
6894  int nprocs;
6895  unsigned char pstat;
6896  for( std::set< EntityHandle >::iterator vit = sharedEnts.begin(); vit != sharedEnts.end(); ++vit )
6897  {
6898  if( mbImpl->dimension_from_handle( *vit ) > 2 ) continue;
6899  result = get_sharing_data( *vit, procs, handles, pstat, nprocs );MB_CHK_SET_ERR( result, "Failed to get sharing data in exchange_owned_meshs" );
6900  std::sort( procs, procs + nprocs );
6901  std::vector< int > tmp_procs( procs, procs + nprocs );
6902  assert( tmp_procs.size() != 2 );
6903  proc_nvecs[tmp_procs].push_back( *vit );
6904  }
6905 
6906  // Create interface sets from shared entities
6907  result = create_interface_sets( proc_nvecs );MB_CHK_SET_ERR( result, "Failed to create interface sets" );
6908 
6909  return MB_SUCCESS;
6910 }

References create_interface_sets(), dim, moab::Interface::dimension_from_handle(), ErrorCode, exchange_owned_mesh(), get_sharing_data(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, MBENTITYSET, mbImpl, recvRemotehReqs, recvReqs, sharedEnts, moab::Range::subset_by_type(), and moab::subtract().

◆ exchange_tags() [1/3]

ErrorCode moab::ParallelComm::exchange_tags ( const char *  tag_name,
const Range entities 
)
inline

Exchange tags for all shared and ghosted entities. This function should be called collectively over the communicator for this ParallelComm. If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities, this function must still be called, since it is collective.

Parameters
tag_name: Name of tag to be exchanged
entities: Entities for which tags are exchanged
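
A hedged usage sketch; the tag name "MATERIAL" is an assumption, and the empty Range selects all shared entities:

moab::Range no_filter;  // empty => all shared entities participate
moab::ErrorCode rval = pcomm->exchange_tags( "MATERIAL", no_filter );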

Definition at line 1589 of file ParallelComm.hpp.

1590 {
1591  // get the tag handle
1592  std::vector< Tag > tags( 1 );
1593  ErrorCode result = mbImpl->tag_get_handle( tag_name, 0, MB_TYPE_OPAQUE, tags[0], MB_TAG_ANY );
1594  if( MB_SUCCESS != result )
1595  return result;
1596  else if( !tags[0] )
1597  return MB_TAG_NOT_FOUND;
1598 
1599  return exchange_tags( tags, tags, entities );
1600 }

References entities, ErrorCode, exchange_tags(), MB_SUCCESS, MB_TAG_ANY, MB_TAG_NOT_FOUND, MB_TYPE_OPAQUE, mbImpl, and moab::Interface::tag_get_handle().

◆ exchange_tags() [2/3]

ErrorCode moab::ParallelComm::exchange_tags ( const std::vector< Tag > &  src_tags,
const std::vector< Tag > &  dst_tags,
const Range entities 
)

Exchange tags for all shared and ghosted entities. This function should be called collectively over the communicator for this ParallelComm. If this version is called, all ghosted/shared entities should have a value for these tags (or the tags should have default values). If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities, this function must still be called, since it is collective.

Parameters
src_tags: Vector of tag handles to be exchanged
dst_tags: Tag handles to store the tags on the non-owning procs
entities: Entities for which tags are exchanged
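
A hedged sketch of the two-tag form, pushing owned values of src_tag into dst_tag on the non-owning copies; src_tag, dst_tag, and shared_ents are assumed to exist, with both tags the same size:

std::vector< moab::Tag > src( 1, src_tag ), dst( 1, dst_tag );
moab::ErrorCode rval = pcomm->exchange_tags( src, dst, shared_ents );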

Definition at line 7526 of file ParallelComm.cpp.

7529 {
7530  ErrorCode result;
7531  int success;
7532 
7533  myDebug->tprintf( 1, "Entering exchange_tags\n" );
7534 
7535  // Get all procs interfacing to this proc
7536  std::set< unsigned int > exch_procs;
7537  result = get_comm_procs( exch_procs );
7538 
7539  // Post ghost irecv's for all interface procs
7540  // Index requests the same as buffer/sharing procs indices
7541  std::vector< MPI_Request > recv_tag_reqs( 3 * buffProcs.size(), MPI_REQUEST_NULL );
7542  // sent_ack_reqs(buffProcs.size(), MPI_REQUEST_NULL);
7543  std::vector< unsigned int >::iterator sit;
7544  int ind;
7545 
7546  reset_all_buffers();
7547  int incoming = 0;
7548 
7549  for( ind = 0, sit = buffProcs.begin(); sit != buffProcs.end(); ++sit, ind++ )
7550  {
7551  incoming++;
7552  PRINT_DEBUG_IRECV( procConfig.proc_rank(), *sit, remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE,
7553  MB_MESG_TAGS_SIZE, incoming );
7554 
7555  success = MPI_Irecv( remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, *sit,
7556  MB_MESG_TAGS_SIZE, procConfig.proc_comm(), &recv_tag_reqs[3 * ind] );
7557  if( success != MPI_SUCCESS )
7558  {
7559  MB_SET_ERR( MB_FAILURE, "Failed to post irecv in ghost exchange" );
7560  }
7561  }
7562 
7563  // Pack and send tags from this proc to others
7564  // Make sendReqs vector to simplify initialization
7565  sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
7566 
7567  // Take all shared entities if incoming list is empty
7568  Range entities;
7569  if( entities_in.empty() )
7570  std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( entities ) );
7571  else
7572  entities = entities_in;
7573 
7574  int dum_ack_buff;
7575 
7576  for( ind = 0, sit = buffProcs.begin(); sit != buffProcs.end(); ++sit, ind++ )
7577  {
7578  Range tag_ents = entities;
7579 
7580  // Get ents shared by proc *sit
7581  result = filter_pstatus( tag_ents, PSTATUS_SHARED, PSTATUS_AND, *sit );MB_CHK_SET_ERR( result, "Failed pstatus AND check" );
7582 
7583  // Remote nonowned entities
7584  if( !tag_ents.empty() )
7585  {
7586  result = filter_pstatus( tag_ents, PSTATUS_NOT_OWNED, PSTATUS_NOT );MB_CHK_SET_ERR( result, "Failed pstatus NOT check" );
7587  }
7588 
7589  // Pack-send; this also posts receives if store_remote_handles is true
7590  std::vector< Range > tag_ranges;
7591  for( std::vector< Tag >::const_iterator vit = src_tags.begin(); vit != src_tags.end(); ++vit )
7592  {
7593  const void* ptr;
7594  int sz;
7595  if( mbImpl->tag_get_default_value( *vit, ptr, sz ) != MB_SUCCESS )
7596  {
7597  Range tagged_ents;
7598  mbImpl->get_entities_by_type_and_tag( 0, MBMAXTYPE, &*vit, 0, 1, tagged_ents );
7599  tag_ranges.push_back( intersect( tag_ents, tagged_ents ) );
7600  }
7601  else
7602  {
7603  tag_ranges.push_back( tag_ents );
7604  }
7605  }
7606 
7607  // Pack the data
7608  // Reserve space on front for size and for initial buff size
7609  localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
7610 
7611  result = pack_tags( tag_ents, src_tags, dst_tags, tag_ranges, localOwnedBuffs[ind], true, *sit );MB_CHK_SET_ERR( result, "Failed to count buffer in pack_send_tag" );
7612 
7613  // Now send it
7614  result = send_buffer( *sit, localOwnedBuffs[ind], MB_MESG_TAGS_SIZE, sendReqs[3 * ind],
7615  recv_tag_reqs[3 * ind + 2], &dum_ack_buff, incoming );MB_CHK_SET_ERR( result, "Failed to send buffer" );
7616  }
7617 
7618  // Receive/unpack tags
7619  while( incoming )
7620  {
7621  MPI_Status status;
7622  int index_in_recv_requests;
7623  PRINT_DEBUG_WAITANY( recv_tag_reqs, MB_MESG_TAGS_SIZE, procConfig.proc_rank() );
7624  success = MPI_Waitany( 3 * buffProcs.size(), &recv_tag_reqs[0], &index_in_recv_requests, &status );
7625  if( MPI_SUCCESS != success )
7626  {
7627  MB_SET_ERR( MB_FAILURE, "Failed in waitany in tag exchange" );
7628  }
7629  // Processor index in the list is divided by 3
7630  ind = index_in_recv_requests / 3;
7631 
7632  PRINT_DEBUG_RECD( status );
7633 
7634  // OK, received something; decrement incoming counter
7635  incoming--;
7636 
7637  bool done = false;
7638  std::vector< EntityHandle > dum_vec;
7639  result = recv_buffer( MB_MESG_TAGS_SIZE, status, remoteOwnedBuffs[ind],
7640  recv_tag_reqs[3 * ind + 1], // This is for receiving the second message
7641  recv_tag_reqs[3 * ind + 2], // This would be for ack, but it is not
7642  // used; consider removing it
7643  incoming, localOwnedBuffs[ind],
7644  sendReqs[3 * ind + 1], // Send request for sending the second message
7645  sendReqs[3 * ind + 2], // This is for sending the ack
7646  done );MB_CHK_SET_ERR( result, "Failed to resize recv buffer" );
7647  if( done )
7648  {
7649  remoteOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
7650  result = unpack_tags( remoteOwnedBuffs[ind]->buff_ptr, dum_vec, true, buffProcs[ind] );MB_CHK_SET_ERR( result, "Failed to recv-unpack-tag message" );
7651  }
7652  }
7653 
7654  // OK, now wait
7655  if( myDebug->get_verbosity() == 5 )
7656  {
7657  success = MPI_Barrier( procConfig.proc_comm() );
7658  }
7659  else
7660  {
7661  MPI_Status status[3 * MAX_SHARING_PROCS];
7662  success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], status );
7663  }
7664  if( MPI_SUCCESS != success )
7665  {
7666  MB_SET_ERR( MB_FAILURE, "Failure in waitall in tag exchange" );
7667  }
7668 
7669  // If source tag is not equal to destination tag, then
7670  // do local copy for owned entities (communicate w/ self)
7671  assert( src_tags.size() == dst_tags.size() );
7672  if( src_tags != dst_tags )
7673  {
7674  std::vector< unsigned char > data;
7675  Range owned_ents;
7676  if( entities_in.empty() )
7677  std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( owned_ents ) );
7678  else
7679  owned_ents = entities_in;
7680  result = filter_pstatus( owned_ents, PSTATUS_NOT_OWNED, PSTATUS_NOT );MB_CHK_SET_ERR( result, "Failure to get subset of owned entities" );
7681 
7682  if( !owned_ents.empty() )
7683  { // Check this here, otherwise we get
7684  // Unexpected results from get_entities_by_type_and_tag w/ Interface::INTERSECT
7685  for( size_t i = 0; i < src_tags.size(); i++ )
7686  {
7687  if( src_tags[i] == dst_tags[i] ) continue;
7688 
7689  Range tagged_ents( owned_ents );
7690  result = mbImpl->get_entities_by_type_and_tag( 0, MBMAXTYPE, &src_tags[i], 0, 1, tagged_ents,
7691  Interface::INTERSECT );MB_CHK_SET_ERR( result, "get_entities_by_type_and_tag(type == MBMAXTYPE) failed" );
7692 
7693  int sz, size2;
7694  result = mbImpl->tag_get_bytes( src_tags[i], sz );MB_CHK_SET_ERR( result, "tag_get_size failed" );
7695  result = mbImpl->tag_get_bytes( dst_tags[i], size2 );MB_CHK_SET_ERR( result, "tag_get_size failed" );
7696  if( sz != size2 )
7697  {
7698  MB_SET_ERR( MB_FAILURE, "tag sizes don't match" );
7699  }
7700 
7701  data.resize( sz * tagged_ents.size() );
7702  result = mbImpl->tag_get_data( src_tags[i], tagged_ents, &data[0] );MB_CHK_SET_ERR( result, "tag_get_data failed" );
7703  result = mbImpl->tag_set_data( dst_tags[i], tagged_ents, &data[0] );MB_CHK_SET_ERR( result, "tag_set_data failed" );
7704  }
7705  }
7706  }
7707 
7708  myDebug->tprintf( 1, "Exiting exchange_tags" );
7709 
7710  return MB_SUCCESS;
7711 }

References buffProcs, moab::Range::empty(), entities, ErrorCode, filter_pstatus(), get_comm_procs(), moab::Interface::get_entities_by_type_and_tag(), moab::DebugOutput::get_verbosity(), INITIAL_BUFF_SIZE, moab::intersect(), moab::Interface::INTERSECT, localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, moab::MB_MESG_TAGS_SIZE, MB_SET_ERR, MB_SUCCESS, mbImpl, MBMAXTYPE, myDebug, pack_tags(), PRINT_DEBUG_IRECV, PRINT_DEBUG_RECD, PRINT_DEBUG_WAITANY, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, PSTATUS_AND, PSTATUS_NOT, PSTATUS_NOT_OWNED, PSTATUS_SHARED, recv_buffer(), remoteOwnedBuffs, reset_all_buffers(), send_buffer(), sendReqs, sharedEnts, moab::Range::size(), moab::Interface::tag_get_bytes(), moab::Interface::tag_get_data(), moab::Interface::tag_get_default_value(), moab::Interface::tag_set_data(), moab::DebugOutput::tprintf(), and unpack_tags().

Referenced by assign_global_ids(), moab::WriteHDF5Parallel::exchange_file_ids(), moab::NestedRefine::exchange_ghosts(), exchange_tags(), iMOAB_SynchronizeTags(), main(), perform_laplacian_smoothing(), perform_lloyd_relaxation(), and moab::LloydSmoother::perform_smooth().

◆ exchange_tags() [3/3]

ErrorCode moab::ParallelComm::exchange_tags ( Tag  tagh,
const Range entities 
)
inline

Exchange tags for all shared and ghosted entities. This function should be called collectively over the communicator for this ParallelComm. If the entities vector is empty, all shared entities participate in the exchange. If a proc has no owned entities, this function must still be called, since it is collective.

Parameters
tagh: Handle of tag to be exchanged
entities: Entities for which tags are exchanged
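
A hedged one-liner for the handle-based overload; my_tag is an assumed pre-created tag handle:

moab::Range all_shared;  // empty => all shared entities
moab::ErrorCode rval = pcomm->exchange_tags( my_tag, all_shared );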

Definition at line 1602 of file ParallelComm.hpp.

1603 {
1604  // get the tag handle
1605  std::vector< Tag > tags;
1606  tags.push_back( tagh );
1607 
1608  return exchange_tags( tags, tags, entities );
1609 }

References entities, and exchange_tags().

◆ filter_pstatus()

ErrorCode moab::ParallelComm::filter_pstatus ( Range ents,
const unsigned char  pstatus_val,
const unsigned char  op,
int  to_proc = -1,
Range returned_ents = NULL 
)

Filter the entities by pstatus tag. op is one of PSTATUS_AND, PSTATUS_OR, or PSTATUS_NOT; an entity is output if:
AND: all bits set in pstatus_val are also set on the entity
OR: at least one bit set in pstatus_val is also set on the entity
NOT: no bit set in pstatus_val is set on the entity

Results are returned in the input list, unless result_ents is passed in non-null, in which case results are returned in result_ents.

If ents is passed in empty, the filter is done on the shared entities in this pcomm instance, i.e. the contents of sharedEnts.

Parameters
ents  Input entities to filter
pstatus_val  pstatus value to which entities are compared
op  Bitwise operation performed between pstatus values
to_proc  If non-negative and PSTATUS_SHARED is set on pstatus_val, only entities shared with to_proc are returned
returned_ents  If non-null, results of the filter are put in the pointed-to range
Examples
LaplacianSmoother.cpp.

Definition at line 5577 of file ParallelComm.cpp.

5582 {
5583  Range tmp_ents;
5584 
5585  // assert(!ents.empty());
5586  if( ents.empty() )
5587  {
5588  if( returned_ents ) returned_ents->clear();
5589  return MB_SUCCESS;
5590  }
5591 
5592  // Put into tmp_ents any entities which are not owned locally or
5593  // who are already shared with to_proc
5594  std::vector< unsigned char > shared_flags( ents.size() ), shared_flags2;
5595  ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), ents, &shared_flags[0] );MB_CHK_SET_ERR( result, "Failed to get pstatus flag" );
5596  Range::const_iterator rit, hint = tmp_ents.begin();
5597 
5598  int i;
5599  if( op == PSTATUS_OR )
5600  {
5601  for( rit = ents.begin(), i = 0; rit != ents.end(); ++rit, i++ )
5602  {
5603  if( ( ( shared_flags[i] & ~pstat ) ^ shared_flags[i] ) & pstat )
5604  {
5605  hint = tmp_ents.insert( hint, *rit );
5606  if( -1 != to_proc ) shared_flags2.push_back( shared_flags[i] );
5607  }
5608  }
5609  }
5610  else if( op == PSTATUS_AND )
5611  {
5612  for( rit = ents.begin(), i = 0; rit != ents.end(); ++rit, i++ )
5613  {
5614  if( ( shared_flags[i] & pstat ) == pstat )
5615  {
5616  hint = tmp_ents.insert( hint, *rit );
5617  if( -1 != to_proc ) shared_flags2.push_back( shared_flags[i] );
5618  }
5619  }
5620  }
5621  else if( op == PSTATUS_NOT )
5622  {
5623  for( rit = ents.begin(), i = 0; rit != ents.end(); ++rit, i++ )
5624  {
5625  if( !( shared_flags[i] & pstat ) )
5626  {
5627  hint = tmp_ents.insert( hint, *rit );
5628  if( -1 != to_proc ) shared_flags2.push_back( shared_flags[i] );
5629  }
5630  }
5631  }
5632  else
5633  {
5634  assert( false );
5635  return MB_FAILURE;
5636  }
5637 
5638  if( -1 != to_proc )
5639  {
5640  int sharing_procs[MAX_SHARING_PROCS];
5641  std::fill( sharing_procs, sharing_procs + MAX_SHARING_PROCS, -1 );
5642  Range tmp_ents2;
5643  hint = tmp_ents2.begin();
5644 
5645  for( rit = tmp_ents.begin(), i = 0; rit != tmp_ents.end(); ++rit, i++ )
5646  {
5647  // We need to check sharing procs
5648  if( shared_flags2[i] & PSTATUS_MULTISHARED )
5649  {
5650  result = mbImpl->tag_get_data( sharedps_tag(), &( *rit ), 1, sharing_procs );MB_CHK_SET_ERR( result, "Failed to get sharedps tag" );
5651  assert( -1 != sharing_procs[0] );
5652  for( unsigned int j = 0; j < MAX_SHARING_PROCS; j++ )
5653  {
5654  // If to_proc shares this entity, add it to list
5655  if( sharing_procs[j] == to_proc )
5656  {
5657  hint = tmp_ents2.insert( hint, *rit );
5658  }
5659  else if( -1 == sharing_procs[j] )
5660  break;
5661 
5662  sharing_procs[j] = -1;
5663  }
5664  }
5665  else if( shared_flags2[i] & PSTATUS_SHARED )
5666  {
5667  result = mbImpl->tag_get_data( sharedp_tag(), &( *rit ), 1, sharing_procs );MB_CHK_SET_ERR( result, "Failed to get sharedp tag" );
5668  assert( -1 != sharing_procs[0] );
5669  if( sharing_procs[0] == to_proc ) hint = tmp_ents2.insert( hint, *rit );
5670  sharing_procs[0] = -1;
5671  }
5672  else
5673  assert( "should never get here" && false );
5674  }
5675 
5676  tmp_ents.swap( tmp_ents2 );
5677  }
5678 
5679  if( returned_ents )
5680  returned_ents->swap( tmp_ents );
5681  else
5682  ents.swap( tmp_ents );
5683 
5684  return MB_SUCCESS;
5685 }

References moab::Range::begin(), moab::Range::clear(), moab::Range::empty(), moab::Range::end(), ErrorCode, moab::Range::insert(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, PSTATUS_AND, PSTATUS_MULTISHARED, PSTATUS_NOT, PSTATUS_OR, PSTATUS_SHARED, pstatus_tag(), sharedp_tag(), sharedps_tag(), moab::Range::size(), moab::Range::swap(), and moab::Interface::tag_get_data().

Referenced by moab::NCWriteGCRM::collect_mesh_info(), moab::ScdNCWriteHelper::collect_mesh_info(), moab::NCWriteHOMME::collect_mesh_info(), moab::NCWriteMPAS::collect_mesh_info(), create_fine_mesh(), moab::ScdNCHelper::create_quad_coordinate_tag(), moab::WriteHDF5Parallel::exchange_file_ids(), exchange_owned_mesh(), exchange_tags(), moab::WriteHDF5Parallel::gather_interface_meshes(), get_ghosted_entities(), get_max_volume(), get_sent_ents(), get_shared_entities(), hcFilter(), iMOAB_UpdateMeshInfo(), moab::HalfFacetRep::initialize(), moab::LloydSmoother::initialize(), moab::HiReconstruction::initialize(), laplacianFilter(), moab::ReadParallel::load_file(), main(), perform_laplacian_smoothing(), perform_lloyd_relaxation(), moab::LloydSmoother::perform_smooth(), moab::ScdNCHelper::read_scd_variables_to_nonset_allocate(), moab::NCHelperGCRM::read_ucd_variables_to_nonset_allocate(), moab::NCHelperMPAS::read_ucd_variables_to_nonset_allocate(), reduce_tags(), resolve_shared_sets(), send_entities(), and settle_intersection_points().
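
A minimal usage sketch (illustrative; "pc" and "ents" are placeholders for an existing ParallelComm and a populated Range):

    // Keep only the locally owned subset: NOT filters out entities
    // that have the PSTATUS_NOT_OWNED bit set
    ErrorCode rval = pc->filter_pstatus( ents, PSTATUS_NOT_OWNED, PSTATUS_NOT );

    // Entities shared specifically with rank 2, returned in shared_with_2
    Range shared_with_2;
    rval = pc->filter_pstatus( ents, PSTATUS_SHARED, PSTATUS_AND, 2, &shared_with_2 );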

◆ find_existing_entity()

ErrorCode moab::ParallelComm::find_existing_entity ( const bool  is_iface,
const int  owner_p,
const EntityHandle  owner_h,
const int  num_ents,
const EntityHandle connect,
const int  num_connect,
const EntityType  this_type,
std::vector< EntityHandle > &  L2hloc,
std::vector< EntityHandle > &  L2hrem,
std::vector< unsigned int > &  L2p,
EntityHandle new_h 
)
private

given connectivity and type, find an existing entity, if there is one

Definition at line 3047 of file ParallelComm.cpp.

3058 {
3059  new_h = 0;
3060  if( !is_iface && num_ps > 2 )
3061  {
3062  for( unsigned int i = 0; i < L2hrem.size(); i++ )
3063  {
3064  if( L2hrem[i] == owner_h && owner_p == (int)L2p[i] )
3065  {
3066  new_h = L2hloc[i];
3067  return MB_SUCCESS;
3068  }
3069  }
3070  }
3071 
3072  // If we got here and it's a vertex, we don't need to look further
3073  if( MBVERTEX == this_type || !connect || !num_connect ) return MB_SUCCESS;
3074 
3075  Range tmp_range;
3076  ErrorCode result = mbImpl->get_adjacencies( connect, num_connect, CN::Dimension( this_type ), false, tmp_range );MB_CHK_SET_ERR( result, "Failed to get existing entity" );
3077  if( !tmp_range.empty() )
3078  {
3079  // Found a corresponding entity - return target
3080  new_h = *tmp_range.begin();
3081  }
3082  else
3083  {
3084  new_h = 0;
3085  }
3086 
3087  return MB_SUCCESS;
3088 }

References moab::Range::begin(), moab::CN::Dimension(), moab::Range::empty(), ErrorCode, moab::Interface::get_adjacencies(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, and MBVERTEX.

Referenced by unpack_entities(), and unpack_remote_handles().

◆ gather_data()

ErrorCode moab::ParallelComm::gather_data ( Range gather_ents,
Tag tag_handle,
Tag  id_tag = 0,
EntityHandle  gather_set = 0,
int  root_proc_rank = 0 
)

Definition at line 8914 of file ParallelComm.cpp.

8919 {
8920  int dim = mbImpl->dimension_from_handle( *gather_ents.begin() );
8921  int bytes_per_tag = 0;
8922  ErrorCode rval = mbImpl->tag_get_bytes( tag_handle, bytes_per_tag );
8923  if( rval != MB_SUCCESS ) return rval;
8924 
8925  int sz_buffer = sizeof( int ) + gather_ents.size() * ( sizeof( int ) + bytes_per_tag );
8926  void* senddata = malloc( sz_buffer );
8927  ( (int*)senddata )[0] = (int)gather_ents.size();
8928  int* ptr_int = (int*)senddata + 1;
8929  rval = mbImpl->tag_get_data( id_tag, gather_ents, (void*)ptr_int );
8930  if( rval != MB_SUCCESS ) return rval;
8931  ptr_int = (int*)( senddata ) + 1 + gather_ents.size();
8932  rval = mbImpl->tag_get_data( tag_handle, gather_ents, (void*)ptr_int );
8933  if( rval != MB_SUCCESS ) return rval;
8934  std::vector< int > displs( proc_config().proc_size(), 0 );
8935  MPI_Gather( &sz_buffer, 1, MPI_INT, &displs[0], 1, MPI_INT, root_proc_rank, comm() );
8936  std::vector< int > recvcnts( proc_config().proc_size(), 0 );
8937  std::copy( displs.begin(), displs.end(), recvcnts.begin() );
8938  std::partial_sum( displs.begin(), displs.end(), displs.begin() );
8939  std::vector< int >::iterator lastM1 = displs.end() - 1;
8940  std::copy_backward( displs.begin(), lastM1, displs.end() );
8941  // std::copy_backward(displs.begin(), --displs.end(), displs.end());
8942  displs[0] = 0;
8943 
8944  if( (int)rank() != root_proc_rank )
8945  MPI_Gatherv( senddata, sz_buffer, MPI_BYTE, NULL, NULL, NULL, MPI_BYTE, root_proc_rank, comm() );
8946  else
8947  {
8948  Range gents;
8949  mbImpl->get_entities_by_dimension( gather_set, dim, gents );
8950  int recvbuffsz = gents.size() * ( bytes_per_tag + sizeof( int ) ) + proc_config().proc_size() * sizeof( int );
8951  void* recvbuf = malloc( recvbuffsz );
8952  MPI_Gatherv( senddata, sz_buffer, MPI_BYTE, recvbuf, &recvcnts[0], &displs[0], MPI_BYTE, root_proc_rank,
8953  comm() );
8954 
8955  void* gvals = NULL;
8956 
8957  // Test whether gents has multiple sequences
8958  bool multiple_sequences = false;
8959  if( gents.psize() > 1 )
8960  multiple_sequences = true;
8961  else
8962  {
8963  int count;
8964  rval = mbImpl->tag_iterate( tag_handle, gents.begin(), gents.end(), count, gvals );
8965  assert( NULL != gvals );
8966  assert( count > 0 );
8967  if( (size_t)count != gents.size() )
8968  {
8969  multiple_sequences = true;
8970  gvals = NULL;
8971  }
8972  }
8973 
8974  // If gents has multiple sequences, create a temp buffer for gathered values
8975  if( multiple_sequences )
8976  {
8977  gvals = malloc( gents.size() * bytes_per_tag );
8978  assert( NULL != gvals );
8979  }
8980 
8981  for( int i = 0; i != (int)size(); i++ )
8982  {
8983  int numents = *(int*)( ( (char*)recvbuf ) + displs[i] );
8984  int* id_ptr = (int*)( ( (char*)recvbuf ) + displs[i] + sizeof( int ) );
8985  char* val_ptr = (char*)( id_ptr + numents );
8986  for( int j = 0; j != numents; j++ )
8987  {
8988  int idx = id_ptr[j];
8989  memcpy( (char*)gvals + ( idx - 1 ) * bytes_per_tag, val_ptr + j * bytes_per_tag, bytes_per_tag );
8990  }
8991  }
8992 
8993  // Free the receive buffer
8994  free( recvbuf );
8995 
8996  // If gents has multiple sequences, copy tag data (stored in the temp buffer) to each
8997  // sequence separately
8998  if( multiple_sequences )
8999  {
9000  Range::iterator iter = gents.begin();
9001  size_t start_idx = 0;
9002  while( iter != gents.end() )
9003  {
9004  int count;
9005  void* ptr;
9006  rval = mbImpl->tag_iterate( tag_handle, iter, gents.end(), count, ptr );
9007  assert( NULL != ptr );
9008  assert( count > 0 );
9009  memcpy( (char*)ptr, (char*)gvals + start_idx * bytes_per_tag, bytes_per_tag * count );
9010 
9011  iter += count;
9012  start_idx += count;
9013  }
9014  assert( start_idx == gents.size() );
9015 
9016  // Free the temp buffer
9017  free( gvals );
9018  }
9019  }
9020 
9021  // Free the send data
9022  free( senddata );
9023 
9024  return MB_SUCCESS;
9025 }

References moab::Range::begin(), comm(), dim, moab::Interface::dimension_from_handle(), moab::Range::end(), ErrorCode, moab::Interface::get_entities_by_dimension(), MB_SUCCESS, mbImpl, proc_config(), moab::ProcConfig::proc_size(), moab::Range::psize(), rank(), moab::Range::size(), size(), moab::Interface::tag_get_bytes(), moab::Interface::tag_get_data(), and moab::Interface::tag_iterate().
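
A hedged usage sketch (all names are placeholders): the id_tag values are used as 1-based indices into the root's gather set, as the memcpy with (idx - 1) in the listing above shows, so a GLOBAL_ID-style tag is the natural choice:

    // Gather data_tag values from this rank's owned entities onto the
    // gather set owned by rank 0, matching entities through gid_tag
    ErrorCode rval = pc->gather_data( owned_ents, data_tag, gid_tag, gather_set, 0 );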

◆ get_all_pcomm()

ErrorCode moab::ParallelComm::get_all_pcomm ( Interface impl,
std::vector< ParallelComm * > &  list 
)
static

Definition at line 8023 of file ParallelComm.cpp.

8024 {
8025  Tag pc_tag = pcomm_tag( impl, false );
8026  if( 0 == pc_tag ) return MB_TAG_NOT_FOUND;
8027 
8028  const EntityHandle root = 0;
8029  ParallelComm* pc_array[MAX_SHARING_PROCS];
8030  ErrorCode rval = impl->tag_get_data( pc_tag, &root, 1, pc_array );
8031  if( MB_SUCCESS != rval ) return rval;
8032 
8033  for( int i = 0; i < MAX_SHARING_PROCS; i++ )
8034  {
8035  if( pc_array[i] ) list.push_back( pc_array[i] );
8036  }
8037 
8038  return MB_SUCCESS;
8039 }

References ErrorCode, MAX_SHARING_PROCS, MB_SUCCESS, MB_TAG_NOT_FOUND, pcomm_tag(), and moab::Interface::tag_get_data().

Referenced by moab::Core::deinitialize().
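
A minimal sketch (illustrative; "mb" is a placeholder for a moab::Interface*):

    std::vector< ParallelComm* > pcomms;
    ErrorCode rval = ParallelComm::get_all_pcomm( mb, pcomms );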

◆ get_buffers()

int moab::ParallelComm::get_buffers ( int  to_proc,
bool *  is_new = NULL 
)

Get (and possibly allocate) buffers for messages to/from to_proc; returns the index of to_proc in the buffProcs vector. If is_new is non-NULL, sets *is_new to whether a new buffer was allocated. PUBLIC ONLY FOR TESTING!

Definition at line 514 of file ParallelComm.cpp.

515 {
516  int ind = -1;
517  std::vector< unsigned int >::iterator vit = std::find( buffProcs.begin(), buffProcs.end(), to_proc );
518  if( vit == buffProcs.end() )
519  {
520  assert( "shouldn't need buffer to myself" && to_proc != (int)procConfig.proc_rank() );
521  ind = buffProcs.size();
522  buffProcs.push_back( (unsigned int)to_proc );
523  localOwnedBuffs.push_back( new Buffer( INITIAL_BUFF_SIZE ) );
524  remoteOwnedBuffs.push_back( new Buffer( INITIAL_BUFF_SIZE ) );
525  if( is_new ) *is_new = true;
526  }
527  else
528  {
529  ind = vit - buffProcs.begin();
530  if( is_new ) *is_new = false;
531  }
532  assert( ind < MAX_SHARING_PROCS );
533  return ind;
534 }

References buffProcs, INITIAL_BUFF_SIZE, localOwnedBuffs, MAX_SHARING_PROCS, moab::ProcConfig::proc_rank(), procConfig, and remoteOwnedBuffs.

Referenced by check_all_shared_handles(), correct_thin_ghost_layers(), exchange_owned_mesh(), get_interface_procs(), pack_shared_handles(), post_irecv(), recv_entities(), recv_messages(), recv_remote_handle_messages(), send_entities(), send_recv_entities(), moab::ScdInterface::tag_shared_vertices(), and unpack_entities().
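
A minimal sketch (illustrative; "pc" and "nbr_rank" are placeholders):

    bool is_new = false;
    int idx = pc->get_buffers( nbr_rank, &is_new );
    // idx is the position of nbr_rank in buffProcs; is_new tells whether
    // the buffers were just allocated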

◆ get_comm_procs()

ErrorCode moab::ParallelComm::get_comm_procs ( std::set< unsigned int > &  procs)
inline

get processors with which this processor communicates

Definition at line 1633 of file ParallelComm.hpp.

1634 {
1635  ErrorCode result = get_interface_procs( procs );
1636  if( MB_SUCCESS != result ) return result;
1637 
1638  std::copy( buffProcs.begin(), buffProcs.end(), std::inserter( procs, procs.begin() ) );
1639 
1640  return MB_SUCCESS;
1641 }

References buffProcs, ErrorCode, get_interface_procs(), and MB_SUCCESS.

Referenced by exchange_tags(), reduce_tags(), and settle_intersection_points().
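
A minimal sketch (illustrative; "pc" is a placeholder):

    std::set< unsigned int > procs;
    ErrorCode rval = pc->get_comm_procs( procs );  // interface procs plus buffProcs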

◆ get_debug_verbosity()

int moab::ParallelComm::get_debug_verbosity ( )

get the verbosity level of output from this pcomm

Definition at line 8872 of file ParallelComm.cpp.

8873 {
8874  return myDebug->get_verbosity();
8875 }

References moab::DebugOutput::get_verbosity(), and myDebug.

Referenced by augment_default_sets_with_ghosts(), moab::ScdInterface::construct_box(), and moab::ScdInterface::tag_shared_vertices().

◆ get_entityset_local_handle()

ErrorCode moab::ParallelComm::get_entityset_local_handle ( unsigned  owning_rank,
EntityHandle  remote_handle,
EntityHandle local_handle 
) const

Given set owner and handle on owner, find local set handle.

Definition at line 8892 of file ParallelComm.cpp.

8895 {
8896  return sharedSetData->get_local_handle( owning_rank, remote_handle, local_handle );
8897 }

References moab::SharedSetData::get_local_handle(), and sharedSetData.

Referenced by moab::WriteHDF5Parallel::communicate_shared_set_ids().

◆ get_entityset_owner()

ErrorCode moab::ParallelComm::get_entityset_owner ( EntityHandle  entity_set,
unsigned &  owner_rank,
EntityHandle remote_handle = 0 
) const

Get rank of the owner of a shared set. Returns this proc if set is not shared. Optionally returns handle on owning process for shared set.

Definition at line 8882 of file ParallelComm.cpp.

8885 {
8886  if( remote_handle )
8887  return sharedSetData->get_owner( entity_set, owner_rank, *remote_handle );
8888  else
8889  return sharedSetData->get_owner( entity_set, owner_rank );
8890 }

References moab::SharedSetData::get_owner(), and sharedSetData.

Referenced by moab::WriteHDF5Parallel::communicate_shared_set_data(), moab::WriteHDF5Parallel::communicate_shared_set_ids(), and moab::WriteHDF5Parallel::print_set_sharing_data().
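
A minimal sketch (illustrative; "pc" and "eset" are placeholders):

    unsigned owner_rank;
    EntityHandle remote_h;
    ErrorCode rval = pc->get_entityset_owner( eset, owner_rank, &remote_h );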

◆ get_entityset_owners()

ErrorCode moab::ParallelComm::get_entityset_owners ( std::vector< unsigned > &  ranks) const

Get ranks of all processes that own at least one set that is shared with this process. Will include the rank of this process if this process owns any shared set.

Definition at line 8904 of file ParallelComm.cpp.

8905 {
8906  return sharedSetData->get_owning_procs( ranks );
8907 }

References moab::SharedSetData::get_owning_procs(), and sharedSetData.

Referenced by moab::WriteHDF5Parallel::communicate_shared_set_ids().

◆ get_entityset_procs()

ErrorCode moab::ParallelComm::get_entityset_procs ( EntityHandle  entity_set,
std::vector< unsigned > &  ranks 
) const

Get the array of process IDs sharing a set. Passes back an empty list if the set is not shared.

Definition at line 8877 of file ParallelComm.cpp.

8878 {
8879  return sharedSetData->get_sharing_procs( set, ranks );
8880 }

References moab::SharedSetData::get_sharing_procs(), and sharedSetData.

Referenced by moab::WriteHDF5Parallel::communicate_shared_set_data(), moab::WriteHDF5Parallel::communicate_shared_set_ids(), and moab::WriteHDF5Parallel::print_set_sharing_data().

◆ get_ghosted_entities()

ErrorCode moab::ParallelComm::get_ghosted_entities ( int  bridge_dim,
int  ghost_dim,
int  to_proc,
int  num_layers,
int  addl_ents,
Range ghosted_ents 
)
private

For the specified bridge/ghost dimension, to_proc, and number of layers, get the entities to be ghosted, and info on additional procs needing to communicate with to_proc.

Definition at line 7437 of file ParallelComm.cpp.

7443 {
7444  // Get bridge ents on interface(s)
7445  Range from_ents;
7446  ErrorCode result = MB_SUCCESS;
7447  assert( 0 < num_layers );
7448  for( Range::iterator rit = interfaceSets.begin(); rit != interfaceSets.end(); ++rit )
7449  {
7450  if( !is_iface_proc( *rit, to_proc ) ) continue;
7451 
7452  // Get starting "from" entities
7453  if( bridge_dim == -1 )
7454  {
7455  result = mbImpl->get_entities_by_handle( *rit, from_ents );MB_CHK_SET_ERR( result, "Failed to get bridge ents in the set" );
7456  }
7457  else
7458  {
7459  result = mbImpl->get_entities_by_dimension( *rit, bridge_dim, from_ents );MB_CHK_SET_ERR( result, "Failed to get bridge ents in the set" );
7460  }
7461 
7462  // Need to get layers of bridge-adj entities
7463  if( from_ents.empty() ) continue;
7464  result =
7465  MeshTopoUtil( mbImpl ).get_bridge_adjacencies( from_ents, bridge_dim, ghost_dim, ghosted_ents, num_layers );MB_CHK_SET_ERR( result, "Failed to get bridge adjacencies" );
7466  }
7467 
7468  result = add_verts( ghosted_ents );MB_CHK_SET_ERR( result, "Failed to add verts" );
7469 
7470  if( addl_ents )
7471  {
7472  // First get the ents of ghost_dim
7473  Range tmp_ents, tmp_owned, tmp_notowned;
7474  tmp_owned = ghosted_ents.subset_by_dimension( ghost_dim );
7475  if( tmp_owned.empty() ) return result;
7476 
7477  tmp_notowned = tmp_owned;
7478 
7479  // Next, filter by pstatus; can only create adj entities for entities I own
7480  result = filter_pstatus( tmp_owned, PSTATUS_NOT_OWNED, PSTATUS_NOT, -1, &tmp_owned );MB_CHK_SET_ERR( result, "Failed to filter owned entities" );
7481 
7482  tmp_notowned -= tmp_owned;
7483 
7484  // Get edges first
7485  if( 1 == addl_ents || 3 == addl_ents )
7486  {
7487  result = mbImpl->get_adjacencies( tmp_owned, 1, true, tmp_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get edge adjacencies for owned ghost entities" );
7488  result = mbImpl->get_adjacencies( tmp_notowned, 1, false, tmp_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get edge adjacencies for notowned ghost entities" );
7489  }
7490  if( 2 == addl_ents || 3 == addl_ents )
7491  {
7492  result = mbImpl->get_adjacencies( tmp_owned, 2, true, tmp_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get face adjacencies for owned ghost entities" );
7493  result = mbImpl->get_adjacencies( tmp_notowned, 2, false, tmp_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get face adjacencies for notowned ghost entities" );
7494  }
7495 
7496  ghosted_ents.merge( tmp_ents );
7497  }
7498 
7499  return result;
7500 }

References add_verts(), moab::Range::begin(), moab::Range::empty(), moab::Range::end(), ErrorCode, filter_pstatus(), moab::Interface::get_adjacencies(), moab::MeshTopoUtil::get_bridge_adjacencies(), moab::Interface::get_entities_by_dimension(), moab::Interface::get_entities_by_handle(), interfaceSets, is_iface_proc(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, moab::Range::merge(), PSTATUS_NOT, PSTATUS_NOT_OWNED, moab::Range::subset_by_dimension(), and moab::Interface::UNION.

Referenced by get_sent_ents().

◆ get_global_part_count()

ErrorCode moab::ParallelComm::get_global_part_count ( int &  count_out) const

Definition at line 8182 of file ParallelComm.cpp.

8183 {
8184  count_out = globalPartCount;
8185  return count_out < 0 ? MB_FAILURE : MB_SUCCESS;
8186 }

References globalPartCount, and MB_SUCCESS.

◆ get_id()

int moab::ParallelComm::get_id ( ) const
inline

Get ID used to reference this PCOMM instance.

Definition at line 70 of file ParallelComm.hpp.

71  {
72  return pcommID;
73  }

References pcommID.

Referenced by iMOAB_RegisterApplication(), and DeformMeshRemap::read_file().

◆ get_iface_entities()

ErrorCode moab::ParallelComm::get_iface_entities ( int  other_proc,
int  dim,
Range iface_ents 
)

Get entities on interfaces shared with another proc.

Parameters
other_proc  Other proc sharing the interface
dim  Dimension of entities to return, -1 if all dims
iface_ents  Returned entities

Definition at line 7275 of file ParallelComm.cpp.

7276 {
7277  Range iface_sets;
7278  ErrorCode result = MB_SUCCESS;
7279 
7280  for( Range::iterator rit = interfaceSets.begin(); rit != interfaceSets.end(); ++rit )
7281  {
7282  if( -1 != other_proc && !is_iface_proc( *rit, other_proc ) ) continue;
7283 
7284  if( -1 == dim )
7285  {
7286  result = mbImpl->get_entities_by_handle( *rit, iface_ents );MB_CHK_SET_ERR( result, "Failed to get entities in iface set" );
7287  }
7288  else
7289  {
7290  result = mbImpl->get_entities_by_dimension( *rit, dim, iface_ents );MB_CHK_SET_ERR( result, "Failed to get entities in iface set" );
7291  }
7292  }
7293 
7294  return MB_SUCCESS;
7295 }

References moab::Range::begin(), dim, moab::Range::end(), ErrorCode, moab::Interface::get_entities_by_dimension(), moab::Interface::get_entities_by_handle(), interfaceSets, is_iface_proc(), MB_CHK_SET_ERR, MB_SUCCESS, and mbImpl.

Referenced by get_sent_ents().
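
A minimal sketch (illustrative; "pc" and "other" are placeholders):

    Range iface_verts;
    // dim = 0: shared vertices only; pass -1 for all dimensions
    ErrorCode rval = pc->get_iface_entities( other, 0, iface_verts );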

◆ get_interface_procs()

ErrorCode moab::ParallelComm::get_interface_procs ( std::set< unsigned int > &  iface_procs,
const bool  get_buffs = false 
)

get processors with which this processor shares an interface

Get processors with which this processor communicates; sets are sorted by processor.

Definition at line 5440 of file ParallelComm.cpp.

5441 {
5442  // Make sure the sharing procs vector is empty
5443  procs_set.clear();
5444 
5445  // Pre-load vector of single-proc tag values
5446  unsigned int i, j;
5447  std::vector< int > iface_proc( interfaceSets.size() );
5448  ErrorCode result = mbImpl->tag_get_data( sharedp_tag(), interfaceSets, &iface_proc[0] );MB_CHK_SET_ERR( result, "Failed to get iface_proc for iface sets" );
5449 
5450  // Get sharing procs either from single-proc vector or by getting
5451  // multi-proc tag value
5452  int tmp_iface_procs[MAX_SHARING_PROCS];
5453  std::fill( tmp_iface_procs, tmp_iface_procs + MAX_SHARING_PROCS, -1 );
5454  Range::iterator rit;
5455  for( rit = interfaceSets.begin(), i = 0; rit != interfaceSets.end(); ++rit, i++ )
5456  {
5457  if( -1 != iface_proc[i] )
5458  {
5459  assert( iface_proc[i] != (int)procConfig.proc_rank() );
5460  procs_set.insert( (unsigned int)iface_proc[i] );
5461  }
5462  else
5463  {
5464  // Get the sharing_procs tag
5465  result = mbImpl->tag_get_data( sharedps_tag(), &( *rit ), 1, tmp_iface_procs );MB_CHK_SET_ERR( result, "Failed to get iface_procs for iface set" );
5466  for( j = 0; j < MAX_SHARING_PROCS; j++ )
5467  {
5468  if( -1 != tmp_iface_procs[j] && tmp_iface_procs[j] != (int)procConfig.proc_rank() )
5469  procs_set.insert( (unsigned int)tmp_iface_procs[j] );
5470  else if( -1 == tmp_iface_procs[j] )
5471  {
5472  std::fill( tmp_iface_procs, tmp_iface_procs + j, -1 );
5473  break;
5474  }
5475  }
5476  }
5477  }
5478 
5479  if( get_buffs )
5480  {
5481  for( std::set< unsigned int >::iterator sit = procs_set.begin(); sit != procs_set.end(); ++sit )
5482  get_buffers( *sit );
5483  }
5484 
5485  return MB_SUCCESS;
5486 }

References moab::Range::begin(), moab::Range::end(), ErrorCode, get_buffers(), interfaceSets, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, moab::ProcConfig::proc_rank(), procConfig, sharedp_tag(), sharedps_tag(), moab::Range::size(), and moab::Interface::tag_get_data().

Referenced by get_comm_procs(), resolve_shared_ents(), and moab::ParallelMergeMesh::TagSharedElements().
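
A minimal sketch (illustrative; "pc" is a placeholder):

    std::set< unsigned int > iface_procs;
    // true: also pre-allocate message buffers for each interface proc
    ErrorCode rval = pc->get_interface_procs( iface_procs, true );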

◆ get_interface_sets()

ErrorCode moab::ParallelComm::get_interface_sets ( EntityHandle  part,
Range iface_sets_out,
int *  adj_part_id = 0 
)

Definition at line 8310 of file ParallelComm.cpp.

8311 {
8312  // FIXME : assumes one part per processor.
8313  // Need to store part iface sets as children to implement
8314  // this correctly.
8315  iface_sets_out = interface_sets();
8316 
8317  if( adj_part_id )
8318  {
8319  int part_ids[MAX_SHARING_PROCS], num_parts;
8320  Range::iterator i = iface_sets_out.begin();
8321  while( i != iface_sets_out.end() )
8322  {
8323  unsigned char pstat;
8324  ErrorCode rval = get_sharing_data( *i, part_ids, NULL, pstat, num_parts );
8325  if( MB_SUCCESS != rval ) return rval;
8326 
8327  if( std::find( part_ids, part_ids + num_parts, *adj_part_id ) - part_ids != num_parts )
8328  ++i;
8329  else
8330  i = iface_sets_out.erase( i );
8331  }
8332  }
8333 
8334  return MB_SUCCESS;
8335 }

References moab::Range::begin(), moab::Range::end(), moab::Range::erase(), ErrorCode, get_sharing_data(), interface_sets(), MAX_SHARING_PROCS, and MB_SUCCESS.

Referenced by get_part_neighbor_ids().

◆ get_local_handles() [1/3]

ErrorCode moab::ParallelComm::get_local_handles ( const Range remote_handles,
Range local_handles,
const std::vector< EntityHandle > &  new_ents 
)
private

same as above except puts results in range

Definition at line 3090 of file ParallelComm.cpp.

3093 {
3094  std::vector< EntityHandle > rh_vec;
3095  rh_vec.reserve( remote_handles.size() );
3096  std::copy( remote_handles.begin(), remote_handles.end(), std::back_inserter( rh_vec ) );
3097  ErrorCode result = get_local_handles( &rh_vec[0], remote_handles.size(), new_ents );
3098  std::copy( rh_vec.begin(), rh_vec.end(), range_inserter( local_handles ) );
3099  return result;
3100 }

References moab::Range::begin(), moab::Range::end(), ErrorCode, get_local_handles(), and moab::Range::size().

◆ get_local_handles() [2/3]

ErrorCode moab::ParallelComm::get_local_handles ( EntityHandle from_vec,
int  num_ents,
const Range new_ents 
)
private

Goes through from_vec and, for any handle of type MBMAXTYPE, replaces it with the new_ents value at the index corresponding to the id of the entity in from_vec.

Definition at line 3102 of file ParallelComm.cpp.

3103 {
3104  std::vector< EntityHandle > tmp_ents;
3105  std::copy( new_ents.begin(), new_ents.end(), std::back_inserter( tmp_ents ) );
3106  return get_local_handles( from_vec, num_ents, tmp_ents );
3107 }

References moab::Range::begin(), and moab::Range::end().

Referenced by get_local_handles(), unpack_entities(), unpack_sets(), and unpack_tags().

◆ get_local_handles() [3/3]

ErrorCode moab::ParallelComm::get_local_handles ( EntityHandle from_vec,
int  num_ents,
const std::vector< EntityHandle > &  new_ents 
)
private

same as above except gets new_ents from vector

Definition at line 3109 of file ParallelComm.cpp.

3112 {
3113  for( int i = 0; i < num_ents; i++ )
3114  {
3115  if( TYPE_FROM_HANDLE( from_vec[i] ) == MBMAXTYPE )
3116  {
3117  assert( ID_FROM_HANDLE( from_vec[i] ) < (int)new_ents.size() );
3118  from_vec[i] = new_ents[ID_FROM_HANDLE( from_vec[i] )];
3119  }
3120  }
3121 
3122  return MB_SUCCESS;
3123 }

References moab::ID_FROM_HANDLE(), MB_SUCCESS, MBMAXTYPE, and moab::TYPE_FROM_HANDLE().

◆ get_moab()

◆ get_owned_sets()

ErrorCode moab::ParallelComm::get_owned_sets ( unsigned  owning_rank,
Range sets_out 
) const

Get shared sets owned by process with specified rank.

Definition at line 8909 of file ParallelComm.cpp.

8910 {
8911  return sharedSetData->get_shared_sets( owning_rank, sets_out );
8912 }

References moab::SharedSetData::get_shared_sets(), and sharedSetData.

Referenced by moab::WriteHDF5Parallel::communicate_shared_set_ids(), and moab::WriteHDF5Parallel::create_meshset_tables().

◆ get_owner()

ErrorCode moab::ParallelComm::get_owner ( EntityHandle  entity,
int &  owner 
)
inline

Return the rank of the entity owner.

Definition at line 1643 of file ParallelComm.hpp.

1644 {
1645  EntityHandle tmp_handle;
1646  return get_owner_handle( entity, owner, tmp_handle );
1647 }

References get_owner_handle().

Referenced by moab::WriteHDF5Parallel::exchange_file_ids(), iMOAB_GetElementOwnership(), iMOAB_GetVertexOwnership(), iMOAB_GetVisibleElementsInfo(), and pack_shared_handles().
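
A minimal sketch (illustrative; "pc" and "ent" are placeholders):

    int owner;
    ErrorCode rval = pc->get_owner( ent, owner );
    bool locally_owned = ( MB_SUCCESS == rval && owner == (int)pc->proc_config().proc_rank() );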

◆ get_owner_handle()

ErrorCode moab::ParallelComm::get_owner_handle ( EntityHandle  entity,
int &  owner,
EntityHandle handle 
)

Return the owner processor and handle of a given entity.

Return the rank of the entity owner.

Definition at line 8147 of file ParallelComm.cpp.

8148 {
8149  unsigned char pstat;
8150  int sharing_procs[MAX_SHARING_PROCS];
8151  EntityHandle sharing_handles[MAX_SHARING_PROCS];
8152 
8153  ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), &entity, 1, &pstat );MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" );
8154  if( !( pstat & PSTATUS_NOT_OWNED ) )
8155  {
8156  owner = proc_config().proc_rank();
8157  handle = entity;
8158  }
8159  else if( pstat & PSTATUS_MULTISHARED )
8160  {
8161  result = mbImpl->tag_get_data( sharedps_tag(), &entity, 1, sharing_procs );MB_CHK_SET_ERR( result, "Failed to get sharedps tag data" );
8162  owner = sharing_procs[0];
8163  result = mbImpl->tag_get_data( sharedhs_tag(), &entity, 1, sharing_handles );MB_CHK_SET_ERR( result, "Failed to get sharedhs tag data" );
8164  handle = sharing_handles[0];
8165  }
8166  else if( pstat & PSTATUS_SHARED )
8167  {
8168  result = mbImpl->tag_get_data( sharedp_tag(), &entity, 1, sharing_procs );MB_CHK_SET_ERR( result, "Failed to get sharedp tag data" );
8169  owner = sharing_procs[0];
8170  result = mbImpl->tag_get_data( sharedh_tag(), &entity, 1, sharing_handles );MB_CHK_SET_ERR( result, "Failed to get sharedh tag data" );
8171  handle = sharing_handles[0];
8172  }
8173  else
8174  {
8175  owner = -1;
8176  handle = 0;
8177  }
8178 
8179  return MB_SUCCESS;
8180 }

References ErrorCode, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, proc_config(), moab::ProcConfig::proc_rank(), PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, pstatus_tag(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), and moab::Interface::tag_get_data().

Referenced by get_owner().

◆ get_owning_part()

ErrorCode moab::ParallelComm::get_owning_part ( EntityHandle  entity,
int &  owning_part_id_out,
EntityHandle owning_handle = 0 
)

Definition at line 8337 of file ParallelComm.cpp.

8338 {
8339  // FIXME : assumes one part per proc, and therefore part_id == rank
8340 
8341  // If entity is not shared, then we're the owner.
8342  unsigned char pstat;
8343  ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), &handle, 1, &pstat );MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" );
8344  if( !( pstat & PSTATUS_NOT_OWNED ) )
8345  {
8346  owning_part_id = proc_config().proc_rank();
8347  if( remote_handle ) *remote_handle = handle;
8348  return MB_SUCCESS;
8349  }
8350 
8351  // If entity is shared with one other proc, then
8352  // sharedp_tag will contain a positive value.
8353  result = mbImpl->tag_get_data( sharedp_tag(), &handle, 1, &owning_part_id );MB_CHK_SET_ERR( result, "Failed to get sharedp tag data" );
8354  if( owning_part_id != -1 )
8355  {
8356  // Done?
8357  if( !remote_handle ) return MB_SUCCESS;
8358 
8359  // Get handles on remote processors (and this one)
8360  return mbImpl->tag_get_data( sharedh_tag(), &handle, 1, remote_handle );
8361  }
8362 
8363  // If here, then the entity is shared with at least two other processors.
8364  // Get the list from the sharedps_tag
8365  const void* part_id_list = 0;
8366  result = mbImpl->tag_get_by_ptr( sharedps_tag(), &handle, 1, &part_id_list );
8367  if( MB_SUCCESS != result ) return result;
8368  owning_part_id = ( (const int*)part_id_list )[0];
8369 
8370  // Done?
8371  if( !remote_handle ) return MB_SUCCESS;
8372 
8373  // Get remote handles
8374  const void* handle_list = 0;
8375  result = mbImpl->tag_get_by_ptr( sharedhs_tag(), &handle, 1, &handle_list );
8376  if( MB_SUCCESS != result ) return result;
8377 
8378  *remote_handle = ( (const EntityHandle*)handle_list )[0];
8379  return MB_SUCCESS;
8380 }

References ErrorCode, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, proc_config(), moab::ProcConfig::proc_rank(), PSTATUS_NOT_OWNED, pstatus_tag(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), moab::Interface::tag_get_by_ptr(), and moab::Interface::tag_get_data().

◆ get_part_entities()

ErrorCode moab::ParallelComm::get_part_entities ( Range ents,
int  dim = -1 
)

return all the entities in parts owned locally

Definition at line 8126 of file ParallelComm.cpp.

8127 {
8128  ErrorCode result;
8129 
8130  for( Range::iterator rit = partitionSets.begin(); rit != partitionSets.end(); ++rit )
8131  {
8132  Range tmp_ents;
8133  if( -1 == dim )
8134  result = mbImpl->get_entities_by_handle( *rit, tmp_ents, true );
8135  else
8136  result = mbImpl->get_entities_by_dimension( *rit, dim, tmp_ents, true );
8137 
8138  if( MB_SUCCESS != result ) return result;
8139  ents.merge( tmp_ents );
8140  }
8141 
8142  return MB_SUCCESS;
8143 }

References moab::Range::begin(), dim, moab::Range::end(), ErrorCode, moab::Interface::get_entities_by_dimension(), moab::Interface::get_entities_by_handle(), MB_SUCCESS, mbImpl, moab::Range::merge(), and partitionSets.

Referenced by moab::Coupler::initialize_tree(), and main().
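
A minimal sketch (illustrative; "pc" is a placeholder):

    Range part_elems;
    ErrorCode rval = pc->get_part_entities( part_elems, 3 );  // dim = -1 returns all dimensions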

◆ get_part_handle()

ErrorCode moab::ParallelComm::get_part_handle ( int  id,
EntityHandle handle_out 
) const

Definition at line 8202 of file ParallelComm.cpp.

8203 {
8204  // FIXME: assumes only 1 local part
8205  if( (unsigned)id != proc_config().proc_rank() ) return MB_ENTITY_NOT_FOUND;
8206  handle_out = partition_sets().front();
8207  return MB_SUCCESS;
8208 }

References moab::Range::front(), MB_ENTITY_NOT_FOUND, MB_SUCCESS, partition_sets(), and proc_config().

Referenced by assign_entities_part(), and remove_entities_part().

◆ get_part_id()

ErrorCode moab::ParallelComm::get_part_id ( EntityHandle  part,
int &  id_out 
) const

Definition at line 8195 of file ParallelComm.cpp.

8196 {
8197  // FIXME: assumes only 1 local part
8198  id_out = proc_config().proc_rank();
8199  return MB_SUCCESS;
8200 }

References MB_SUCCESS, proc_config(), and moab::ProcConfig::proc_rank().

Referenced by get_part_neighbor_ids().

◆ get_part_neighbor_ids()

ErrorCode moab::ParallelComm::get_part_neighbor_ids ( EntityHandle  part,
int  neighbors_out[MAX_SHARING_PROCS],
int &  num_neighbors_out 
)

Definition at line 8276 of file ParallelComm.cpp.

8279 {
8280  ErrorCode rval;
8281  Range iface;
8282  rval = get_interface_sets( part, iface );
8283  if( MB_SUCCESS != rval ) return rval;
8284 
8285  num_neighbors_out = 0;
8286  int n, j = 0;
8287  int tmp[MAX_SHARING_PROCS] = { 0 }, curr[MAX_SHARING_PROCS] = { 0 };
8288  int* parts[2] = { neighbors_out, tmp };
8289  for( Range::iterator i = iface.begin(); i != iface.end(); ++i )
8290  {
8291  unsigned char pstat;
8292  rval = get_sharing_data( *i, curr, NULL, pstat, n );
8293  if( MB_SUCCESS != rval ) return rval;
8294  std::sort( curr, curr + n );
8295  assert( num_neighbors_out < MAX_SHARING_PROCS );
8296  int* k = std::set_union( parts[j], parts[j] + num_neighbors_out, curr, curr + n, parts[1 - j] );
8297  j = 1 - j;
8298  num_neighbors_out = k - parts[j];
8299  }
8300  if( parts[j] != neighbors_out ) std::copy( parts[j], parts[j] + num_neighbors_out, neighbors_out );
8301 
8302  // Remove input part from list
8303  int id;
8304  rval = get_part_id( part, id );
8305  if( MB_SUCCESS == rval )
8306  num_neighbors_out = std::remove( neighbors_out, neighbors_out + num_neighbors_out, id ) - neighbors_out;
8307  return rval;
8308 }

References ErrorCode, get_interface_sets(), get_part_id(), get_sharing_data(), iface, MAX_SHARING_PROCS, and MB_SUCCESS.

◆ get_part_owner()

ErrorCode moab::ParallelComm::get_part_owner ( int  part_id,
int &  owner_out 
) const

Definition at line 8188 of file ParallelComm.cpp.

8189 {
8190  // FIXME: assumes only 1 local part
8191  owner = part_id;
8192  return MB_SUCCESS;
8193 }

References MB_SUCCESS.

◆ get_partitioning()

EntityHandle moab::ParallelComm::get_partitioning ( ) const
inline

Definition at line 725 of file ParallelComm.hpp.

726  {
727  return partitioningSet;
728  }

References partitioningSet.

Referenced by create_part(), and destroy_part().

◆ get_pcomm() [1/2]

ParallelComm * moab::ParallelComm::get_pcomm ( Interface impl,
const int  index 
)
static

get the indexed pcomm object from the interface

◆ get_pcomm() [2/2]

ParallelComm * moab::ParallelComm::get_pcomm ( Interface impl,
EntityHandle  partitioning,
const MPI_Comm *  comm = 0 
)
static

Get the ParallelComm instance associated with a partition handle. Will create a ParallelComm instance if (a) one does not already exist and (b) a valid MPI_Comm is passed.

Definition at line 8042 of file ParallelComm.cpp.

8043 {
8044  ErrorCode rval;
8045  ParallelComm* result = 0;
8046 
8047  Tag prtn_tag;
8048  rval =
8049  impl->tag_get_handle( PARTITIONING_PCOMM_TAG_NAME, 1, MB_TYPE_INTEGER, prtn_tag, MB_TAG_SPARSE | MB_TAG_CREAT );
8050  if( MB_SUCCESS != rval ) return 0;
8051 
8052  int pcomm_id;
8053  rval = impl->tag_get_data( prtn_tag, &prtn, 1, &pcomm_id );
8054  if( MB_SUCCESS == rval )
8055  {
8056  result = get_pcomm( impl, pcomm_id );
8057  }
8058  else if( MB_TAG_NOT_FOUND == rval && comm )
8059  {
8060  result = new ParallelComm( impl, *comm, &pcomm_id );
8061  if( !result ) return 0;
8062  result->set_partitioning( prtn );
8063 
8064  rval = impl->tag_set_data( prtn_tag, &prtn, 1, &pcomm_id );
8065  if( MB_SUCCESS != rval )
8066  {
8067  delete result;
8068  result = 0;
8069  }
8070  }
8071 
8072  return result;
8073 }

References comm(), ErrorCode, get_pcomm(), MB_SUCCESS, MB_TAG_CREAT, MB_TAG_NOT_FOUND, MB_TAG_SPARSE, MB_TYPE_INTEGER, ParallelComm(), moab::PARTITIONING_PCOMM_TAG_NAME, set_partitioning(), moab::Interface::tag_get_data(), moab::Interface::tag_get_handle(), and moab::Interface::tag_set_data().
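
A minimal sketch (illustrative; "mb" and "prtn_set" are placeholders for the Interface* and the partition set handle):

    MPI_Comm comm = MPI_COMM_WORLD;
    ParallelComm* pc = ParallelComm::get_pcomm( mb, prtn_set, &comm );
    if( !pc ) { /* lookup failed and no instance could be created */ }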

◆ get_proc_nvecs()

ErrorCode moab::ParallelComm::get_proc_nvecs ( int  resolve_dim,
int  shared_dim,
Range skin_ents,
std::map< std::vector< int >, std::vector< EntityHandle > > &  proc_nvecs 
)
private

Definition at line 5156 of file ParallelComm.cpp.

5160 {
5161  // Set sharing procs tags on other skin ents
5162  ErrorCode result;
5163  const EntityHandle* connect;
5164  int num_connect;
5165  std::set< int > sharing_procs;
5166  std::vector< EntityHandle > dum_connect;
5167  std::vector< int > sp_vec;
5168 
5169  for( int d = 3; d > 0; d-- )
5170  {
5171  if( resolve_dim == d ) continue;
5172 
5173  for( Range::iterator rit = skin_ents[d].begin(); rit != skin_ents[d].end(); ++rit )
5174  {
5175  // Get connectivity
5176  result = mbImpl->get_connectivity( *rit, connect, num_connect, false, &dum_connect );MB_CHK_SET_ERR( result, "Failed to get connectivity on non-vertex skin entities" );
5177 
5178  int op = ( resolve_dim < shared_dim ? Interface::UNION : Interface::INTERSECT );
5179  result = get_sharing_data( connect, num_connect, sharing_procs, op );MB_CHK_SET_ERR( result, "Failed to get sharing data in get_proc_nvecs" );
5180  if( sharing_procs.empty() ||
5181  ( sharing_procs.size() == 1 && *sharing_procs.begin() == (int)procConfig.proc_rank() ) )
5182  continue;
5183 
5184  // Need to specify sharing data correctly for entities or they will
5185  // end up in a different interface set than corresponding vertices
5186  if( sharing_procs.size() == 2 )
5187  {
5188  std::set< int >::iterator it = sharing_procs.find( proc_config().proc_rank() );
5189  assert( it != sharing_procs.end() );
5190  sharing_procs.erase( it );
5191  }
5192 
5193  // Intersection is the owning proc(s) for this skin ent
5194  sp_vec.clear();
5195  std::copy( sharing_procs.begin(), sharing_procs.end(), std::back_inserter( sp_vec ) );
5196  assert( sp_vec.size() != 2 );
5197  proc_nvecs[sp_vec].push_back( *rit );
5198  }
5199  }
5200 
5201 #ifndef NDEBUG
5202  // Shouldn't be any repeated entities in any of the vectors in proc_nvecs
5203  for( std::map< std::vector< int >, std::vector< EntityHandle > >::iterator mit = proc_nvecs.begin();
5204  mit != proc_nvecs.end(); ++mit )
5205  {
5206  std::vector< EntityHandle > tmp_vec = ( mit->second );
5207  std::sort( tmp_vec.begin(), tmp_vec.end() );
5208  std::vector< EntityHandle >::iterator vit = std::unique( tmp_vec.begin(), tmp_vec.end() );
5209  assert( vit == tmp_vec.end() );
5210  }
5211 #endif
5212 
5213  return MB_SUCCESS;
5214 }

References moab::Range::end(), ErrorCode, moab::Interface::get_connectivity(), get_sharing_data(), moab::Interface::INTERSECT, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, proc_config(), moab::ProcConfig::proc_rank(), procConfig, and moab::Interface::UNION.

Referenced by create_interface_sets(), resolve_shared_ents(), and moab::ParallelMergeMesh::TagSharedElements().

◆ get_pstatus()

ErrorCode moab::ParallelComm::get_pstatus ( EntityHandle  entity,
unsigned char &  pstatus_val 
)

Get the parallel status of an entity. Returns the parallel status of the entity in pstatus_val.

Parameters
entity  The entity being queried
pstatus_val  Parallel status of the entity

Definition at line 5488 of file ParallelComm.cpp.

5489 {
5490  ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), &entity, 1, &pstatus_val );MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" );
5491  return result;
5492 }

References ErrorCode, MB_CHK_SET_ERR, mbImpl, pstatus_tag(), and moab::Interface::tag_get_data().

Referenced by check_my_shared_handles().
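
A minimal sketch (illustrative; "pc" and "ent" are placeholders):

    unsigned char pstat;
    ErrorCode rval = pc->get_pstatus( ent, pstat );
    if( MB_SUCCESS == rval && ( pstat & PSTATUS_SHARED ) )
    {
        // entity is shared with at least one other rank
    }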

◆ get_pstatus_entities()

ErrorCode moab::ParallelComm::get_pstatus_entities ( int  dim,
unsigned char  pstatus_val,
Range pstatus_ents 
)

Get entities with the given pstatus bit(s) set. Returns any entities whose pstatus tag value v satisfies (v & pstatus_val).

Parameters
dim  Dimension of entities to be returned, or -1 if any
pstatus_val  pstatus value of desired entities
pstatus_ents  Entities returned from function

Definition at line 5494 of file ParallelComm.cpp.

5495 {
5496  Range ents;
5497  ErrorCode result;
5498 
5499  if( -1 == dim )
5500  {
5501  result = mbImpl->get_entities_by_handle( 0, ents );MB_CHK_SET_ERR( result, "Failed to get all entities" );
5502  }
5503  else
5504  {
5505  result = mbImpl->get_entities_by_dimension( 0, dim, ents );MB_CHK_SET_ERR( result, "Failed to get entities of dimension " << dim );
5506  }
5507 
5508  std::vector< unsigned char > pstatus( ents.size() );
5509  result = mbImpl->tag_get_data( pstatus_tag(), ents, &pstatus[0] );MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" );
5510  Range::iterator rit = ents.begin();
5511  int i = 0;
5512  if( pstatus_val )
5513  {
5514  for( ; rit != ents.end(); i++, ++rit )
5515  {
5516  if( pstatus[i] & pstatus_val && ( -1 == dim || mbImpl->dimension_from_handle( *rit ) == dim ) )
5517  pstatus_ents.insert( *rit );
5518  }
5519  }
5520  else
5521  {
5522  for( ; rit != ents.end(); i++, ++rit )
5523  {
5524  if( !pstatus[i] && ( -1 == dim || mbImpl->dimension_from_handle( *rit ) == dim ) )
5525  pstatus_ents.insert( *rit );
5526  }
5527  }
5528 
5529  return MB_SUCCESS;
5530 }

References moab::Range::begin(), dim, moab::Interface::dimension_from_handle(), moab::Range::end(), ErrorCode, moab::Interface::get_entities_by_dimension(), moab::Interface::get_entities_by_handle(), moab::Range::insert(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, pstatus_tag(), moab::Range::size(), and moab::Interface::tag_get_data().

Referenced by main().
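
A minimal sketch (illustrative; "pc" is a placeholder, and PSTATUS_GHOST is assumed from MOAB's parallel status flags):

    Range ghosts;
    // All 3D entities with the ghost bit set; pstatus_val == 0 would
    // instead select entities with no pstatus bits set
    ErrorCode rval = pc->get_pstatus_entities( 3, PSTATUS_GHOST, ghosts );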

◆ get_remote_handles() [1/4]

ErrorCode moab::ParallelComm::get_remote_handles ( const bool  store_remote_handles,
const Range from_range,
EntityHandle to_vec,
int  to_proc,
const std::vector< EntityHandle > &  new_ents 
)
private

same as other version, except packs range into vector

Definition at line 1974 of file ParallelComm.cpp.

1979 {
1980  // NOTE: THIS IMPLEMENTATION IS JUST LIKE THE VECTOR-BASED VERSION, NO REUSE
1981  // AT THIS TIME, SO IF YOU FIX A BUG IN THIS VERSION, IT MAY BE IN THE
1982  // OTHER VERSION TOO!!!
1983  if( from_range.empty() ) return MB_SUCCESS;
1984 
1985  if( !store_remote_handles )
1986  {
1987  int err;
1988  // In this case, substitute position in new_ents list
1989  Range::iterator rit;
1990  unsigned int i;
1991  for( rit = from_range.begin(), i = 0; rit != from_range.end(); ++rit, i++ )
1992  {
1993  int ind = std::lower_bound( new_ents.begin(), new_ents.end(), *rit ) - new_ents.begin();
1994  assert( new_ents[ind] == *rit );
1995  to_vec[i] = CREATE_HANDLE( MBMAXTYPE, ind, err );
1996  assert( to_vec[i] != 0 && !err && -1 != ind );
1997  }
1998  }
1999  else
2000  {
2001  Tag shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag;
2002  ErrorCode result = get_shared_proc_tags( shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag );MB_CHK_SET_ERR( result, "Failed to get shared proc tags" );
2003 
2004  // Get single-proc destination handles and shared procs
2005  std::vector< int > sharing_procs( from_range.size() );
2006  result = mbImpl->tag_get_data( shh_tag, from_range, to_vec );MB_CHK_SET_ERR( result, "Failed to get shared handle tag for remote_handles" );
2007  result = mbImpl->tag_get_data( shp_tag, from_range, &sharing_procs[0] );MB_CHK_SET_ERR( result, "Failed to get sharing proc tag in remote_handles" );
2008  for( unsigned int j = 0; j < from_range.size(); j++ )
2009  {
2010  if( to_vec[j] && sharing_procs[j] != to_proc ) to_vec[j] = 0;
2011  }
2012 
2013  EntityHandle tmp_handles[MAX_SHARING_PROCS];
2014  int tmp_procs[MAX_SHARING_PROCS];
2015  // Go through results, and for 0-valued ones, look for multiple shared proc
2016  Range::iterator rit;
2017  unsigned int i;
2018  for( rit = from_range.begin(), i = 0; rit != from_range.end(); ++rit, i++ )
2019  {
2020  if( !to_vec[i] )
2021  {
2022  result = mbImpl->tag_get_data( shhs_tag, &( *rit ), 1, tmp_handles );
2023  if( MB_SUCCESS == result )
2024  {
2025  result = mbImpl->tag_get_data( shps_tag, &( *rit ), 1, tmp_procs );MB_CHK_SET_ERR( result, "Failed to get sharedps tag data" );
2026  for( int j = 0; j < MAX_SHARING_PROCS; j++ )
2027  if( tmp_procs[j] == to_proc )
2028  {
2029  to_vec[i] = tmp_handles[j];
2030  break;
2031  }
2032  }
2033 
2034  if( !to_vec[i] )
2035  {
2036  int j = std::lower_bound( new_ents.begin(), new_ents.end(), *rit ) - new_ents.begin();
2037  if( (int)new_ents.size() == j )
2038  {
2039  MB_SET_ERR( MB_FAILURE, "Failed to find new entity in send list" );
2040  }
2041  int err;
2042  to_vec[i] = CREATE_HANDLE( MBMAXTYPE, j, err );
2043  if( err )
2044  {
2045  MB_SET_ERR( MB_FAILURE, "Failed to create handle in remote_handles" );
2046  }
2047  }
2048  }
2049  }
2050  }
2051 
2052  return MB_SUCCESS;
2053 }

References moab::Range::begin(), moab::CREATE_HANDLE(), moab::Range::empty(), moab::Range::end(), ErrorCode, get_shared_proc_tags(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, mbImpl, MBMAXTYPE, moab::Range::size(), and moab::Interface::tag_get_data().

◆ get_remote_handles() [2/4]

ErrorCode moab::ParallelComm::get_remote_handles ( const bool  store_remote_handles,
const Range from_range,
Range to_range,
int  to_proc,
const std::vector< EntityHandle > &  new_ents 
)
private

same as other version, except from_range and to_range should be different here

Definition at line 2055 of file ParallelComm.cpp.

2060 {
2061  std::vector< EntityHandle > to_vector( from_range.size() );
2062 
2063  ErrorCode result = get_remote_handles( store_remote_handles, from_range, &to_vector[0], to_proc, new_ents );MB_CHK_SET_ERR( result, "Failed to get remote handles" );
2064  std::copy( to_vector.begin(), to_vector.end(), range_inserter( to_range ) );
2065  return result;
2066 }

References ErrorCode, get_remote_handles(), MB_CHK_SET_ERR, and moab::Range::size().

◆ get_remote_handles() [3/4]

ErrorCode moab::ParallelComm::get_remote_handles ( const bool  store_remote_handles,
EntityHandle from_vec,
EntityHandle to_vec_tmp,
int  num_ents,
int  to_proc,
const std::vector< EntityHandle > &  new_ents 
)
private

Replace handles in from_vec with corresponding handles on to_proc (by checking shared[p/h]_tag and shared[p/h]s_tag); if there is no remote handle and new_ents is non-null, substitute instead CREATE_HANDLE(MBMAXTYPE, index), where index is the handle's position in new_ents.

Definition at line 1875 of file ParallelComm.cpp.

1881 {
1882  // NOTE: THIS IMPLEMENTATION IS JUST LIKE THE RANGE-BASED VERSION, NO REUSE
1883  // AT THIS TIME, SO IF YOU FIX A BUG IN THIS VERSION, IT MAY BE IN THE
1884  // OTHER VERSION TOO!!!
1885  if( 0 == num_ents ) return MB_SUCCESS;
1886 
1887  // Use a local destination ptr in case we're doing an in-place copy
1888  std::vector< EntityHandle > tmp_vector;
1889  EntityHandle* to_vec = to_vec_tmp;
1890  if( to_vec == from_vec )
1891  {
1892  tmp_vector.resize( num_ents );
1893  to_vec = &tmp_vector[0];
1894  }
1895 
1896  if( !store_remote_handles )
1897  {
1898  int err;
1899  // In this case, substitute position in new_ents list
1900  for( int i = 0; i < num_ents; i++ )
1901  {
1902  int ind = std::lower_bound( new_ents.begin(), new_ents.end(), from_vec[i] ) - new_ents.begin();
1903  assert( new_ents[ind] == from_vec[i] );
1904  to_vec[i] = CREATE_HANDLE( MBMAXTYPE, ind, err );
1905  assert( to_vec[i] != 0 && !err && -1 != ind );
1906  }
1907  }
1908  else
1909  {
1910  Tag shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag;
1911  ErrorCode result = get_shared_proc_tags( shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag );MB_CHK_SET_ERR( result, "Failed to get shared proc tags" );
1912 
1913  // Get single-proc destination handles and shared procs
1914  std::vector< int > sharing_procs( num_ents );
1915  result = mbImpl->tag_get_data( shh_tag, from_vec, num_ents, to_vec );MB_CHK_SET_ERR( result, "Failed to get shared handle tag for remote_handles" );
1916  result = mbImpl->tag_get_data( shp_tag, from_vec, num_ents, &sharing_procs[0] );MB_CHK_SET_ERR( result, "Failed to get sharing proc tag in remote_handles" );
1917  for( int j = 0; j < num_ents; j++ )
1918  {
1919  if( to_vec[j] && sharing_procs[j] != to_proc ) to_vec[j] = 0;
1920  }
1921 
1922  EntityHandle tmp_handles[MAX_SHARING_PROCS];
1923  int tmp_procs[MAX_SHARING_PROCS];
1924  int i;
1925  // Go through results, and for 0-valued ones, look for multiple shared proc
1926  for( i = 0; i < num_ents; i++ )
1927  {
1928  if( !to_vec[i] )
1929  {
1930  result = mbImpl->tag_get_data( shps_tag, from_vec + i, 1, tmp_procs );
1931  if( MB_SUCCESS == result )
1932  {
1933  for( int j = 0; j < MAX_SHARING_PROCS; j++ )
1934  {
1935  if( -1 == tmp_procs[j] )
1936  break;
1937  else if( tmp_procs[j] == to_proc )
1938  {
1939  result = mbImpl->tag_get_data( shhs_tag, from_vec + i, 1, tmp_handles );MB_CHK_SET_ERR( result, "Failed to get sharedhs tag data" );
1940  to_vec[i] = tmp_handles[j];
1941  assert( to_vec[i] );
1942  break;
1943  }
1944  }
1945  }
1946  if( !to_vec[i] )
1947  {
1948  int j = std::lower_bound( new_ents.begin(), new_ents.end(), from_vec[i] ) - new_ents.begin();
1949  if( (int)new_ents.size() == j )
1950  {
1951  std::cout << "Failed to find new entity in send list, proc " << procConfig.proc_rank()
1952  << std::endl;
1953  for( int k = 0; k < num_ents; k++ )
1954  std::cout << k << ": " << from_vec[k] << " " << to_vec[k] << std::endl;
1955  MB_SET_ERR( MB_FAILURE, "Failed to find new entity in send list" );
1956  }
1957  int err;
1958  to_vec[i] = CREATE_HANDLE( MBMAXTYPE, j, err );
1959  if( err )
1960  {
1961  MB_SET_ERR( MB_FAILURE, "Failed to create handle in remote_handles" );
1962  }
1963  }
1964  }
1965  }
1966  }
1967 
1968  // memcpy over results if from_vec and to_vec are the same
1969  if( to_vec_tmp == from_vec ) memcpy( from_vec, to_vec, num_ents * sizeof( EntityHandle ) );
1970 
1971  return MB_SUCCESS;
1972 }

References moab::CREATE_HANDLE(), ErrorCode, get_shared_proc_tags(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, mbImpl, MBMAXTYPE, moab::ProcConfig::proc_rank(), procConfig, and moab::Interface::tag_get_data().

◆ get_remote_handles() [4/4]

ErrorCode moab::ParallelComm::get_remote_handles ( EntityHandle local_vec,
EntityHandle rem_vec,
int  num_ents,
int  to_proc 
)

Definition at line 1064 of file ParallelComm.cpp.

1065 {
1066  ErrorCode error;
1067  std::vector< EntityHandle > newents;
1068  error = get_remote_handles( true, local_vec, rem_vec, num_ents, to_proc, newents );MB_CHK_ERR( error );
1069 
1070  return MB_SUCCESS;
1071 }

References moab::error(), ErrorCode, MB_CHK_ERR, and MB_SUCCESS.

Referenced by check_my_shared_handles(), get_remote_handles(), pack_entity_seq(), pack_sets(), pack_tag(), and settle_intersection_points().

◆ get_sent_ents()

ErrorCode moab::ParallelComm::get_sent_ents ( const bool  is_iface,
const int  bridge_dim,
const int  ghost_dim,
const int  num_layers,
const int  addl_ents,
Range sent_ents,
Range allsent,
TupleList entprocs 
)
private

Definition at line 6518 of file ParallelComm.cpp.

6526 {
6527  ErrorCode result;
6528  unsigned int ind;
6529  std::vector< unsigned int >::iterator proc_it;
6530  Range tmp_range;
6531 
6532  // Done in a separate loop over procs because sometimes later procs
6533  // need to add info to earlier procs' messages
6534  for( ind = 0, proc_it = buffProcs.begin(); proc_it != buffProcs.end(); ++proc_it, ind++ )
6535  {
6536  if( !is_iface )
6537  {
6538  result =
6539  get_ghosted_entities( bridge_dim, ghost_dim, buffProcs[ind], num_layers, addl_ents, sent_ents[ind] );MB_CHK_SET_ERR( result, "Failed to get ghost layers" );
6540  }
6541  else
6542  {
6543  result = get_iface_entities( buffProcs[ind], -1, sent_ents[ind] );MB_CHK_SET_ERR( result, "Failed to get interface layers" );
6544  }
6545 
6546  // Filter out entities already shared with destination
6547  tmp_range.clear();
6548  result = filter_pstatus( sent_ents[ind], PSTATUS_SHARED, PSTATUS_AND, buffProcs[ind], &tmp_range );MB_CHK_SET_ERR( result, "Failed to filter on owner" );
6549  if( !tmp_range.empty() ) sent_ents[ind] = subtract( sent_ents[ind], tmp_range );
6550 
6551  allsent.merge( sent_ents[ind] );
6552  }
6553 
6554  //===========================================
6555  // Need to get procs each entity is sent to
6556  //===========================================
6557 
6558  // Get the total # of proc/handle pairs
6559  int npairs = 0;
6560  for( ind = 0; ind < buffProcs.size(); ind++ )
6561  npairs += sent_ents[ind].size();
6562 
6563  // Allocate a TupleList of that size
6564  entprocs.initialize( 1, 0, 1, 0, npairs );
6565  entprocs.enableWriteAccess();
6566 
6567  // Put the proc/handle pairs in the list
6568  for( ind = 0, proc_it = buffProcs.begin(); proc_it != buffProcs.end(); ++proc_it, ind++ )
6569  {
6570  for( Range::iterator rit = sent_ents[ind].begin(); rit != sent_ents[ind].end(); ++rit )
6571  {
6572  entprocs.vi_wr[entprocs.get_n()] = *proc_it;
6573  entprocs.vul_wr[entprocs.get_n()] = *rit;
6574  entprocs.inc_n();
6575  }
6576  }
6577  // Sort by handle
6578  moab::TupleList::buffer sort_buffer;
6579  sort_buffer.buffer_init( npairs );
6580  entprocs.sort( 1, &sort_buffer );
6581 
6582  entprocs.disableWriteAccess();
6583  sort_buffer.reset();
6584 
6585  return MB_SUCCESS;
6586 }

References buffProcs, moab::Range::clear(), moab::TupleList::disableWriteAccess(), moab::Range::empty(), moab::TupleList::enableWriteAccess(), moab::Range::end(), ErrorCode, filter_pstatus(), get_ghosted_entities(), get_iface_entities(), moab::TupleList::get_n(), moab::TupleList::inc_n(), moab::TupleList::initialize(), MB_CHK_SET_ERR, MB_SUCCESS, moab::Range::merge(), PSTATUS_AND, PSTATUS_SHARED, moab::TupleList::buffer::reset(), size(), moab::TupleList::sort(), moab::subtract(), moab::TupleList::vi_wr, and moab::TupleList::vul_wr.

Referenced by exchange_ghost_cells().

◆ get_shared_entities()

ErrorCode moab::ParallelComm::get_shared_entities ( int  other_proc,
Range shared_ents,
int  dim = -1,
const bool  iface = false,
const bool  owned_filter = false 
)

Get shared entities of specified dimension. If other_proc is -1, any shared entities are returned. If dim is -1, entities of all dimensions on the interface are returned.

Parameters
other_proc  Rank of processor for which interface entities are requested
shared_ents  Entities returned from function
dim  Dimension of interface entities requested
iface  If true, return only entities on the interface
owned_filter  If true, return only owned shared entities

Definition at line 8801 of file ParallelComm.cpp.

8806 {
8807  shared_ents.clear();
8808  ErrorCode result = MB_SUCCESS;
8809 
8810  // Dimension
8811  if( -1 != dim )
8812  {
8813  DimensionPair dp = CN::TypeDimensionMap[dim];
8814  Range dum_range;
8815  std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( dum_range ) );
8816  shared_ents.merge( dum_range.lower_bound( dp.first ), dum_range.upper_bound( dp.second ) );
8817  }
8818  else
8819  std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( shared_ents ) );
8820 
8821  // Filter by iface
8822  if( iface )
8823  {
8824  result = filter_pstatus( shared_ents, PSTATUS_INTERFACE, PSTATUS_AND );MB_CHK_SET_ERR( result, "Failed to filter by iface" );
8825  }
8826 
8827  // Filter by owned
8828  if( owned_filter )
8829  {
8830  result = filter_pstatus( shared_ents, PSTATUS_NOT_OWNED, PSTATUS_NOT );MB_CHK_SET_ERR( result, "Failed to filter by owned" );
8831  }
8832 
8833  // Filter by proc
8834  if( -1 != other_proc )
8835  {
8836  result = filter_pstatus( shared_ents, PSTATUS_SHARED, PSTATUS_AND, other_proc );MB_CHK_SET_ERR( result, "Failed to filter by proc" );
8837  }
8838 
8839  return result;
8840 }

References moab::Range::clear(), dim, ErrorCode, filter_pstatus(), iface, moab::Range::lower_bound(), MB_CHK_SET_ERR, MB_SUCCESS, moab::Range::merge(), PSTATUS_AND, PSTATUS_INTERFACE, PSTATUS_NOT, PSTATUS_NOT_OWNED, PSTATUS_SHARED, sharedEnts, moab::CN::TypeDimensionMap, and moab::Range::upper_bound().

Referenced by check_my_shared_handles(), moab::ParCommGraph::compute_partition(), main(), perform_laplacian_smoothing(), and perform_lloyd_relaxation().
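A minimal usage sketch (assuming a ParallelComm instance named pcomm on a mesh whose shared entities have already been resolved; the variable names are illustrative, not part of the API):

    Range all_shared, owned_iface_verts;
    // All entities shared with any processor, of any dimension
    ErrorCode rval = pcomm->get_shared_entities( -1, all_shared );
    // Only owned vertices on the interface shared with rank 0
    if( MB_SUCCESS == rval ) rval = pcomm->get_shared_entities( 0, owned_iface_verts, 0, true, true );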

◆ get_shared_proc_tags()

ErrorCode moab::ParallelComm::get_shared_proc_tags ( Tag &  sharedp_tag,
Tag &  sharedps_tag,
Tag &  sharedh_tag,
Tag &  sharedhs_tag,
Tag &  pstatus_tag 
)
inline

return the tags used to indicate shared procs and handles

Definition at line 1574 of file ParallelComm.hpp.

1579 {
1580  sharedp = sharedp_tag();
1581  sharedps = sharedps_tag();
1582  sharedh = sharedh_tag();
1583  sharedhs = sharedhs_tag();
1584  pstatus = pstatus_tag();
1585 
1586  return MB_SUCCESS;
1587 }

References MB_SUCCESS, pstatus_tag(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), and sharedps_tag().

Referenced by create_interface_sets(), get_remote_handles(), resolve_shared_ents(), and tag_shared_verts().
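A usage sketch (pcomm is an assumed existing ParallelComm instance):

    Tag sharedp, sharedps, sharedh, sharedhs, pstatus;
    ErrorCode rval = pcomm->get_shared_proc_tags( sharedp, sharedps, sharedh, sharedhs, pstatus );
    // The returned tag handles can then be read directly with Interface::tag_get_data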

◆ get_shared_sets()

ErrorCode moab::ParallelComm::get_shared_sets ( Range &  result) const

Get all shared sets.

Definition at line 8899 of file ParallelComm.cpp.

8900 {
8901  return sharedSetData->get_shared_sets( result );
8902 }

References moab::SharedSetData::get_shared_sets(), and sharedSetData.

Referenced by moab::WriteHDF5Parallel::create_meshset_tables().
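For example (a sketch; pcomm is an assumed existing instance):

    Range shared_sets;
    ErrorCode rval = pcomm->get_shared_sets( shared_sets );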

◆ get_sharing_data() [1/4]

ErrorCode moab::ParallelComm::get_sharing_data ( const EntityHandle *  entities,
int  num_entities,
std::set< int > &  procs,
int  op = Interface::INTERSECT 
)
inline

Get the intersection or union of all sharing processors. The processor set is cleared as part of this function.

Parameters
entities  Entity list ptr
num_entities  Number of entities
procs  Processors returned
op  Either Interface::UNION or Interface::INTERSECT

Definition at line 1673 of file ParallelComm.hpp.

1677 {
1678  Range dum_range;
1679  // cast away constness 'cuz the range is passed as const
1680  EntityHandle* ents_cast = const_cast< EntityHandle* >( entities );
1681  std::copy( ents_cast, ents_cast + num_entities, range_inserter( dum_range ) );
1682  return get_sharing_data( dum_range, procs, op );
1683 }

References entities, and get_sharing_data().

◆ get_sharing_data() [2/4]

ErrorCode moab::ParallelComm::get_sharing_data ( const EntityHandle  entity,
int *  ps,
EntityHandle *  hs,
unsigned char &  pstat,
int &  num_ps 
)
inline

Get the shared processors/handles for an entity. Same as the other version, but with int num_ps.

Parameters
entity  Entity being queried
ps  Pointer to sharing proc data
hs  Pointer to shared proc handle data
pstat  Reference to pstatus data returned from this function

Definition at line 1685 of file ParallelComm.hpp.

1690 {
1691  unsigned int dum_ps;
1692  ErrorCode result = get_sharing_data( entity, ps, hs, pstat, dum_ps );
1693  if( MB_SUCCESS == result ) num_ps = dum_ps;
1694  return result;
1695 }

References ErrorCode, get_sharing_data(), and MB_SUCCESS.

◆ get_sharing_data() [3/4]

ErrorCode moab::ParallelComm::get_sharing_data ( const EntityHandle  entity,
int *  ps,
EntityHandle *  hs,
unsigned char &  pstat,
unsigned int &  num_ps 
)

Get the shared processors/handles for an entity. Arrays must be large enough to receive data for all sharing procs. Does not include this proc if only shared with one other proc.

Parameters
entity  Entity being queried
ps  Pointer to sharing proc data
hs  Pointer to shared proc handle data
pstat  Reference to pstatus data returned from this function

Definition at line 3007 of file ParallelComm.cpp.

3012 {
3013  ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), &entity, 1, &pstat );MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" );
3014  if( pstat & PSTATUS_MULTISHARED )
3015  {
3016  result = mbImpl->tag_get_data( sharedps_tag(), &entity, 1, ps );MB_CHK_SET_ERR( result, "Failed to get sharedps tag data" );
3017  if( hs )
3018  {
3019  result = mbImpl->tag_get_data( sharedhs_tag(), &entity, 1, hs );MB_CHK_SET_ERR( result, "Failed to get sharedhs tag data" );
3020  }
3021  num_ps = std::find( ps, ps + MAX_SHARING_PROCS, -1 ) - ps;
3022  }
3023  else if( pstat & PSTATUS_SHARED )
3024  {
3025  result = mbImpl->tag_get_data( sharedp_tag(), &entity, 1, ps );MB_CHK_SET_ERR( result, "Failed to get sharedp tag data" );
3026  if( hs )
3027  {
3028  result = mbImpl->tag_get_data( sharedh_tag(), &entity, 1, hs );MB_CHK_SET_ERR( result, "Failed to get sharedh tag data" );
3029  hs[1] = 0;
3030  }
3031  // Initialize past end of data
3032  ps[1] = -1;
3033  num_ps = 1;
3034  }
3035  else
3036  {
3037  ps[0] = -1;
3038  if( hs ) hs[0] = 0;
3039  num_ps = 0;
3040  }
3041 
3042  assert( MAX_SHARING_PROCS >= num_ps );
3043 
3044  return MB_SUCCESS;
3045 }

References ErrorCode, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, PSTATUS_MULTISHARED, PSTATUS_SHARED, pstatus_tag(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), and moab::Interface::tag_get_data().

Referenced by augment_default_sets_with_ghosts(), build_sharedhps_list(), check_clean_iface(), check_local_shared(), moab::ParCommGraph::compute_partition(), correct_thin_ghost_layers(), create_interface_sets(), delete_entities(), exchange_owned_meshs(), get_interface_sets(), get_part_neighbor_ids(), get_proc_nvecs(), get_sharing_data(), list_entities(), pack_shared_handles(), update_remote_data(), and update_remote_data_old().
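A sketch of typical use, with arrays sized to MAX_SHARING_PROCS as required above (ent is an assumed shared EntityHandle):

    int ps[MAX_SHARING_PROCS];
    EntityHandle hs[MAX_SHARING_PROCS];
    unsigned char pstat;
    unsigned int num_ps;
    ErrorCode rval = pcomm->get_sharing_data( ent, ps, hs, pstat, num_ps );
    for( unsigned int i = 0; i < num_ps; i++ )
        std::cout << "proc " << ps[i] << " handle " << hs[i] << std::endl;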

◆ get_sharing_data() [4/4]

ErrorCode moab::ParallelComm::get_sharing_data ( const Range &  entities,
std::set< int > &  procs,
int  op = Interface::INTERSECT 
)

Get the intersection or union of all sharing processors. Same as the previous variant, but with a range as input.

Definition at line 2960 of file ParallelComm.cpp.

2961 {
2962  // Get the union or intersection of sharing data for multiple entities
2963  ErrorCode result;
2964  int sp2[MAX_SHARING_PROCS];
2965  int num_ps;
2966  unsigned char pstat;
2967  std::set< int > tmp_procs;
2968  procs.clear();
2969 
2970  for( Range::const_iterator rit = entities.begin(); rit != entities.end(); ++rit )
2971  {
2972  // Get sharing procs
2973  result = get_sharing_data( *rit, sp2, NULL, pstat, num_ps );MB_CHK_SET_ERR( result, "Failed to get sharing data in get_sharing_data" );
2974  if( !( pstat & PSTATUS_SHARED ) && Interface::INTERSECT == operation )
2975  {
2976  procs.clear();
2977  return MB_SUCCESS;
2978  }
2979 
2980  if( rit == entities.begin() )
2981  {
2982  std::copy( sp2, sp2 + num_ps, std::inserter( procs, procs.begin() ) );
2983  }
2984  else
2985  {
2986  std::sort( sp2, sp2 + num_ps );
2987  tmp_procs.clear();
2988  if( Interface::UNION == operation )
2989  std::set_union( procs.begin(), procs.end(), sp2, sp2 + num_ps,
2990  std::inserter( tmp_procs, tmp_procs.end() ) );
2991  else if( Interface::INTERSECT == operation )
2992  std::set_intersection( procs.begin(), procs.end(), sp2, sp2 + num_ps,
2993  std::inserter( tmp_procs, tmp_procs.end() ) );
2994  else
2995  {
2996  assert( "Unknown operation." && false );
2997  return MB_FAILURE;
2998  }
2999  procs.swap( tmp_procs );
3000  }
3001  if( Interface::INTERSECT == operation && procs.empty() ) return MB_SUCCESS;
3002  }
3003 
3004  return MB_SUCCESS;
3005 }

References entities, ErrorCode, get_sharing_data(), moab::Interface::INTERSECT, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, PSTATUS_SHARED, and moab::Interface::UNION.
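For example, to find the processors that share every entity in a range (a sketch; ents is an assumed Range of shared entities):

    std::set< int > common_procs;
    ErrorCode rval = pcomm->get_sharing_data( ents, common_procs, Interface::INTERSECT );
    // common_procs comes back empty if no single processor shares all of ents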

◆ get_sharing_parts()

ErrorCode moab::ParallelComm::get_sharing_parts ( EntityHandle  entity,
int  part_ids_out[MAX_SHARING_PROCS],
int &  num_part_ids_out,
EntityHandle  remote_handles[MAX_SHARING_PROCS] = 0 
)

Definition at line 8382 of file ParallelComm.cpp.

8386 {
8387  // FIXME : assumes one part per proc, and therefore part_id == rank
8388 
8389  // If entity is not shared, then we're the owner.
8390  unsigned char pstat;
8391  ErrorCode result = mbImpl->tag_get_data( pstatus_tag(), &entity, 1, &pstat );MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" );
8392  if( !( pstat & PSTATUS_SHARED ) )
8393  {
8394  part_ids_out[0] = proc_config().proc_rank();
8395  if( remote_handles ) remote_handles[0] = entity;
8396  num_part_ids_out = 1;
8397  return MB_SUCCESS;
8398  }
8399 
8400  // If entity is shared with one other proc, then
8401  // sharedp_tag will contain a positive value.
8402  result = mbImpl->tag_get_data( sharedp_tag(), &entity, 1, part_ids_out );MB_CHK_SET_ERR( result, "Failed to get sharedp tag data" );
8403  if( part_ids_out[0] != -1 )
8404  {
8405  num_part_ids_out = 2;
8406  part_ids_out[1] = proc_config().proc_rank();
8407 
8408  // Done?
8409  if( !remote_handles ) return MB_SUCCESS;
8410 
8411  // Get handles on remote processors (and this one)
8412  remote_handles[1] = entity;
8413  return mbImpl->tag_get_data( sharedh_tag(), &entity, 1, remote_handles );
8414  }
8415 
8416  // If here, then the entity is shared with at least two other processors.
8417  // Get the list from the sharedps_tag
8418  result = mbImpl->tag_get_data( sharedps_tag(), &entity, 1, part_ids_out );
8419  if( MB_SUCCESS != result ) return result;
8420  // Count number of valid (positive) entries in sharedps_tag
8421  for( num_part_ids_out = 0; num_part_ids_out < MAX_SHARING_PROCS && part_ids_out[num_part_ids_out] >= 0;
8422  num_part_ids_out++ )
8423  ;
8424  // part_ids_out[num_part_ids_out++] = proc_config().proc_rank();
8425 #ifndef NDEBUG
8426  int my_idx = std::find( part_ids_out, part_ids_out + num_part_ids_out, proc_config().proc_rank() ) - part_ids_out;
8427  assert( my_idx < num_part_ids_out );
8428 #endif
8429 
8430  // Done?
8431  if( !remote_handles ) return MB_SUCCESS;
8432 
8433  // Get remote handles
8434  result = mbImpl->tag_get_data( sharedhs_tag(), &entity, 1, remote_handles );
8435  // remote_handles[num_part_ids_out - 1] = entity;
8436  assert( remote_handles[my_idx] == entity );
8437 
8438  return result;
8439 }

References ErrorCode, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, proc_config(), moab::ProcConfig::proc_rank(), PSTATUS_SHARED, pstatus_tag(), sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), and moab::Interface::tag_get_data().

◆ get_tag_send_list()

ErrorCode moab::ParallelComm::get_tag_send_list ( const Range &  all_entities,
std::vector< Tag > &  all_tags,
std::vector< Range > &  tag_ranges 
)
private

Get list of tags for which to exchange data.

Get tags and entities for which to exchange tag data. This function was originally part of 'pack_tags' requested with the 'all_possible_tags' parameter.

Parameters
all_entities  Input. The set of entities for which data is to be communicated.
all_tags  Output. Populated with the handles of tags to be sent.
tag_ranges  Output. For each corresponding tag in all_tags, the subset of 'all_entities' for which a tag value has been set.

Definition at line 3650 of file ParallelComm.cpp.

3653 {
3654  std::vector< Tag > tmp_tags;
3655  ErrorCode result = mbImpl->tag_get_tags( tmp_tags );MB_CHK_SET_ERR( result, "Failed to get tags in pack_tags" );
3656 
3657  std::vector< Tag >::iterator tag_it;
3658  for( tag_it = tmp_tags.begin(); tag_it != tmp_tags.end(); ++tag_it )
3659  {
3660  std::string tag_name;
3661  result = mbImpl->tag_get_name( *tag_it, tag_name );
3662  if( tag_name.c_str()[0] == '_' && tag_name.c_str()[1] == '_' ) continue;
3663 
3664  Range tmp_range;
3665  result = ( *tag_it )->get_tagged_entities( sequenceManager, tmp_range );MB_CHK_SET_ERR( result, "Failed to get entities for tag in pack_tags" );
3666  tmp_range = intersect( tmp_range, whole_range );
3667 
3668  if( tmp_range.empty() ) continue;
3669 
3670  // OK, we'll be sending this tag
3671  all_tags.push_back( *tag_it );
3672  tag_ranges.push_back( Range() );
3673  tag_ranges.back().swap( tmp_range );
3674  }
3675 
3676  return MB_SUCCESS;
3677 }

References moab::Range::empty(), ErrorCode, moab::intersect(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, sequenceManager, moab::Interface::tag_get_name(), and moab::Interface::tag_get_tags().

Referenced by pack_buffer().

◆ initialize()

void moab::ParallelComm::initialize ( )
private

Definition at line 341 of file ParallelComm.cpp.

342 {
343  Core* core = dynamic_cast< Core* >( mbImpl );
344  sequenceManager = core->sequence_manager();
345  mbImpl->query_interface( errorHandler );
346 
347  // Initialize MPI, if necessary
348  int flag = 1;
349  int retval = MPI_Initialized( &flag );
350  if( MPI_SUCCESS != retval || !flag )
351  {
352  int argc = 0;
353  char** argv = NULL;
354 
355  // mpi not initialized yet - initialize here
356  retval = MPI_Init( &argc, &argv );
357  assert( MPI_SUCCESS == retval );
358  }
359 
360  // Reserve space for vectors
361  buffProcs.reserve( MAX_SHARING_PROCS );
362  localOwnedBuffs.reserve( MAX_SHARING_PROCS );
363  remoteOwnedBuffs.reserve( MAX_SHARING_PROCS );
364 
365  pcommID = add_pcomm( this );
366 
367  if( !myDebug )
368  {
369  myDebug = new DebugOutput( "ParallelComm", std::cerr );
370  myDebug->set_rank( procConfig.proc_rank() );
371  }
372 }

References add_pcomm(), buffProcs, errorHandler, localOwnedBuffs, MAX_SHARING_PROCS, mbImpl, myDebug, pcommID, moab::ProcConfig::proc_rank(), procConfig, moab::Interface::query_interface(), remoteOwnedBuffs, moab::Core::sequence_manager(), sequenceManager, and moab::DebugOutput::set_rank().

Referenced by ParallelComm().

◆ interface_sets() [1/2]

Range& moab::ParallelComm::interface_sets ( )
inline

Definition at line 673 of file ParallelComm.hpp.

674  {
675  return interfaceSets;
676  }

References interfaceSets.

Referenced by check_clean_iface(), moab::NCHelperScrip::create_mesh(), and get_interface_sets().

◆ interface_sets() [2/2]

const Range& moab::ParallelComm::interface_sets ( ) const
inline

Definition at line 677 of file ParallelComm.hpp.

678  {
679  return interfaceSets;
680  }

References interfaceSets.

◆ is_iface_proc()

bool moab::ParallelComm::is_iface_proc ( EntityHandle  this_set,
int  to_proc 
)
private

returns true if the set is an interface shared with to_proc

Definition at line 5556 of file ParallelComm.cpp.

5557 {
5558  int sharing_procs[MAX_SHARING_PROCS];
5559  std::fill( sharing_procs, sharing_procs + MAX_SHARING_PROCS, -1 );
5560  ErrorCode result = mbImpl->tag_get_data( sharedp_tag(), &this_set, 1, sharing_procs );
5561  if( MB_SUCCESS == result && to_proc == sharing_procs[0] ) return true;
5562 
5563  result = mbImpl->tag_get_data( sharedps_tag(), &this_set, 1, sharing_procs );
5564  if( MB_SUCCESS != result ) return false;
5565 
5566  for( int i = 0; i < MAX_SHARING_PROCS; i++ )
5567  {
5568  if( to_proc == sharing_procs[i] )
5569  return true;
5570  else if( -1 == sharing_procs[i] )
5571  return false;
5572  }
5573 
5574  return false;
5575 }

References ErrorCode, MAX_SHARING_PROCS, MB_SUCCESS, mbImpl, sharedp_tag(), sharedps_tag(), and moab::Interface::tag_get_data().

Referenced by get_ghosted_entities(), and get_iface_entities().

◆ list_entities() [1/2]

ErrorCode moab::ParallelComm::list_entities ( const EntityHandle *  ents,
int  num_ents 
)

Definition at line 2573 of file ParallelComm.cpp.

2574 {
2575  if( NULL == ents )
2576  {
2577  Range shared_ents;
2578  std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( shared_ents ) );
2579  shared_ents.print( "Shared entities:\n" );
2580  return MB_SUCCESS;
2581  }
2582 
2583  unsigned char pstat;
2584  EntityHandle tmp_handles[MAX_SHARING_PROCS];
2585  int tmp_procs[MAX_SHARING_PROCS];
2586  unsigned int num_ps;
2587  ErrorCode result;
2588 
2589  for( int i = 0; i < num_ents; i++ )
2590  {
2591  result = mbImpl->list_entities( ents + i, 1 );MB_CHK_ERR( result );
2592  double coords[3];
2593  result = mbImpl->get_coords( ents + i, 1, coords );
2594  std::cout << " coords: " << coords[0] << " " << coords[1] << " " << coords[2] << "\n";
2595 
2596  result = get_sharing_data( ents[i], tmp_procs, tmp_handles, pstat, num_ps );MB_CHK_SET_ERR( result, "Failed to get sharing data" );
2597 
2598  std::cout << "Pstatus: ";
2599  if( !num_ps )
2600  std::cout << "local " << std::endl;
2601  else
2602  {
2603  if( pstat & PSTATUS_NOT_OWNED ) std::cout << "NOT_OWNED; ";
2604  if( pstat & PSTATUS_SHARED ) std::cout << "SHARED; ";
2605  if( pstat & PSTATUS_MULTISHARED ) std::cout << "MULTISHARED; ";
2606  if( pstat & PSTATUS_INTERFACE ) std::cout << "INTERFACE; ";
2607  if( pstat & PSTATUS_GHOST ) std::cout << "GHOST; ";
2608  std::cout << std::endl;
2609  for( unsigned int j = 0; j < num_ps; j++ )
2610  {
2611  std::cout << " proc " << tmp_procs[j] << " id (handle) " << mbImpl->id_from_handle( tmp_handles[j] )
2612  << "(" << tmp_handles[j] << ")" << std::endl;
2613  }
2614  }
2615  std::cout << std::endl;
2616  }
2617 
2618  return MB_SUCCESS;
2619 }

References ErrorCode, moab::Interface::get_coords(), get_sharing_data(), moab::Interface::id_from_handle(), moab::Interface::list_entities(), MAX_SHARING_PROCS, MB_CHK_ERR, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, moab::Range::print(), PSTATUS_GHOST, PSTATUS_INTERFACE, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, and sharedEnts.

Referenced by build_sharedhps_list(), check_local_shared(), check_my_shared_handles(), list_entities(), moab::ReadParallel::load_file(), and moab::ScdInterface::tag_shared_vertices().
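Passing NULL for ents prints the full list of shared entities on this rank, which is convenient for debugging:

    pcomm->list_entities( NULL, 0 );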

◆ list_entities() [2/2]

ErrorCode moab::ParallelComm::list_entities ( const Range &  ents)

Definition at line 2621 of file ParallelComm.cpp.

2622 {
2623  for( Range::iterator rit = ents.begin(); rit != ents.end(); ++rit )
2624  list_entities( &( *rit ), 1 );
2625 
2626  return MB_SUCCESS;
2627 }

References moab::Range::begin(), moab::Range::end(), list_entities(), and MB_SUCCESS.

◆ pack_adjacencies()

ErrorCode moab::ParallelComm::pack_adjacencies ( Range &  entities,
Range::const_iterator &  start_rit,
Range &  whole_range,
unsigned char *&  buff_ptr,
int &  count,
const bool  just_count,
const bool  store_handles,
const int  to_proc 
)
private

Definition at line 3450 of file ParallelComm.cpp.

3458 {
3459  return MB_FAILURE;
3460 }

◆ pack_buffer()

ErrorCode moab::ParallelComm::pack_buffer ( Range &  orig_ents,
const bool  adjacencies,
const bool  tags,
const bool  store_remote_handles,
const int  to_proc,
Buffer *  buff,
TupleList *  entprocs = NULL,
Range *  allsent = NULL 
)

public 'cuz we want to unit test these externally

Definition at line 1418 of file ParallelComm.cpp.

1426 {
1427  // Pack the buffer with the entity ranges, adjacencies, and tags sections
1428  //
1429  // Note: new entities used in subsequent connectivity lists, sets, or tags,
1430  // are referred to as (MBMAXTYPE + index), where index is into vector
1431  // of new entities, 0-based
1432  ErrorCode result;
1433 
1434  Range set_range;
1435  std::vector< Tag > all_tags;
1436  std::vector< Range > tag_ranges;
1437 
1438  Range::const_iterator rit;
1439 
1440  // Entities
1441  result = pack_entities( orig_ents, buff, store_remote_handles, to_proc, false, entprocs, allsent );MB_CHK_SET_ERR( result, "Packing entities failed" );
1442 
1443  // Sets
1444  result = pack_sets( orig_ents, buff, store_remote_handles, to_proc );MB_CHK_SET_ERR( result, "Packing sets (count) failed" );
1445 
1446  // Tags
1447  Range final_ents;
1448  if( tags )
1449  {
1450  result = get_tag_send_list( orig_ents, all_tags, tag_ranges );MB_CHK_SET_ERR( result, "Failed to get tagged entities" );
1451  result = pack_tags( orig_ents, all_tags, all_tags, tag_ranges, buff, store_remote_handles, to_proc );MB_CHK_SET_ERR( result, "Packing tags (count) failed" );
1452  }
1453  else
1454  { // Set tag size to 0
1455  buff->check_space( sizeof( int ) );
1456  PACK_INT( buff->buff_ptr, 0 );
1457  buff->set_stored_size();
1458  }
1459 
1460  return result;
1461 }

References moab::ParallelComm::Buffer::buff_ptr, moab::ParallelComm::Buffer::check_space(), ErrorCode, get_tag_send_list(), MB_CHK_SET_ERR, pack_entities(), moab::PACK_INT(), pack_sets(), pack_tags(), and moab::ParallelComm::Buffer::set_stored_size().

Referenced by broadcast_entities(), exchange_owned_mesh(), scatter_entities(), send_entities(), and moab::ParCommGraph::send_mesh_parts().
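A unit-test style sketch (hedged: this assumes Buffer's size constructor and reset_ptr() behave as at the internal call sites such as broadcast_entities(); ents and pcomm are illustrative):

    ParallelComm::Buffer buff( INITIAL_BUFF_SIZE );
    buff.reset_ptr( sizeof( int ) );  // leave room for the stored-size header
    ErrorCode rval = pcomm->pack_buffer( ents, false /*adjacencies*/, true /*tags*/,
                                         false /*store_remote_handles*/, -1 /*to_proc*/, &buff );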

◆ pack_entities()

ErrorCode moab::ParallelComm::pack_entities ( Range &  entities,
Buffer *  buff,
const bool  store_remote_handles,
const int  to_proc,
const bool  is_iface,
TupleList *  entprocs = NULL,
Range *  allsent = NULL 
)

Definition at line 1582 of file ParallelComm.cpp.

1589 {
1590  // Packed information:
1591  // 1. # entities = E
1592  // 2. for e in E
1593  // a. # procs sharing e, incl. sender and receiver = P
1594  // b. for p in P (procs sharing e)
1595  // c. for p in P (handle for e on p) (Note1)
1596  // 3. vertex/entity info
1597 
1598  // Get an estimate of the buffer size & pre-allocate buffer size
1599  int buff_size = estimate_ents_buffer_size( entities, store_remote_handles );
1600  if( buff_size < 0 ) MB_SET_ERR( MB_FAILURE, "Failed to estimate ents buffer size" );
1601  buff->check_space( buff_size );
1602  myDebug->tprintf( 3, "estimate buffer size for %d entities: %d \n", (int)entities.size(), buff_size );
1603 
1604  unsigned int num_ents;
1605  ErrorCode result;
1606 
1607  std::vector< EntityHandle > entities_vec( entities.size() );
1608  std::copy( entities.begin(), entities.end(), entities_vec.begin() );
1609 
1610  // First pack procs/handles sharing this ent, not including this dest but including
1611  // others (with zero handles)
1612  if( store_remote_handles )
1613  {
1614  // Buff space is at least proc + handle for each entity; use avg of 4 other procs
1615  // to estimate buff size, but check later
1616  buff->check_space( sizeof( int ) + ( 5 * sizeof( int ) + sizeof( EntityHandle ) ) * entities.size() );
1617 
1618  // 1. # entities = E
1619  PACK_INT( buff->buff_ptr, entities.size() );
1620 
1621  Range::iterator rit;
1622 
1623  // Pre-fetch sharedp and pstatus
1624  std::vector< int > sharedp_vals( entities.size() );
1625  result = mbImpl->tag_get_data( sharedp_tag(), entities, &sharedp_vals[0] );MB_CHK_SET_ERR( result, "Failed to get sharedp tag data" );
1626  std::vector< char > pstatus_vals( entities.size() );
1627  result = mbImpl->tag_get_data( pstatus_tag(), entities, &pstatus_vals[0] );MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" );
1628 
1629  unsigned int i;
1630  int tmp_procs[MAX_SHARING_PROCS];
1631  EntityHandle tmp_handles[MAX_SHARING_PROCS];
1632  std::set< unsigned int > dumprocs;
1633 
1634  // 2. for e in E
1635  for( rit = entities.begin(), i = 0; rit != entities.end(); ++rit, i++ )
1636  {
1637  unsigned int ind =
1638  std::lower_bound( entprocs->vul_rd, entprocs->vul_rd + entprocs->get_n(), *rit ) - entprocs->vul_rd;
1639  assert( ind < entprocs->get_n() );
1640 
1641  while( ind < entprocs->get_n() && entprocs->vul_rd[ind] == *rit )
1642  dumprocs.insert( entprocs->vi_rd[ind++] );
1643 
1644  result = build_sharedhps_list( *rit, pstatus_vals[i], sharedp_vals[i], dumprocs, num_ents, tmp_procs,
1645  tmp_handles );MB_CHK_SET_ERR( result, "Failed to build sharedhps" );
1646 
1647  dumprocs.clear();
1648 
1649  // Now pack them
1650  buff->check_space( ( num_ents + 1 ) * sizeof( int ) + num_ents * sizeof( EntityHandle ) );
1651  PACK_INT( buff->buff_ptr, num_ents );
1652  PACK_INTS( buff->buff_ptr, tmp_procs, num_ents );
1653  PACK_EH( buff->buff_ptr, tmp_handles, num_ents );
1654 
1655 #ifndef NDEBUG
1656  // Check for duplicates in proc list
1657  unsigned int dp = 0;
1658  for( ; dp < MAX_SHARING_PROCS && -1 != tmp_procs[dp]; dp++ )
1659  dumprocs.insert( tmp_procs[dp] );
1660  assert( dumprocs.size() == dp );
1661  dumprocs.clear();
1662 #endif
1663  }
1664  }
1665 
1666  // Pack vertices
1667  Range these_ents = entities.subset_by_type( MBVERTEX );
1668  num_ents = these_ents.size();
1669 
1670  if( num_ents )
1671  {
1672  buff_size = 2 * sizeof( int ) + 3 * num_ents * sizeof( double );
1673  buff->check_space( buff_size );
1674 
1675  // Type, # ents
1676  PACK_INT( buff->buff_ptr, ( (int)MBVERTEX ) );
1677  PACK_INT( buff->buff_ptr, ( (int)num_ents ) );
1678 
1679  std::vector< double > tmp_coords( 3 * num_ents );
1680  result = mbImpl->get_coords( these_ents, &tmp_coords[0] );MB_CHK_SET_ERR( result, "Failed to get vertex coordinates" );
1681  PACK_DBLS( buff->buff_ptr, &tmp_coords[0], 3 * num_ents );
1682 
1683  myDebug->tprintf( 4, "Packed %lu ents of type %s\n", (unsigned long)these_ents.size(),
1684  CN::EntityTypeName( TYPE_FROM_HANDLE( *these_ents.begin() ) ) );
1685  }
1686 
1687  // Now entities; go through range, packing by type and equal # verts per element
1688  Range::iterator start_rit = entities.find( *these_ents.rbegin() );
1689  ++start_rit;
1690  int last_nodes = -1;
1691  EntityType last_type = MBMAXTYPE;
1692  these_ents.clear();
1693  Range::iterator end_rit = start_rit;
1694  EntitySequence* seq;
1695  ElementSequence* eseq;
1696 
1697  while( start_rit != entities.end() || !these_ents.empty() )
1698  {
1699  // Cases:
1700  // A: !end, last_type == MBMAXTYPE, seq: save contig sequence in these_ents
1701  // B: !end, last type & nodes same, seq: save contig sequence in these_ents
1702  // C: !end, last type & nodes different: pack these_ents, then save contig sequence in
1703  // these_ents D: end: pack these_ents
1704 
1705  // Find the sequence holding current start entity, if we're not at end
1706  eseq = NULL;
1707  if( start_rit != entities.end() )
1708  {
1709  result = sequenceManager->find( *start_rit, seq );MB_CHK_SET_ERR( result, "Failed to find entity sequence" );
1710  if( NULL == seq ) return MB_FAILURE;
1711  eseq = dynamic_cast< ElementSequence* >( seq );
1712  }
1713 
1714  // Pack the last batch if at end or next one is different
1715  if( !these_ents.empty() &&
1716  ( !eseq || eseq->type() != last_type || last_nodes != (int)eseq->nodes_per_element() ) )
1717  {
1718  result = pack_entity_seq( last_nodes, store_remote_handles, to_proc, these_ents, entities_vec, buff );MB_CHK_SET_ERR( result, "Failed to pack entities from a sequence" );
1719  these_ents.clear();
1720  }
1721 
1722  if( eseq )
1723  {
1724  // Continuation of current range, just save these entities
1725  // Get position in entities list one past end of this sequence
1726  end_rit = entities.lower_bound( start_rit, entities.end(), eseq->end_handle() + 1 );
1727 
1728  // Put these entities in the range
1729  std::copy( start_rit, end_rit, range_inserter( these_ents ) );
1730 
1731  last_type = eseq->type();
1732  last_nodes = eseq->nodes_per_element();
1733  }
1734  else if( start_rit != entities.end() && TYPE_FROM_HANDLE( *start_rit ) == MBENTITYSET )
1735  break;
1736 
1737  start_rit = end_rit;
1738  }
1739 
1740  // Pack MBMAXTYPE to indicate end of ranges
1741  buff->check_space( sizeof( int ) );
1742  PACK_INT( buff->buff_ptr, ( (int)MBMAXTYPE ) );
1743 
1744  buff->set_stored_size();
1745  return MB_SUCCESS;
1746 }

References moab::Range::begin(), moab::ParallelComm::Buffer::buff_ptr, build_sharedhps_list(), moab::ParallelComm::Buffer::check_space(), moab::Range::clear(), moab::Range::empty(), moab::EntitySequence::end_handle(), entities, moab::CN::EntityTypeName(), ErrorCode, estimate_ents_buffer_size(), moab::SequenceManager::find(), moab::Interface::get_coords(), moab::TupleList::get_n(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, MBENTITYSET, mbImpl, MBMAXTYPE, MBVERTEX, myDebug, moab::ElementSequence::nodes_per_element(), moab::PACK_DBLS(), moab::PACK_EH(), pack_entity_seq(), moab::PACK_INT(), moab::PACK_INTS(), pstatus_tag(), moab::Range::rbegin(), sequenceManager, moab::ParallelComm::Buffer::set_stored_size(), sharedp_tag(), moab::Range::size(), moab::Interface::tag_get_data(), moab::DebugOutput::tprintf(), moab::EntitySequence::type(), moab::TYPE_FROM_HANDLE(), moab::TupleList::vi_rd, and moab::TupleList::vul_rd.

Referenced by exchange_ghost_cells(), and pack_buffer().

◆ pack_entity_seq()

ErrorCode moab::ParallelComm::pack_entity_seq ( const int  nodes_per_entity,
const bool  store_remote_handles,
const int  to_proc,
Range &  these_ents,
std::vector< EntityHandle > &  entities,
Buffer *  buff 
)
private

pack a range of entities with equal # verts per entity, along with the range on the sending proc

Definition at line 1836 of file ParallelComm.cpp.

1842 {
1843  int tmp_space = 3 * sizeof( int ) + nodes_per_entity * these_ents.size() * sizeof( EntityHandle );
1844  buff->check_space( tmp_space );
1845 
1846  // Pack the entity type
1847  PACK_INT( buff->buff_ptr, ( (int)TYPE_FROM_HANDLE( *these_ents.begin() ) ) );
1848 
1849  // Pack # ents
1850  PACK_INT( buff->buff_ptr, these_ents.size() );
1851 
1852  // Pack the nodes per entity
1853  PACK_INT( buff->buff_ptr, nodes_per_entity );
1854  myDebug->tprintf( 3, "after some pack int %d \n", buff->get_current_size() );
1855 
1856  // Pack the connectivity
1857  std::vector< EntityHandle > connect;
1858  ErrorCode result = MB_SUCCESS;
1859  for( Range::const_iterator rit = these_ents.begin(); rit != these_ents.end(); ++rit )
1860  {
1861  connect.clear();
1862  result = mbImpl->get_connectivity( &( *rit ), 1, connect, false );MB_CHK_SET_ERR( result, "Failed to get connectivity" );
1863  assert( (int)connect.size() == nodes_per_entity );
1864  result =
1865  get_remote_handles( store_remote_handles, &connect[0], &connect[0], connect.size(), to_proc, entities_vec );MB_CHK_SET_ERR( result, "Failed in get_remote_handles" );
1866  PACK_EH( buff->buff_ptr, &connect[0], connect.size() );
1867  }
1868 
1869  myDebug->tprintf( 3, "Packed %lu ents of type %s\n", (unsigned long)these_ents.size(),
1870  CN::EntityTypeName( TYPE_FROM_HANDLE( *these_ents.begin() ) ) );
1871 
1872  return result;
1873 }

References moab::Range::begin(), moab::ParallelComm::Buffer::buff_ptr, moab::ParallelComm::Buffer::check_space(), moab::Range::end(), moab::CN::EntityTypeName(), ErrorCode, moab::Interface::get_connectivity(), moab::ParallelComm::Buffer::get_current_size(), get_remote_handles(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, myDebug, moab::PACK_EH(), moab::PACK_INT(), moab::Range::size(), moab::DebugOutput::tprintf(), and moab::TYPE_FROM_HANDLE().

Referenced by pack_entities().

◆ pack_range_map()

ErrorCode moab::ParallelComm::pack_range_map ( Range &  this_range,
EntityHandle  actual_start,
HandleMap &  handle_map 
)
private

pack a range map with keys in this_range and values a contiguous series of handles starting at actual_start

Definition at line 3136 of file ParallelComm.cpp.

3137 {
3138  for( Range::const_pair_iterator key_it = key_range.const_pair_begin(); key_it != key_range.const_pair_end();
3139  ++key_it )
3140  {
3141  int tmp_num = ( *key_it ).second - ( *key_it ).first + 1;
3142  handle_map.insert( ( *key_it ).first, val_start, tmp_num );
3143  val_start += tmp_num;
3144  }
3145 
3146  return MB_SUCCESS;
3147 }

References moab::Range::const_pair_begin(), moab::Range::const_pair_end(), moab::RangeMap< KeyType, ValType, NullVal >::insert(), and MB_SUCCESS.

◆ pack_remote_handles()

ErrorCode moab::ParallelComm::pack_remote_handles ( std::vector< EntityHandle > &  L1hloc,
std::vector< EntityHandle > &  L1hrem,
std::vector< int > &  procs,
unsigned int  to_proc,
Buffer *  buff 
)

Definition at line 7370 of file ParallelComm.cpp.

7375 {
7376  assert( std::find( L1hloc.begin(), L1hloc.end(), (EntityHandle)0 ) == L1hloc.end() );
7377 
7378  // 2 vectors of handles plus ints
7379  buff->check_space( ( ( L1p.size() + 1 ) * sizeof( int ) + ( L1hloc.size() + 1 ) * sizeof( EntityHandle ) +
7380  ( L1hrem.size() + 1 ) * sizeof( EntityHandle ) ) );
7381 
7382  // Should be in pairs of handles
7383  PACK_INT( buff->buff_ptr, L1hloc.size() );
7384  PACK_INTS( buff->buff_ptr, &L1p[0], L1p.size() );
7385  // Pack handles in reverse order, (remote, local), so on destination they
7386  // are ordered (local, remote)
7387  PACK_EH( buff->buff_ptr, &L1hrem[0], L1hrem.size() );
7388  PACK_EH( buff->buff_ptr, &L1hloc[0], L1hloc.size() );
7389 
7390  buff->set_stored_size();
7391 
7392  return MB_SUCCESS;
7393 }

References moab::ParallelComm::Buffer::buff_ptr, moab::ParallelComm::Buffer::check_space(), MB_SUCCESS, moab::PACK_EH(), moab::PACK_INT(), moab::PACK_INTS(), and moab::ParallelComm::Buffer::set_stored_size().

Referenced by exchange_ghost_cells(), exchange_owned_mesh(), recv_entities(), and recv_messages().

◆ pack_sets()

ErrorCode moab::ParallelComm::pack_sets ( Range &  entities,
Buffer *  buff,
const bool  store_handles,
const int  to_proc 
)
private

Definition at line 3149 of file ParallelComm.cpp.

3150 {
3151  // SETS:
3152  // . #sets
3153  // . for each set:
3154  // - options[#sets] (unsigned int)
3155  // - if (unordered) set range
3156  // - else if ordered
3157  // . #ents in set
3158  // . handles[#ents]
3159  // - #parents
3160  // - if (#parents) handles[#parents]
3161  // - #children
3162  // - if (#children) handles[#children]
3163 
3164  // Now the sets; assume any sets the application wants to pass are in the entities list
3165  ErrorCode result;
3166  Range all_sets = entities.subset_by_type( MBENTITYSET );
3167 
3168  int buff_size = estimate_sets_buffer_size( all_sets, store_remote_handles );
3169  if( buff_size < 0 ) MB_SET_ERR( MB_FAILURE, "Failed to estimate sets buffer size" );
3170  buff->check_space( buff_size );
3171 
3172  // Number of sets
3173  PACK_INT( buff->buff_ptr, all_sets.size() );
3174 
3175  // Options for all sets
3176  std::vector< unsigned int > options( all_sets.size() );
3177  Range::iterator rit;
3178  std::vector< EntityHandle > members;
3179  int i;
3180  for( rit = all_sets.begin(), i = 0; rit != all_sets.end(); ++rit, i++ )
3181  {
3182  result = mbImpl->get_meshset_options( *rit, options[i] );MB_CHK_SET_ERR( result, "Failed to get meshset options" );
3183  }
3184  buff->check_space( all_sets.size() * sizeof( unsigned int ) );
3185  PACK_VOID( buff->buff_ptr, &options[0], all_sets.size() * sizeof( unsigned int ) );
3186 
3187  // Pack parallel geometry unique id
3188  if( !all_sets.empty() )
3189  {
3190  Tag uid_tag;
3191  int n_sets = all_sets.size();
3192  bool b_pack = false;
3193  std::vector< int > id_data( n_sets );
3194  result =
3195  mbImpl->tag_get_handle( "PARALLEL_UNIQUE_ID", 1, MB_TYPE_INTEGER, uid_tag, MB_TAG_SPARSE | MB_TAG_CREAT );MB_CHK_SET_ERR( result, "Failed to create parallel geometry unique id tag" );
3196 
3197  result = mbImpl->tag_get_data( uid_tag, all_sets, &id_data[0] );
3198  if( MB_TAG_NOT_FOUND != result )
3199  {
3200  if( MB_SUCCESS != result ) MB_SET_ERR( result, "Failed to get parallel geometry unique ids" );
3201  for( i = 0; i < n_sets; i++ )
3202  {
3203  if( id_data[i] != 0 )
3204  {
3205  b_pack = true;
3206  break;
3207  }
3208  }
3209  }
3210 
3211  if( b_pack )
3212  { // If you find
3213  buff->check_space( ( n_sets + 1 ) * sizeof( int ) );
3214  PACK_INT( buff->buff_ptr, n_sets );
3215  PACK_INTS( buff->buff_ptr, &id_data[0], n_sets );
3216  }
3217  else
3218  {
3219  buff->check_space( sizeof( int ) );
3220  PACK_INT( buff->buff_ptr, 0 );
3221  }
3222  }
3223 
3224  // Vectors/ranges
3225  std::vector< EntityHandle > entities_vec( entities.size() );
3226  std::copy( entities.begin(), entities.end(), entities_vec.begin() );
3227  for( rit = all_sets.begin(), i = 0; rit != all_sets.end(); ++rit, i++ )
3228  {
3229  members.clear();
3230  result = mbImpl->get_entities_by_handle( *rit, members );MB_CHK_SET_ERR( result, "Failed to get entities in ordered set" );
3231  result =
3232  get_remote_handles( store_remote_handles, &members[0], &members[0], members.size(), to_proc, entities_vec );MB_CHK_SET_ERR( result, "Failed in get_remote_handles" );
3233  buff->check_space( members.size() * sizeof( EntityHandle ) + sizeof( int ) );
3234  PACK_INT( buff->buff_ptr, members.size() );
3235  PACK_EH( buff->buff_ptr, &members[0], members.size() );
3236  }
3237 
3238  // Pack parent/child sets
3239  if( !store_remote_handles )
3240  { // Only works not store remote handles
3241  // Pack numbers of parents/children
3242  unsigned int tot_pch = 0;
3243  int num_pch;
3244  buff->check_space( 2 * all_sets.size() * sizeof( int ) );
3245  for( rit = all_sets.begin(), i = 0; rit != all_sets.end(); ++rit, i++ )
3246  {
3247  // Pack parents
3248  result = mbImpl->num_parent_meshsets( *rit, &num_pch );MB_CHK_SET_ERR( result, "Failed to get num parents" );
3249  PACK_INT( buff->buff_ptr, num_pch );
3250  tot_pch += num_pch;
3251  result = mbImpl->num_child_meshsets( *rit, &num_pch );MB_CHK_SET_ERR( result, "Failed to get num children" );
3252  PACK_INT( buff->buff_ptr, num_pch );
3253  tot_pch += num_pch;
3254  }
3255 
3256  // Now pack actual parents/children
3257  members.clear();
3258  members.reserve( tot_pch );
3259  std::vector< EntityHandle > tmp_pch;
3260  for( rit = all_sets.begin(), i = 0; rit != all_sets.end(); ++rit, i++ )
3261  {
3262  result = mbImpl->get_parent_meshsets( *rit, tmp_pch );MB_CHK_SET_ERR( result, "Failed to get parents" );
3263  std::copy( tmp_pch.begin(), tmp_pch.end(), std::back_inserter( members ) );
3264  tmp_pch.clear();
3265  result = mbImpl->get_child_meshsets( *rit, tmp_pch );MB_CHK_SET_ERR( result, "Failed to get children" );
3266  std::copy( tmp_pch.begin(), tmp_pch.end(), std::back_inserter( members ) );
3267  tmp_pch.clear();
3268  }
3269  assert( members.size() == tot_pch );
3270  if( !members.empty() )
3271  {
3272  result = get_remote_handles( store_remote_handles, &members[0], &members[0], members.size(), to_proc,
3273  entities_vec );MB_CHK_SET_ERR( result, "Failed to get remote handles for set parent/child sets" );
3274 #ifndef NDEBUG
3275  // Check that all handles are either sets or maxtype
3276  for( unsigned int __j = 0; __j < members.size(); __j++ )
3277  assert( ( TYPE_FROM_HANDLE( members[__j] ) == MBMAXTYPE &&
3278  ID_FROM_HANDLE( members[__j] ) < (int)entities.size() ) ||
3279  TYPE_FROM_HANDLE( members[__j] ) == MBENTITYSET );
3280 #endif
3281  buff->check_space( members.size() * sizeof( EntityHandle ) );
3282  PACK_EH( buff->buff_ptr, &members[0], members.size() );
3283  }
3284  }
3285  else
3286  {
3287  buff->check_space( 2 * all_sets.size() * sizeof( int ) );
3288  for( rit = all_sets.begin(); rit != all_sets.end(); ++rit )
3289  {
3290  PACK_INT( buff->buff_ptr, 0 );
3291  PACK_INT( buff->buff_ptr, 0 );
3292  }
3293  }
3294 
3295  // Pack the handles
3296  if( store_remote_handles && !all_sets.empty() )
3297  {
3298  buff_size = RANGE_SIZE( all_sets );
3299  buff->check_space( buff_size );
3300  PACK_RANGE( buff->buff_ptr, all_sets );
3301  }
3302 
3303  myDebug->tprintf( 4, "Done packing sets.\n" );
3304 
3305  buff->set_stored_size();
3306 
3307  return MB_SUCCESS;
3308 }

References moab::Range::begin(), moab::ParallelComm::Buffer::buff_ptr, moab::ParallelComm::Buffer::check_space(), moab::Range::empty(), moab::Range::end(), entities, ErrorCode, estimate_sets_buffer_size(), moab::Interface::get_child_meshsets(), moab::Interface::get_entities_by_handle(), moab::Interface::get_meshset_options(), moab::Interface::get_parent_meshsets(), get_remote_handles(), moab::ID_FROM_HANDLE(), MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, MB_TAG_CREAT, MB_TAG_NOT_FOUND, MB_TAG_SPARSE, MB_TYPE_INTEGER, MBENTITYSET, mbImpl, MBMAXTYPE, myDebug, moab::Interface::num_child_meshsets(), moab::Interface::num_parent_meshsets(), moab::PACK_EH(), moab::PACK_INT(), moab::PACK_INTS(), moab::PACK_RANGE(), moab::PACK_VOID(), moab::RANGE_SIZE(), moab::ParallelComm::Buffer::set_stored_size(), moab::Range::size(), moab::Interface::tag_get_data(), moab::Interface::tag_get_handle(), moab::DebugOutput::tprintf(), and moab::TYPE_FROM_HANDLE().

Referenced by pack_buffer().

◆ pack_shared_handles()

ErrorCode moab::ParallelComm::pack_shared_handles ( std::vector< std::vector< SharedEntityData > > &  send_data)

Definition at line 8441 of file ParallelComm.cpp.

8442 {
8443  // Build up send buffers
8444  ErrorCode rval = MB_SUCCESS;
8445  int ent_procs[MAX_SHARING_PROCS];
8446  EntityHandle handles[MAX_SHARING_PROCS];
8447  int num_sharing, tmp_int;
8448  SharedEntityData tmp;
8449  send_data.resize( buffProcs.size() );
8450  for( std::set< EntityHandle >::iterator i = sharedEnts.begin(); i != sharedEnts.end(); ++i )
8451  {
8452  tmp.remote = *i; // Swap local/remote so they're correct on the remote proc.
8453  rval = get_owner( *i, tmp_int );
8454  tmp.owner = tmp_int;
8455  if( MB_SUCCESS != rval ) return rval;
8456 
8457  unsigned char pstat;
8458  rval = get_sharing_data( *i, ent_procs, handles, pstat, num_sharing );
8459  if( MB_SUCCESS != rval ) return rval;
8460  for( int j = 0; j < num_sharing; j++ )
8461  {
8462  if( ent_procs[j] == (int)proc_config().proc_rank() ) continue;
8463  tmp.local = handles[j];
8464  int ind = get_buffers( ent_procs[j] );
8465  assert( -1 != ind );
8466  if( (int)send_data.size() < ind + 1 ) send_data.resize( ind + 1 );
8467  send_data[ind].push_back( tmp );
8468  }
8469  }
8470 
8471  return MB_SUCCESS;
8472 }

References buffProcs, ErrorCode, get_buffers(), get_owner(), get_sharing_data(), moab::ParallelComm::SharedEntityData::local, MAX_SHARING_PROCS, MB_SUCCESS, moab::ParallelComm::SharedEntityData::owner, proc_config(), moab::ParallelComm::SharedEntityData::remote, and sharedEnts.

Referenced by check_all_shared_handles().

◆ pack_tag()

ErrorCode moab::ParallelComm::pack_tag ( Tag  source_tag,
Tag  destination_tag,
const Range &  entities,
const std::vector< EntityHandle > &  whole_range,
Buffer *  buff,
const bool  store_remote_handles,
const int  to_proc 
)
private

Serialize tag data.

Parameters
source_tag  The tag for which data will be serialized
destination_tag  Tag in which to store unpacked tag data. Typically the same as source_tag.
entities  The entities for which tag values will be serialized
whole_range  Calculate entity indices as location in this range
buff_ptr  Input/Output: As input, pointer to the start of the buffer in which to serialize data. As output, the position just past the serialized data.
count_out  Output: The required buffer size, in bytes.
store_handles  The data for each tag is preceded by a list of EntityHandles designating the entity each of the subsequent tag values corresponds to. This value may be one of: 1) If store_handles == false: An invalid handle composed of {MBMAXTYPE,idx}, where idx is the position of the entity in "whole_range". 2) If store_handles == true and a valid remote handle exists, the remote handle. 3) If store_handles == true and no valid remote handle is defined for the entity, the same as 1).
to_proc  If 'store_handles' is true, the processor rank for which to store the corresponding remote entity handles.

Definition at line 3556 of file ParallelComm.cpp.

3563 {
3564  ErrorCode result;
3565  std::vector< int > var_len_sizes;
3566  std::vector< const void* > var_len_values;
3567 
3568  if( src_tag != dst_tag )
3569  {
3570  if( dst_tag->get_size() != src_tag->get_size() ) return MB_TYPE_OUT_OF_RANGE;
3571  if( dst_tag->get_data_type() != src_tag->get_data_type() && dst_tag->get_data_type() != MB_TYPE_OPAQUE &&
3572  src_tag->get_data_type() != MB_TYPE_OPAQUE )
3573  return MB_TYPE_OUT_OF_RANGE;
3574  }
3575 
3576  // Size, type, data type
3577  buff->check_space( 3 * sizeof( int ) );
3578  PACK_INT( buff->buff_ptr, src_tag->get_size() );
3579  TagType this_type;
3580  result = mbImpl->tag_get_type( dst_tag, this_type );
3581  PACK_INT( buff->buff_ptr, (int)this_type );
3582  DataType data_type = src_tag->get_data_type();
3583  PACK_INT( buff->buff_ptr, (int)data_type );
3584  int type_size = TagInfo::size_from_data_type( data_type );
3585 
3586  // Default value
3587  if( NULL == src_tag->get_default_value() )
3588  {
3589  buff->check_space( sizeof( int ) );
3590  PACK_INT( buff->buff_ptr, 0 );
3591  }
3592  else
3593  {
3594  buff->check_space( src_tag->get_default_value_size() );
3595  PACK_BYTES( buff->buff_ptr, src_tag->get_default_value(), src_tag->get_default_value_size() );
3596  }
3597 
3598  // Name
3599  buff->check_space( src_tag->get_name().size() );
3600  PACK_BYTES( buff->buff_ptr, dst_tag->get_name().c_str(), dst_tag->get_name().size() );
3601 
3602  myDebug->tprintf( 4, "Packing tag \"%s\"", src_tag->get_name().c_str() );
3603  if( src_tag != dst_tag ) myDebug->tprintf( 4, " (as tag \"%s\")", dst_tag->get_name().c_str() );
3604  myDebug->tprintf( 4, "\n" );
3605 
3606  // Pack entities
3607  buff->check_space( tagged_entities.size() * sizeof( EntityHandle ) + sizeof( int ) );
3608  PACK_INT( buff->buff_ptr, tagged_entities.size() );
3609  std::vector< EntityHandle > dum_tagged_entities( tagged_entities.size() );
3610  result = get_remote_handles( store_remote_handles, tagged_entities, &dum_tagged_entities[0], to_proc, whole_vec );
3611  if( MB_SUCCESS != result )
3612  {
3613  if( myDebug->get_verbosity() == 3 )
3614  {
3615  std::cerr << "Failed to get remote handles for tagged entities:" << std::endl;
3616  tagged_entities.print( " " );
3617  }
3618  MB_SET_ERR( result, "Failed to get remote handles for tagged entities" );
3619  }
3620 
3621  PACK_EH( buff->buff_ptr, &dum_tagged_entities[0], dum_tagged_entities.size() );
3622 
3623  const size_t num_ent = tagged_entities.size();
3624  if( src_tag->get_size() == MB_VARIABLE_LENGTH )
3625  {
3626  var_len_sizes.resize( num_ent, 0 );
3627  var_len_values.resize( num_ent, 0 );
3628  result = mbImpl->tag_get_by_ptr( src_tag, tagged_entities, &var_len_values[0], &var_len_sizes[0] );MB_CHK_SET_ERR( result, "Failed to get variable-length tag data in pack_tags" );
3629  buff->check_space( num_ent * sizeof( int ) );
3630  PACK_INTS( buff->buff_ptr, &var_len_sizes[0], num_ent );
3631  for( unsigned int i = 0; i < num_ent; i++ )
3632  {
3633  buff->check_space( var_len_sizes[i] );
3634  PACK_VOID( buff->buff_ptr, var_len_values[i], type_size * var_len_sizes[i] );
3635  }
3636  }
3637  else
3638  {
3639  buff->check_space( num_ent * src_tag->get_size() );
3640  // Should be OK to read directly into buffer, since tags are untyped and
3641  // handled by memcpy
3642  result = mbImpl->tag_get_data( src_tag, tagged_entities, buff->buff_ptr );MB_CHK_SET_ERR( result, "Failed to get tag data in pack_tags" );
3643  buff->buff_ptr += num_ent * src_tag->get_size();
3644  PC( num_ent * src_tag->get_size(), " void" );
3645  }
3646 
3647  return MB_SUCCESS;
3648 }

References moab::ParallelComm::Buffer::buff_ptr, moab::ParallelComm::Buffer::check_space(), ErrorCode, moab::TagInfo::get_data_type(), moab::TagInfo::get_default_value(), moab::TagInfo::get_default_value_size(), moab::TagInfo::get_name(), get_remote_handles(), moab::TagInfo::get_size(), moab::DebugOutput::get_verbosity(), MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, MB_TYPE_OPAQUE, MB_TYPE_OUT_OF_RANGE, MB_VARIABLE_LENGTH, mbImpl, myDebug, moab::PACK_BYTES(), moab::PACK_EH(), moab::PACK_INT(), moab::PACK_INTS(), moab::PACK_VOID(), PC, moab::Range::print(), moab::Range::size(), moab::TagInfo::size_from_data_type(), moab::Interface::tag_get_by_ptr(), moab::Interface::tag_get_data(), moab::Interface::tag_get_type(), TagType, and moab::DebugOutput::tprintf().

Referenced by pack_tags().

◆ pack_tags()

ErrorCode moab::ParallelComm::pack_tags ( Range &  entities,
const std::vector< Tag > &  src_tags,
const std::vector< Tag > &  dst_tags,
const std::vector< Range > &  tag_ranges,
Buffer *  buff,
const bool  store_handles,
const int  to_proc 
)
private

Serialize entity tag data.

This function operates in two passes. The first phase, specified by 'just_count == true' calculates the necessary buffer size for the serialized data. The second phase writes the actual binary serialized representation of the data to the passed buffer.

Note: First two arguments are not used. (Legacy interface?)

Parameters
entities  NOT USED
start_rit  NOT USED
whole_range  Should be the union of the sets of entities for which tag values are to be serialized. Also specifies ordering for indexes for tag values and serves as the superset from which to compose entity lists from individual tags if just_count and all_possible_tags are both true.
buff_ptr  Buffer into which to write binary serialized data
count  Output: The size of the serialized data is added to this parameter. NOTE: Should probably initialize to zero before calling.
just_count  If true, just calculate the buffer size required to hold the serialized data. Will also append to 'all_tags' and 'tag_ranges' if all_possible_tags == true.
store_handles  The data for each tag is preceded by a list of EntityHandles designating the entity each of the subsequent tag values corresponds to. This value may be one of: 1) If store_handles == false: An invalid handle composed of {MBMAXTYPE,idx}, where idx is the position of the entity in "whole_range". 2) If store_handles == true and a valid remote handle exists, the remote handle. 3) If store_handles == true and no valid remote handle is defined for the entity, the same as 1).
to_proc  If 'store_handles' is true, the processor rank for which to store the corresponding remote entity handles.
all_tags  List of tags to write
tag_ranges  List of entities to serialize tag data, one for each corresponding tag handle in 'all_tags'.

Definition at line 3470 of file ParallelComm.cpp.

3477 {
3478  ErrorCode result;
3479  std::vector< Tag >::const_iterator tag_it, dst_it;
3480  std::vector< Range >::const_iterator rit;
3481  int count = 0;
3482 
3483  for( tag_it = src_tags.begin(), rit = tag_ranges.begin(); tag_it != src_tags.end(); ++tag_it, ++rit )
3484  {
3485  result = packed_tag_size( *tag_it, *rit, count );
3486  if( MB_SUCCESS != result ) return result;
3487  }
3488 
3489  // Number of tags
3490  count += sizeof( int );
3491 
3492  buff->check_space( count );
3493 
3494  PACK_INT( buff->buff_ptr, src_tags.size() );
3495 
3496  std::vector< EntityHandle > entities_vec( entities.size() );
3497  std::copy( entities.begin(), entities.end(), entities_vec.begin() );
3498 
3499  for( tag_it = src_tags.begin(), dst_it = dst_tags.begin(), rit = tag_ranges.begin(); tag_it != src_tags.end();
3500  ++tag_it, ++dst_it, ++rit )
3501  {
3502  result = pack_tag( *tag_it, *dst_it, *rit, entities_vec, buff, store_remote_handles, to_proc );
3503  if( MB_SUCCESS != result ) return result;
3504  }
3505 
3506  myDebug->tprintf( 4, "Done packing tags." );
3507 
3508  buff->set_stored_size();
3509 
3510  return MB_SUCCESS;
3511 }

References moab::ParallelComm::Buffer::buff_ptr, moab::ParallelComm::Buffer::check_space(), entities, ErrorCode, MB_SUCCESS, myDebug, moab::PACK_INT(), pack_tag(), packed_tag_size(), moab::ParallelComm::Buffer::set_stored_size(), and moab::DebugOutput::tprintf().

Referenced by exchange_tags(), pack_buffer(), and reduce_tags().

◆ packed_tag_size()

ErrorCode moab::ParallelComm::packed_tag_size ( Tag  source_tag,
const Range &  entities,
int &  count_out 
)
private

Calculate buffer size required to pack tag data.

Parameters
source_tag  The tag for which data will be serialized
entities  The entities for which tag values will be serialized
count_out  Output: The required buffer size, in bytes.

Definition at line 3513 of file ParallelComm.cpp.

3514 {
3515  // For dense tags, compute size assuming all entities have that tag
3516  // For sparse tags, get number of entities w/ that tag to compute size
3517 
3518  std::vector< int > var_len_sizes;
3519  std::vector< const void* > var_len_values;
3520 
3521  // Default value
3522  count += sizeof( int );
3523  if( NULL != tag->get_default_value() ) count += tag->get_default_value_size();
3524 
3525  // Size, type, data type
3526  count += 3 * sizeof( int );
3527 
3528  // Name
3529  count += sizeof( int );
3530  count += tag->get_name().size();
3531 
3532  // Range of tag
3533  count += sizeof( int ) + tagged_entities.size() * sizeof( EntityHandle );
3534 
3535  if( tag->get_size() == MB_VARIABLE_LENGTH )
3536  {
3537  const int num_ent = tagged_entities.size();
3538  // Send a tag size for each entity
3539  count += num_ent * sizeof( int );
3540  // Send tag data for each entity
3541  var_len_sizes.resize( num_ent );
3542  var_len_values.resize( num_ent );
3543  ErrorCode result =
3544  tag->get_data( sequenceManager, errorHandler, tagged_entities, &var_len_values[0], &var_len_sizes[0] );MB_CHK_SET_ERR( result, "Failed to get lengths of variable-length tag values" );
3545  count += std::accumulate( var_len_sizes.begin(), var_len_sizes.end(), 0 );
3546  }
3547  else
3548  {
3549  // Tag data values for range or vector
3550  count += tagged_entities.size() * tag->get_size();
3551  }
3552 
3553  return MB_SUCCESS;
3554 }

References ErrorCode, errorHandler, moab::TagInfo::get_data(), moab::TagInfo::get_default_value(), moab::TagInfo::get_default_value_size(), moab::TagInfo::get_name(), moab::TagInfo::get_size(), MB_CHK_SET_ERR, MB_SUCCESS, MB_VARIABLE_LENGTH, sequenceManager, and moab::Range::size().

Referenced by pack_tags().

◆ part_tag()

Tag moab::ParallelComm::part_tag ( )
inline

Definition at line 703 of file ParallelComm.hpp.

704  {
705  return partition_tag();
706  }

References partition_tag().

Referenced by create_part().

◆ partition_sets() [1/2]

Range& moab::ParallelComm::partition_sets ( )
inline

Definition at line 665 of file ParallelComm.hpp.

666  {
667  return partitionSets;
668  }

References partitionSets.

◆ partition_sets() [2/2]

const Range& moab::ParallelComm::partition_sets ( ) const
inline

Definition at line 669 of file ParallelComm.hpp.

670  {
671  return partitionSets;
672  }

References partitionSets.

◆ partition_tag()

Tag moab::ParallelComm::partition_tag ( )

return the partition set tag

Definition at line 7975 of file ParallelComm.cpp.

7976 {
7977  if( !partitionTag )
7978  {
7979  int dum_id = -1;
7980  ErrorCode result = mbImpl->tag_get_handle( PARALLEL_PARTITION_TAG_NAME, 1, MB_TYPE_INTEGER, partitionTag,
7981  MB_TAG_SPARSE | MB_TAG_CREAT, &dum_id );
7982  if( MB_SUCCESS != result ) return 0;
7983  }
7984 
7985  return partitionTag;
7986 }

References ErrorCode, MB_SUCCESS, MB_TAG_CREAT, MB_TAG_SPARSE, MB_TYPE_INTEGER, mbImpl, PARALLEL_PARTITION_TAG_NAME, partitionTag, and moab::Interface::tag_get_handle().

Referenced by part_tag().
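The tag can be used to gather the partition sets directly (a sketch; mb is the associated Interface instance):

    Tag ptag = pcomm->partition_tag();
    Range part_sets;
    ErrorCode rval = mb->get_entities_by_type_and_tag( 0, MBENTITYSET, &ptag, NULL, 1, part_sets );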

◆ pcomm_tag()

Tag moab::ParallelComm::pcomm_tag ( Interface *  impl,
bool  create_if_missing = true 
)
static

return pcomm tag; static because might not have a pcomm before going to look for one on the interface

return pcomm tag; passes in impl 'cuz this is a static function

Definition at line 7989 of file ParallelComm.cpp.

7990 {
7991  Tag this_tag = 0;
7992  ErrorCode result;
7993  if( create_if_missing )
7994  {
7995  result = impl->tag_get_handle( PARALLEL_COMM_TAG_NAME, MAX_SHARING_PROCS * sizeof( ParallelComm* ),
7996  MB_TYPE_OPAQUE, this_tag, MB_TAG_SPARSE | MB_TAG_CREAT );
7997  }
7998  else
7999  {
8000  result = impl->tag_get_handle( PARALLEL_COMM_TAG_NAME, MAX_SHARING_PROCS * sizeof( ParallelComm* ),
8001  MB_TYPE_OPAQUE, this_tag, MB_TAG_SPARSE );
8002  }
8003 
8004  if( MB_SUCCESS != result ) return 0;
8005 
8006  return this_tag;
8007 }

References ErrorCode, MAX_SHARING_PROCS, MB_SUCCESS, MB_TAG_CREAT, MB_TAG_SPARSE, MB_TYPE_OPAQUE, PARALLEL_COMM_TAG_NAME, and moab::Interface::tag_get_handle().

Referenced by add_pcomm(), get_all_pcomm(), get_pcomm(), remove_pcomm(), and set_partitioning().
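Because the function is static, the tag can be queried before any ParallelComm exists, e.g. to test whether instances have been stored on an Interface (a sketch; mb is an Interface*):

    Tag t = ParallelComm::pcomm_tag( mb, false /*create_if_missing*/ );
    if( 0 == t )
    {
        // No pcomm tag yet, so no ParallelComm instances are registered on mb
    }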

◆ post_irecv() [1/2]

ErrorCode moab::ParallelComm::post_irecv ( std::vector< unsigned int > &  exchange_procs)

Post "MPI_Irecv" before meshing.

Parameters
exchange_procs  processor vector exchanged

Definition at line 6768 of file ParallelComm.cpp.

6769 {
6770  // Set buffers
6771  int n_proc = exchange_procs.size();
6772  for( int i = 0; i < n_proc; i++ )
6773  get_buffers( exchange_procs[i] );
6774  reset_all_buffers();
6775 
6776  // Post ghost irecv's for entities from all communicating procs
6777  // Index requests the same as buffer/sharing procs indices
6778  int success;
6779  recvReqs.resize( 2 * buffProcs.size(), MPI_REQUEST_NULL );
6780  recvRemotehReqs.resize( 2 * buffProcs.size(), MPI_REQUEST_NULL );
6781  sendReqs.resize( 2 * buffProcs.size(), MPI_REQUEST_NULL );
6782 
6783  int incoming = 0;
6784  for( int i = 0; i < n_proc; i++ )
6785  {
6786  int ind = get_buffers( exchange_procs[i] );
6787  incoming++;
6788  PRINT_DEBUG_IRECV( procConfig.proc_rank(), buffProcs[ind], remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE,
6789  MB_MESG_ENTS_SIZE, incoming );
6790  success = MPI_Irecv( remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, buffProcs[ind],
6791  MB_MESG_ENTS_SIZE, procConfig.proc_comm(), &recvReqs[2 * ind] );
6792  if( success != MPI_SUCCESS )
6793  {
6794  MB_SET_ERR( MB_FAILURE, "Failed to post irecv in owned entity exchange" );
6795  }
6796  }
6797 
6798  return MB_SUCCESS;
6799 }

References buffProcs, get_buffers(), INITIAL_BUFF_SIZE, moab::MB_MESG_ENTS_SIZE, MB_SET_ERR, MB_SUCCESS, PRINT_DEBUG_IRECV, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, recvRemotehReqs, recvReqs, remoteOwnedBuffs, reset_all_buffers(), and sendReqs.
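
A minimal calling sketch, assuming pcomm is an existing ParallelComm and the neighbor ranks are known up front (the ranks below are illustrative only):

    std::vector< unsigned int > exchange_procs;
    exchange_procs.push_back( 1 );
    exchange_procs.push_back( 3 );
    moab::ErrorCode rval = pcomm->post_irecv( exchange_procs );
    if( moab::MB_SUCCESS != rval ) { /* handle error */ }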

◆ post_irecv() [2/2]

ErrorCode moab::ParallelComm::post_irecv ( std::vector< unsigned int > &  shared_procs,
std::set< unsigned int > &  recv_procs 
)

Definition at line 6801 of file ParallelComm.cpp.

6802 {
6803  // Set buffers
6804  int num = shared_procs.size();
6805  for( int i = 0; i < num; i++ )
6806  get_buffers( shared_procs[i] );
6807  reset_all_buffers();
6808  num = remoteOwnedBuffs.size();
6809  for( int i = 0; i < num; i++ )
6810  remoteOwnedBuffs[i]->set_stored_size();
6811  num = localOwnedBuffs.size();
6812  for( int i = 0; i < num; i++ )
6813  localOwnedBuffs[i]->set_stored_size();
6814 
6815  // Post ghost irecv's for entities from all communicating procs
6816  // Index requests the same as buffer/sharing procs indices
6817  int success;
6818  recvReqs.resize( 2 * buffProcs.size(), MPI_REQUEST_NULL );
6819  recvRemotehReqs.resize( 2 * buffProcs.size(), MPI_REQUEST_NULL );
6820  sendReqs.resize( 2 * buffProcs.size(), MPI_REQUEST_NULL );
6821 
6822  int incoming = 0;
6823  std::set< unsigned int >::iterator it = recv_procs.begin();
6824  std::set< unsigned int >::iterator eit = recv_procs.end();
6825  for( ; it != eit; ++it )
6826  {
6827  int ind = get_buffers( *it );
6828  incoming++;
6829  PRINT_DEBUG_IRECV( procConfig.proc_rank(), buffProcs[ind], remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE,
6830  MB_MESG_ENTS_SIZE, incoming );
6831  success = MPI_Irecv( remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, buffProcs[ind],
6832  MB_MESG_ENTS_SIZE, procConfig.proc_comm(), &recvReqs[2 * ind] );
6833  if( success != MPI_SUCCESS )
6834  {
6835  MB_SET_ERR( MB_FAILURE, "Failed to post irecv in owned entity exchange" );
6836  }
6837  }
6838 
6839  return MB_SUCCESS;
6840 }

References buffProcs, get_buffers(), INITIAL_BUFF_SIZE, localOwnedBuffs, moab::MB_MESG_ENTS_SIZE, MB_SET_ERR, MB_SUCCESS, PRINT_DEBUG_IRECV, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, recvRemotehReqs, recvReqs, remoteOwnedBuffs, reset_all_buffers(), and sendReqs.

◆ print_buffer()

ErrorCode moab::ParallelComm::print_buffer ( unsigned char *  buff_ptr,
int  mesg_type,
int  from_proc,
bool  sent 
)
private

Definition at line 2357 of file ParallelComm.cpp.

2358 {
2359  std::cerr << procConfig.proc_rank();
2360  if( sent )
2361  std::cerr << " sent";
2362  else
2363  std::cerr << " received";
2364  std::cerr << " message type " << mesg_tag << " to/from proc " << from_proc << "; contents:" << std::endl;
2365 
2366  int msg_length, num_ents;
2367  unsigned char* orig_ptr = buff_ptr;
2368  UNPACK_INT( buff_ptr, msg_length );
2369  std::cerr << msg_length << " bytes..." << std::endl;
2370 
2371  if( MB_MESG_ENTS_SIZE == mesg_tag || MB_MESG_ENTS_LARGE == mesg_tag )
2372  {
2373  // 1. # entities = E
2374  int i, j, k;
2375  std::vector< int > ps;
2376  std::vector< EntityHandle > hs;
2377 
2378  UNPACK_INT( buff_ptr, num_ents );
2379  std::cerr << num_ents << " entities..." << std::endl;
2380 
2381  // Save place where remote handle info starts, then scan forward to ents
2382  for( i = 0; i < num_ents; i++ )
2383  {
2384  UNPACK_INT( buff_ptr, j );
2385  if( 0 > j ) return MB_FAILURE;
2386  ps.resize( j );
2387  hs.resize( j );
2388  std::cerr << "Entity " << i << ", # procs = " << j << std::endl;
2389  UNPACK_INTS( buff_ptr, &ps[0], j );
2390  UNPACK_EH( buff_ptr, &hs[0], j );
2391  std::cerr << " Procs: ";
2392  for( k = 0; k < j; k++ )
2393  std::cerr << ps[k] << " ";
2394  std::cerr << std::endl;
2395  std::cerr << " Handles: ";
2396  for( k = 0; k < j; k++ )
2397  std::cerr << hs[k] << " ";
2398  std::cerr << std::endl;
2399 
2400  if( buff_ptr - orig_ptr > msg_length )
2401  {
2402  std::cerr << "End of buffer..." << std::endl;
2403  std::cerr.flush();
2404  return MB_FAILURE;
2405  }
2406  }
2407 
2408  while( true )
2409  {
2410  EntityType this_type = MBMAXTYPE;
2411  UNPACK_TYPE( buff_ptr, this_type );
2412  assert( this_type != MBENTITYSET );
2413 
2414  // MBMAXTYPE signifies end of entities data
2415  if( MBMAXTYPE == this_type ) break;
2416 
2417  // Get the number of ents
2418  int num_ents2, verts_per_entity = 0;
2419  UNPACK_INT( buff_ptr, num_ents2 );
2420 
2421  // Unpack the nodes per entity
2422  if( MBVERTEX != this_type && num_ents2 )
2423  {
2424  UNPACK_INT( buff_ptr, verts_per_entity );
2425  }
2426 
2427  std::cerr << "Type: " << CN::EntityTypeName( this_type ) << "; num_ents = " << num_ents2;
2428  if( MBVERTEX != this_type ) std::cerr << "; verts_per_ent = " << verts_per_entity;
2429  std::cerr << std::endl;
2430  if( num_ents2 < 0 || num_ents2 > msg_length )
2431  {
2432  std::cerr << "Wrong number of entities, returning." << std::endl;
2433  return MB_FAILURE;
2434  }
2435 
2436  for( int e = 0; e < num_ents2; e++ )
2437  {
2438  // Check for existing entity, otherwise make new one
2439  if( MBVERTEX == this_type )
2440  {
2441  double coords[3];
2442  UNPACK_DBLS( buff_ptr, coords, 3 );
2443  std::cerr << "xyz = " << coords[0] << ", " << coords[1] << ", " << coords[2] << std::endl;
2444  }
2445  else
2446  {
2447  EntityHandle connect[CN::MAX_NODES_PER_ELEMENT];
2448  assert( verts_per_entity <= CN::MAX_NODES_PER_ELEMENT );
2449  UNPACK_EH( buff_ptr, connect, verts_per_entity );
2450 
2451  // Update connectivity to local handles
2452  std::cerr << "Connectivity: ";
2453  for( k = 0; k < verts_per_entity; k++ )
2454  std::cerr << connect[k] << " ";
2455  std::cerr << std::endl;
2456  }
2457 
2458  if( buff_ptr - orig_ptr > msg_length )
2459  {
2460  std::cerr << "End of buffer..." << std::endl;
2461  std::cerr.flush();
2462  return MB_FAILURE;
2463  }
2464  }
2465  }
2466  }
2467  else if( MB_MESG_REMOTEH_SIZE == mesg_tag || MB_MESG_REMOTEH_LARGE == mesg_tag )
2468  {
2469  UNPACK_INT( buff_ptr, num_ents );
2470  std::cerr << num_ents << " entities..." << std::endl;
2471  if( 0 > num_ents || num_ents > msg_length )
2472  {
2473  std::cerr << "Wrong number of entities, returning." << std::endl;
2474  return MB_FAILURE;
2475  }
2476  std::vector< EntityHandle > L1hloc( num_ents ), L1hrem( num_ents );
2477  std::vector< int > L1p( num_ents );
2478  UNPACK_INTS( buff_ptr, &L1p[0], num_ents );
2479  UNPACK_EH( buff_ptr, &L1hrem[0], num_ents );
2480  UNPACK_EH( buff_ptr, &L1hloc[0], num_ents );
2481  std::cerr << num_ents << " Entity pairs; hremote/hlocal/proc: " << std::endl;
2482  for( int i = 0; i < num_ents; i++ )
2483  {
2484  EntityType etype = TYPE_FROM_HANDLE( L1hloc[i] );
2485  std::cerr << CN::EntityTypeName( etype ) << ID_FROM_HANDLE( L1hrem[i] ) << ", "
2486  << CN::EntityTypeName( etype ) << ID_FROM_HANDLE( L1hloc[i] ) << ", " << L1p[i] << std::endl;
2487  }
2488 
2489  if( buff_ptr - orig_ptr > msg_length )
2490  {
2491  std::cerr << "End of buffer..." << std::endl;
2492  std::cerr.flush();
2493  return MB_FAILURE;
2494  }
2495  }
2496  else if( mesg_tag == MB_MESG_TAGS_SIZE || mesg_tag == MB_MESG_TAGS_LARGE )
2497  {
2498  int num_tags, dum1, data_type, tag_size;
2499  UNPACK_INT( buff_ptr, num_tags );
2500  std::cerr << "Number of tags = " << num_tags << std::endl;
2501  for( int i = 0; i < num_tags; i++ )
2502  {
2503  std::cerr << "Tag " << i << ":" << std::endl;
2504  UNPACK_INT( buff_ptr, tag_size );
2505  UNPACK_INT( buff_ptr, dum1 );
2506  UNPACK_INT( buff_ptr, data_type );
2507  std::cerr << "Tag size, type, data type = " << tag_size << ", " << dum1 << ", " << data_type << std::endl;
2508  UNPACK_INT( buff_ptr, dum1 );
2509  std::cerr << "Default value size = " << dum1 << std::endl;
2510  buff_ptr += dum1;
2511  UNPACK_INT( buff_ptr, dum1 );
2512  std::string name( (char*)buff_ptr, dum1 );
2513  std::cerr << "Tag name = " << name.c_str() << std::endl;
2514  buff_ptr += dum1;
2515  UNPACK_INT( buff_ptr, num_ents );
2516  std::cerr << "Number of ents = " << num_ents << std::endl;
2517  std::vector< EntityHandle > tmp_buff( num_ents );
2518  UNPACK_EH( buff_ptr, &tmp_buff[0], num_ents );
2519  int tot_length = 0;
2520  for( int j = 0; j < num_ents; j++ )
2521  {
2522  EntityType etype = TYPE_FROM_HANDLE( tmp_buff[j] );
2523  std::cerr << CN::EntityTypeName( etype ) << " " << ID_FROM_HANDLE( tmp_buff[j] ) << ", tag = ";
2524  if( tag_size == MB_VARIABLE_LENGTH )
2525  {
2526  UNPACK_INT( buff_ptr, dum1 );
2527  tot_length += dum1;
2528  std::cerr << "(variable, length = " << dum1 << ")" << std::endl;
2529  }
2530  else if( data_type == MB_TYPE_DOUBLE )
2531  {
2532  double dum_dbl;
2533  UNPACK_DBL( buff_ptr, dum_dbl );
2534  std::cerr << dum_dbl << std::endl;
2535  }
2536  else if( data_type == MB_TYPE_INTEGER )
2537  {
2538  int dum_int;
2539  UNPACK_INT( buff_ptr, dum_int );
2540  std::cerr << dum_int << std::endl;
2541  }
2542  else if( data_type == MB_TYPE_OPAQUE )
2543  {
2544  std::cerr << "(opaque)" << std::endl;
2545  buff_ptr += tag_size;
2546  }
2547  else if( data_type == MB_TYPE_HANDLE )
2548  {
2549  EntityHandle dum_eh;
2550  UNPACK_EH( buff_ptr, &dum_eh, 1 );
2551  std::cerr << dum_eh << std::endl;
2552  }
2553  else if( data_type == MB_TYPE_BIT )
2554  {
2555  std::cerr << "(bit)" << std::endl;
2556  buff_ptr += tag_size;
2557  }
2558  }
2559  if( tag_size == MB_VARIABLE_LENGTH ) buff_ptr += tot_length;
2560  }
2561  }
2562  else
2563  {
2564  assert( false );
2565  return MB_FAILURE;
2566  }
2567 
2568  std::cerr.flush();
2569 
2570  return MB_SUCCESS;
2571 }

References moab::CN::EntityTypeName(), moab::ID_FROM_HANDLE(), moab::CN::MAX_NODES_PER_ELEMENT, moab::MB_MESG_ENTS_LARGE, moab::MB_MESG_ENTS_SIZE, moab::MB_MESG_REMOTEH_LARGE, moab::MB_MESG_REMOTEH_SIZE, moab::MB_MESG_TAGS_LARGE, moab::MB_MESG_TAGS_SIZE, MB_SUCCESS, MB_TYPE_BIT, MB_TYPE_DOUBLE, MB_TYPE_HANDLE, MB_TYPE_INTEGER, MB_TYPE_OPAQUE, MB_VARIABLE_LENGTH, MBENTITYSET, MBMAXTYPE, MBVERTEX, moab::ProcConfig::proc_rank(), procConfig, moab::TYPE_FROM_HANDLE(), moab::UNPACK_DBL(), moab::UNPACK_DBLS(), moab::UNPACK_EH(), moab::UNPACK_INT(), moab::UNPACK_INTS(), and moab::UNPACK_TYPE().

Referenced by exchange_ghost_cells(), exchange_owned_mesh(), and recv_entities().

◆ print_debug_irecv()

void moab::ParallelComm::print_debug_irecv ( int  to,
int  from,
unsigned char *  buff,
int  size,
int  tag,
int  incoming 
)
private

Definition at line 255 of file ParallelComm.cpp.

256 {
257  myDebug->tprintf( 3, "Irecv, %d<-%d, buffer ptr = %p, tag=%d, size=%d", to, from, (void*)buff, tag, sz );
258  if( tag < MB_MESG_REMOTEH_ACK )
259  myDebug->printf( 3, ", incoming1=%d\n", incoming );
260  else if( tag < MB_MESG_TAGS_ACK )
261  myDebug->printf( 3, ", incoming2=%d\n", incoming );
262  else
263  myDebug->printf( 3, ", incoming=%d\n", incoming );
264 }

References moab::MB_MESG_REMOTEH_ACK, moab::MB_MESG_TAGS_ACK, myDebug, moab::DebugOutput::printf(), and moab::DebugOutput::tprintf().

◆ print_debug_isend()

void moab::ParallelComm::print_debug_isend ( int  from,
int  to,
unsigned char *  buff,
int  tag,
int  size 
)
private

Definition at line 250 of file ParallelComm.cpp.

251 {
252  myDebug->tprintf( 3, "Isend, %d->%d, buffer ptr = %p, tag=%d, size=%d\n", from, to, (void*)buff, tag, sz );
253 }

References myDebug, and moab::DebugOutput::tprintf().

◆ print_debug_recd()

void moab::ParallelComm::print_debug_recd ( MPI_Status  status)
private

Definition at line 266 of file ParallelComm.cpp.

267 {
268  if( myDebug->get_verbosity() == 3 )
269  {
270  int this_count;
271  int success = MPI_Get_count( &status, MPI_UNSIGNED_CHAR, &this_count );
272  if( MPI_SUCCESS != success ) this_count = -1;
273  myDebug->tprintf( 3, "Received from %d, count = %d, tag = %d\n", status.MPI_SOURCE, this_count,
274  status.MPI_TAG );
275  }
276 }

References moab::DebugOutput::get_verbosity(), myDebug, and moab::DebugOutput::tprintf().

◆ print_debug_waitany()

void moab::ParallelComm::print_debug_waitany ( std::vector< MPI_Request > &  reqs,
int  tag,
int  proc 
)
private

Definition at line 278 of file ParallelComm.cpp.

279 {
280  if( myDebug->get_verbosity() == 3 )
281  {
282  myDebug->tprintf( 3, "Waitany, p=%d, ", proc );
283  if( tag < MB_MESG_REMOTEH_ACK )
284  myDebug->print( 3, ", recv_ent_reqs=" );
285  else if( tag < MB_MESG_TAGS_ACK )
286  myDebug->print( 3, ", recv_remoteh_reqs=" );
287  else
288  myDebug->print( 3, ", recv_tag_reqs=" );
289  for( unsigned int i = 0; i < reqs.size(); i++ )
290  myDebug->printf( 3, " %p", (void*)(intptr_t)reqs[i] );
291  myDebug->print( 3, "\n" );
292  }
293 }

References moab::DebugOutput::get_verbosity(), moab::MB_MESG_REMOTEH_ACK, moab::MB_MESG_TAGS_ACK, myDebug, moab::DebugOutput::print(), moab::DebugOutput::printf(), and moab::DebugOutput::tprintf().

◆ print_pstatus() [1/2]

void moab::ParallelComm::print_pstatus ( unsigned char  pstat)

print contents of pstatus value in human-readable form to std::cout

Definition at line 9339 of file ParallelComm.cpp.

9340 {
9341  std::string str;
9342  print_pstatus( pstat, str );
9343  std::cout << str.c_str() << std::endl;
9344 }

References print_pstatus().

◆ print_pstatus() [2/2]

void moab::ParallelComm::print_pstatus ( unsigned char  pstat,
std::string &  ostr 
)

print contents of pstatus value in human-readable form

Definition at line 9316 of file ParallelComm.cpp.

9317 {
9318  std::ostringstream str;
9319  int num = 0;
9320 #define ppstat( a, b ) \
9321  { \
9322  if( pstat & ( a ) ) \
9323  { \
9324  if( num ) str << ", "; \
9325  str << ( b ); \
9326  num++; \
9327  } \
9328  }
9329 
9330  ppstat( PSTATUS_NOT_OWNED, "NOT_OWNED" );
9331  ppstat( PSTATUS_SHARED, "SHARED" );
9332  ppstat( PSTATUS_MULTISHARED, "MULTISHARED" );
9333  ppstat( PSTATUS_INTERFACE, "INTERFACE" );
9334  ppstat( PSTATUS_GHOST, "GHOST" );
9335 
9336  ostr = str.str();
9337 }

References ppstat, PSTATUS_GHOST, PSTATUS_INTERFACE, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, and PSTATUS_SHARED.

Referenced by print_pstatus().
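
For example, a bitmask combining two of the flags handled above renders as a comma-separated list (sketch; pcomm assumed):

    std::string str;
    pcomm->print_pstatus( PSTATUS_SHARED | PSTATUS_GHOST, str );
    // str now contains "SHARED, GHOST"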

◆ proc_config() [1/2]

ProcConfig& moab::ParallelComm::proc_config ( )
inline

Get proc config for this communication object.

Definition at line 639 of file ParallelComm.hpp.

640  {
641  return procConfig;
642  }

References procConfig.

◆ proc_config() [2/2]

const ProcConfig& moab::ParallelComm::proc_config ( ) const
inline

Get proc config for this communication object.

Definition at line 633 of file ParallelComm.hpp.

634  {
635  return procConfig;
636  }

References procConfig.

Referenced by check_clean_iface(), moab::NCWriteGCRM::collect_mesh_info(), moab::ScdNCWriteHelper::collect_mesh_info(), moab::NCWriteHOMME::collect_mesh_info(), moab::NCWriteMPAS::collect_mesh_info(), collective_sync_partition(), comm(), moab::WriteHDF5Parallel::communicate_shared_set_data(), moab::WriteHDF5Parallel::communicate_shared_set_ids(), moab::ParCommGraph::compute_partition(), create_coarse_mesh(), moab::WriteHDF5Parallel::create_dataset(), create_fine_mesh(), create_interface_sets(), moab::NCHelperESMF::create_mesh(), moab::NCHelperGCRM::create_mesh(), moab::NCHelperHOMME::create_mesh(), moab::NCHelperMPAS::create_mesh(), moab::NCHelperScrip::create_mesh(), moab::WriteHDF5Parallel::create_meshset_tables(), create_part(), moab::ReadParallel::create_partition_sets(), moab::WriteHDF5Parallel::create_tag_tables(), moab::WriteHDF5Parallel::debug_barrier_line(), moab::ReadParallel::delete_nonlocal_entities(), moab::Coupler::do_normalization(), moab::WriteHDF5Parallel::exchange_file_ids(), moab::ReadHDF5::find_sets_containing(), gather_data(), moab::ParallelData::get_interface_sets(), moab::Coupler::get_matching_entities(), get_owner_handle(), get_owning_part(), get_part_handle(), get_part_id(), get_proc_nvecs(), moab::ScdInterface::get_shared_vertices(), get_sharing_parts(), iMOAB_SetDoubleTagStorageWithGid(), moab::NCHelperDomain::init_mesh_vals(), moab::NCHelperEuler::init_mesh_vals(), moab::NCHelperFV::init_mesh_vals(), moab::Coupler::initialize_tree(), moab::Coupler::interpolate(), moab::DataCoupler::interpolate(), intersection_at_level(), moab::ReadParallel::load_file(), moab::ReadDamsel::load_file(), moab::Coupler::locate_points(), main(), moab::WriteHDF5Parallel::negotiate_type_list(), pack_shared_handles(), moab::WriteHDF5Parallel::parallel_create_file(), moab::ReadHDF5::print_times(), moab::WriteHDF5Parallel::print_times(), rank(), moab::ReadHDF5::read_all_set_meta(), moab::ReadParallel::ReadParallel(), resolve_shared_ents(), resolve_shared_sets(), moab::ReadHDF5::set_up_read(), size(), moab::ScdInterface::tag_shared_vertices(), tag_shared_verts(), test_intx_in_parallel_elem_based(), update_remote_data_old(), and moab::NCWriteHelper::write_set_variables().
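
A typical use is recovering the rank and communicator backing this ParallelComm (sketch; pcomm assumed):

    const moab::ProcConfig& pc = pcomm->proc_config();
    unsigned int my_rank = pc.proc_rank();
    MPI_Comm comm = pc.proc_comm();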

◆ pstatus_tag()

◆ rank()

◆ recv_buffer()

ErrorCode moab::ParallelComm::recv_buffer ( int  mesg_tag_expected,
const MPI_Status &  mpi_status,
Buffer recv_buff,
MPI_Request &  recv_2nd_req,
MPI_Request &  ack_req,
int &  this_incoming,
Buffer send_buff,
MPI_Request &  send_req,
MPI_Request &  sent_ack_req,
bool &  done,
Buffer next_buff = NULL,
int  next_tag = -1,
MPI_Request *  next_req = NULL,
int *  next_incoming = NULL 
)
private

process incoming message; if longer than the initial size, post recv for next part then send ack; if ack, send second part; else indicate that we're done and buffer is ready for processing

Definition at line 6139 of file ParallelComm.cpp.

6153 {
6154  // Process a received message; if there will be more coming,
6155  // post a receive for 2nd part then send an ack message
6156  int from_proc = mpi_status.MPI_SOURCE;
6157  int success;
6158 
6159  // Set the buff_ptr on the recv_buffer; needs to point beyond any
6160  // valid data already in the buffer
6161  recv_buff->reset_ptr( std::min( recv_buff->get_stored_size(), (int)recv_buff->alloc_size ) );
6162 
6163  if( mpi_status.MPI_TAG == mesg_tag_expected && recv_buff->get_stored_size() > (int)INITIAL_BUFF_SIZE )
6164  {
6165  // 1st message & large - allocate buffer, post irecv for 2nd message,
6166  // then send ack
6167  recv_buff->reserve( recv_buff->get_stored_size() );
6168  assert( recv_buff->alloc_size > INITIAL_BUFF_SIZE );
6169 
6170  // Will expect a 2nd message
6171  this_incoming++;
6172 
6173  PRINT_DEBUG_IRECV( procConfig.proc_rank(), from_proc, recv_buff->mem_ptr + INITIAL_BUFF_SIZE,
6174  recv_buff->get_stored_size() - INITIAL_BUFF_SIZE, mesg_tag_expected + 1, this_incoming );
6175  success = MPI_Irecv( recv_buff->mem_ptr + INITIAL_BUFF_SIZE, recv_buff->get_stored_size() - INITIAL_BUFF_SIZE,
6176  MPI_UNSIGNED_CHAR, from_proc, mesg_tag_expected + 1, procConfig.proc_comm(), &recv_req );
6177  if( success != MPI_SUCCESS )
6178  {
6179  MB_SET_ERR( MB_FAILURE, "Failed to post 2nd iRecv in ghost exchange" );
6180  }
6181 
6182  // Send ack, doesn't matter what data actually is
6183  PRINT_DEBUG_ISEND( procConfig.proc_rank(), from_proc, recv_buff->mem_ptr, mesg_tag_expected - 1,
6184  sizeof( int ) );
6185  success = MPI_Isend( recv_buff->mem_ptr, sizeof( int ), MPI_UNSIGNED_CHAR, from_proc, mesg_tag_expected - 1,
6186  procConfig.proc_comm(), &sent_ack_req );
6187  if( success != MPI_SUCCESS )
6188  {
6189  MB_SET_ERR( MB_FAILURE, "Failed to send ack in ghost exchange" );
6190  }
6191  }
6192  else if( mpi_status.MPI_TAG == mesg_tag_expected - 1 )
6193  {
6194  // Got an ack back, send the 2nd half of message
6195 
6196  // Should be a large message if we got this
6197  assert( *( (size_t*)send_buff->mem_ptr ) > INITIAL_BUFF_SIZE );
6198 
6199  // Post irecv for next message, then send 2nd message
6200  if( next_buff )
6201  {
6202  // We'll expect a return message
6203  ( *next_incoming )++;
6204  PRINT_DEBUG_IRECV( procConfig.proc_rank(), from_proc, next_buff->mem_ptr, INITIAL_BUFF_SIZE, next_tag,
6205  *next_incoming );
6206 
6207  success = MPI_Irecv( next_buff->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, from_proc, next_tag,
6208  procConfig.proc_comm(), next_req );
6209  if( success != MPI_SUCCESS )
6210  {
6211  MB_SET_ERR( MB_FAILURE, "Failed to post next irecv in ghost exchange" );
6212  }
6213  }
6214 
6215  // Send 2nd message
6216  PRINT_DEBUG_ISEND( procConfig.proc_rank(), from_proc, send_buff->mem_ptr + INITIAL_BUFF_SIZE,
6217  mesg_tag_expected + 1, send_buff->get_stored_size() - INITIAL_BUFF_SIZE );
6218 
6219  assert( send_buff->get_stored_size() - INITIAL_BUFF_SIZE < send_buff->alloc_size &&
6220  0 <= send_buff->get_stored_size() );
6221  success = MPI_Isend( send_buff->mem_ptr + INITIAL_BUFF_SIZE, send_buff->get_stored_size() - INITIAL_BUFF_SIZE,
6222  MPI_UNSIGNED_CHAR, from_proc, mesg_tag_expected + 1, procConfig.proc_comm(), &send_req );
6223  if( success != MPI_SUCCESS )
6224  {
6225  MB_SET_ERR( MB_FAILURE, "Failed to send 2nd message in ghost exchange" );
6226  }
6227  }
6228  else if( ( mpi_status.MPI_TAG == mesg_tag_expected && recv_buff->get_stored_size() <= (int)INITIAL_BUFF_SIZE ) ||
6229  mpi_status.MPI_TAG == mesg_tag_expected + 1 )
6230  {
6231  // Message completely received - signal that we're done
6232  done = true;
6233  }
6234 
6235  return MB_SUCCESS;
6236 }

References moab::ParallelComm::Buffer::alloc_size, moab::ParallelComm::Buffer::get_stored_size(), INITIAL_BUFF_SIZE, MB_SET_ERR, MB_SUCCESS, moab::ParallelComm::Buffer::mem_ptr, PRINT_DEBUG_IRECV, PRINT_DEBUG_ISEND, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, moab::ParallelComm::Buffer::reserve(), and moab::ParallelComm::Buffer::reset_ptr().

Referenced by exchange_ghost_cells(), exchange_owned_mesh(), exchange_tags(), recv_entities(), recv_messages(), recv_remote_handle_messages(), reduce_tags(), send_recv_entities(), and settle_intersection_points().
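
The three MPI tags involved are related arithmetically to the expected tag; a sketch of the handshake, writing T for mesg_tag_expected:

    // T     : first message, at most INITIAL_BUFF_SIZE bytes, carries the total stored size
    // T - 1 : ack returned by the receiver when the stored size exceeds INITIAL_BUFF_SIZE
    // T + 1 : second message carrying the remaining (stored size - INITIAL_BUFF_SIZE) bytes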

◆ recv_entities() [1/2]

ErrorCode moab::ParallelComm::recv_entities ( const int  from_proc,
const bool  store_remote_handles,
const bool  is_iface,
Range final_ents,
int &  incoming1,
int &  incoming2,
std::vector< std::vector< EntityHandle > > &  L1hloc,
std::vector< std::vector< EntityHandle > > &  L1hrem,
std::vector< std::vector< int > > &  L1p,
std::vector< EntityHandle > &  L2hloc,
std::vector< EntityHandle > &  L2hrem,
std::vector< unsigned int > &  L2p,
std::vector< MPI_Request > &  recv_remoteh_reqs,
bool  wait_all = true 
)

Receive entities from another processor, optionally waiting until it's done.

Receive entities from another processor, with adjs, sets, and tags. If store_remote_handles is true, this call sends back handles assigned to the entities received.

Parameters
from_proc - Source processor
store_remote_handles - If true, send message with new entity handles to source processor (currently unsupported)
final_ents - Range containing all entities received
incoming - Tracks whether any messages are still coming to this processor (newly added)
wait_all - If true, wait until all messages received/sent complete

Definition at line 1075 of file ParallelComm.cpp.

1089 {
1090 #ifndef MOAB_HAVE_MPI
1091  return MB_FAILURE;
1092 #else
1093  // Non-blocking receive for the first message (having size info)
1094  int ind1 = get_buffers( from_proc );
1095  incoming1++;
1096  PRINT_DEBUG_IRECV( procConfig.proc_rank(), from_proc, remoteOwnedBuffs[ind1]->mem_ptr, INITIAL_BUFF_SIZE,
1097  MB_MESG_ENTS_SIZE, incoming1 );
1098  int success = MPI_Irecv( remoteOwnedBuffs[ind1]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, from_proc,
1099  MB_MESG_ENTS_SIZE, procConfig.proc_comm(), &recvReqs[2 * ind1] );
1100  if( success != MPI_SUCCESS )
1101  {
1102  MB_SET_ERR( MB_FAILURE, "Failed to post irecv in ghost exchange" );
1103  }
1104 
1105  // Receive messages in while loop
1106  return recv_messages( from_proc, store_remote_handles, is_iface, final_ents, incoming1, incoming2, L1hloc, L1hrem,
1107  L1p, L2hloc, L2hrem, L2p, recv_remoteh_reqs );
1108 #endif
1109 }

References get_buffers(), INITIAL_BUFF_SIZE, moab::MB_MESG_ENTS_SIZE, MB_SET_ERR, PRINT_DEBUG_IRECV, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, recv_messages(), recvReqs, and remoteOwnedBuffs.

◆ recv_entities() [2/2]

ErrorCode moab::ParallelComm::recv_entities ( std::set< unsigned int > &  recv_procs,
int  incoming1,
int  incoming2,
const bool  store_remote_handles,
const bool  migrate = false 
)

Definition at line 1111 of file ParallelComm.cpp.

1116 {
1117  //===========================================
1118  // Receive/unpack new entities
1119  //===========================================
1120  // Number of incoming messages is the number of procs we communicate with
1121  int success, ind, i;
1122  ErrorCode result;
1123  MPI_Status status;
1124  std::vector< std::vector< EntityHandle > > recd_ents( buffProcs.size() );
1125  std::vector< std::vector< EntityHandle > > L1hloc( buffProcs.size() ), L1hrem( buffProcs.size() );
1126  std::vector< std::vector< int > > L1p( buffProcs.size() );
1127  std::vector< EntityHandle > L2hloc, L2hrem;
1128  std::vector< unsigned int > L2p;
1129  std::vector< EntityHandle > new_ents;
1130 
1131  while( incoming1 )
1132  {
1133  // Wait for all recvs of ents before proceeding to sending remote handles,
1134  // b/c some procs may have sent to a 3rd proc ents owned by me;
1135  PRINT_DEBUG_WAITANY( recvReqs, MB_MESG_ENTS_SIZE, procConfig.proc_rank() );
1136 
1137  success = MPI_Waitany( 2 * buffProcs.size(), &recvReqs[0], &ind, &status );
1138  if( MPI_SUCCESS != success )
1139  {
1140  MB_SET_ERR( MB_FAILURE, "Failed in waitany in owned entity exchange" );
1141  }
1142 
1143  PRINT_DEBUG_RECD( status );
1144 
1145  // OK, received something; decrement incoming counter
1146  incoming1--;
1147  bool done = false;
1148 
1149  // In case ind is for ack, we need index of one before it
1150  unsigned int base_ind = 2 * ( ind / 2 );
1151  result = recv_buffer( MB_MESG_ENTS_SIZE, status, remoteOwnedBuffs[ind / 2], recvReqs[ind], recvReqs[ind + 1],
1152  incoming1, localOwnedBuffs[ind / 2], sendReqs[base_ind], sendReqs[base_ind + 1], done,
1153  ( store_remote_handles ? localOwnedBuffs[ind / 2] : NULL ), MB_MESG_REMOTEH_SIZE,
1154  &recvRemotehReqs[base_ind], &incoming2 );MB_CHK_SET_ERR( result, "Failed to receive buffer" );
1155 
1156  if( done )
1157  {
1158  if( myDebug->get_verbosity() == 4 )
1159  {
1160  msgs.resize( msgs.size() + 1 );
1161  msgs.back() = new Buffer( *remoteOwnedBuffs[ind / 2] );
1162  }
1163 
1164  // Message completely received - process buffer that was sent
1165  remoteOwnedBuffs[ind / 2]->reset_ptr( sizeof( int ) );
1166  result = unpack_buffer( remoteOwnedBuffs[ind / 2]->buff_ptr, store_remote_handles, buffProcs[ind / 2],
1167  ind / 2, L1hloc, L1hrem, L1p, L2hloc, L2hrem, L2p, new_ents, true );
1168  if( MB_SUCCESS != result )
1169  {
1170  std::cout << "Failed to unpack entities. Buffer contents:" << std::endl;
1171  print_buffer( remoteOwnedBuffs[ind / 2]->mem_ptr, MB_MESG_ENTS_SIZE, buffProcs[ind / 2], false );
1172  return result;
1173  }
1174 
1175  if( recvReqs.size() != 2 * buffProcs.size() )
1176  {
1177  // Post irecv's for remote handles from new proc
1178  recvRemotehReqs.resize( 2 * buffProcs.size(), MPI_REQUEST_NULL );
1179  for( i = recvReqs.size(); i < (int)( 2 * buffProcs.size() ); i += 2 )
1180  {
1181  localOwnedBuffs[i / 2]->reset_buffer();
1182  incoming2++;
1183  PRINT_DEBUG_IRECV( procConfig.proc_rank(), buffProcs[i / 2], localOwnedBuffs[i / 2]->mem_ptr,
1184  INITIAL_BUFF_SIZE, MB_MESG_REMOTEH_SIZE, incoming2 );
1185  success = MPI_Irecv( localOwnedBuffs[i / 2]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR,
1186  buffProcs[i / 2], MB_MESG_REMOTEH_SIZE, procConfig.proc_comm(),
1187  &recvRemotehReqs[i] );
1188  if( success != MPI_SUCCESS )
1189  {
1190  MB_SET_ERR( MB_FAILURE, "Failed to post irecv for remote handles in ghost exchange" );
1191  }
1192  }
1193  recvReqs.resize( 2 * buffProcs.size(), MPI_REQUEST_NULL );
1194  sendReqs.resize( 2 * buffProcs.size(), MPI_REQUEST_NULL );
1195  }
1196  }
1197  }
1198 
1199  // Assign and remove newly created elements from/to receive processor
1200  result = assign_entities_part( new_ents, procConfig.proc_rank() );MB_CHK_SET_ERR( result, "Failed to assign entities to part" );
1201  if( migrate )
1202  {
1203  // result = remove_entities_part(allsent, procConfig.proc_rank());MB_CHK_SET_ERR(ressult,
1204  // "Failed to remove entities to part");
1205  }
1206 
1207  // Add requests for any new addl procs
1208  if( recvReqs.size() != 2 * buffProcs.size() )
1209  {
1210  // Shouldn't get here...
1211  MB_SET_ERR( MB_FAILURE, "Requests length doesn't match proc count in entity exchange" );
1212  }
1213 
1214 #ifdef MOAB_HAVE_MPE
1215  if( myDebug->get_verbosity() == 2 )
1216  {
1217  MPE_Log_event( ENTITIES_END, procConfig.proc_rank(), "Ending recv entities." );
1218  }
1219 #endif
1220 
1221  //===========================================
1222  // Send local handles for new entity to owner
1223  //===========================================
1224  std::set< unsigned int >::iterator it = recv_procs.begin();
1225  std::set< unsigned int >::iterator eit = recv_procs.end();
1226  for( ; it != eit; ++it )
1227  {
1228  ind = get_buffers( *it );
1229  // Reserve space on front for size and for initial buff size
1230  remoteOwnedBuffs[ind]->reset_buffer( sizeof( int ) );
1231 
1232  result = pack_remote_handles( L1hloc[ind], L1hrem[ind], L1p[ind], buffProcs[ind], remoteOwnedBuffs[ind] );MB_CHK_SET_ERR( result, "Failed to pack remote handles" );
1233  remoteOwnedBuffs[ind]->set_stored_size();
1234 
1235  if( myDebug->get_verbosity() == 4 )
1236  {
1237  msgs.resize( msgs.size() + 1 );
1238  msgs.back() = new Buffer( *remoteOwnedBuffs[ind] );
1239  }
1240  result = send_buffer( buffProcs[ind], remoteOwnedBuffs[ind], MB_MESG_REMOTEH_SIZE, sendReqs[2 * ind],
1241  recvRemotehReqs[2 * ind + 1], &ackbuff, incoming2 );MB_CHK_SET_ERR( result, "Failed to send remote handles" );
1242  }
1243 
1244  //===========================================
1245  // Process remote handles of my ghosteds
1246  //===========================================
1247  while( incoming2 )
1248  {
1249  PRINT_DEBUG_WAITANY( recvRemotehReqs, MB_MESG_REMOTEH_SIZE, procConfig.proc_rank() );
1250  success = MPI_Waitany( 2 * buffProcs.size(), &recvRemotehReqs[0], &ind, &status );
1251  if( MPI_SUCCESS != success )
1252  {
1253  MB_SET_ERR( MB_FAILURE, "Failed in waitany in owned entity exchange" );
1254  }
1255 
1256  // OK, received something; decrement incoming counter
1257  incoming2--;
1258 
1259  PRINT_DEBUG_RECD( status );
1260  bool done = false;
1261  unsigned int base_ind = 2 * ( ind / 2 );
1262  result = recv_buffer( MB_MESG_REMOTEH_SIZE, status, localOwnedBuffs[ind / 2], recvRemotehReqs[ind],
1263  recvRemotehReqs[ind + 1], incoming2, remoteOwnedBuffs[ind / 2], sendReqs[base_ind],
1264  sendReqs[base_ind + 1], done );MB_CHK_SET_ERR( result, "Failed to receive remote handles" );
1265  if( done )
1266  {
1267  // Incoming remote handles
1268  if( myDebug->get_verbosity() == 4 )
1269  {
1270  msgs.resize( msgs.size() + 1 );
1271  msgs.back() = new Buffer( *localOwnedBuffs[ind] );
1272  }
1273 
1274  localOwnedBuffs[ind / 2]->reset_ptr( sizeof( int ) );
1275  result =
1276  unpack_remote_handles( buffProcs[ind / 2], localOwnedBuffs[ind / 2]->buff_ptr, L2hloc, L2hrem, L2p );MB_CHK_SET_ERR( result, "Failed to unpack remote handles" );
1277  }
1278  }
1279 
1280 #ifdef MOAB_HAVE_MPE
1281  if( myDebug->get_verbosity() == 2 )
1282  {
1283  MPE_Log_event( RHANDLES_END, procConfig.proc_rank(), "Ending remote handles." );
1284  MPE_Log_event( OWNED_END, procConfig.proc_rank(), "Ending recv entities (still doing checks)." );
1285  }
1286 #endif
1287  myDebug->tprintf( 1, "Exiting recv_entities.\n" );
1288 
1289  return MB_SUCCESS;
1290 }

References ackbuff, assign_entities_part(), buffProcs, ErrorCode, get_buffers(), moab::DebugOutput::get_verbosity(), INITIAL_BUFF_SIZE, localOwnedBuffs, MB_CHK_SET_ERR, moab::MB_MESG_ENTS_SIZE, moab::MB_MESG_REMOTEH_SIZE, MB_SET_ERR, MB_SUCCESS, MPE_Log_event, moab::msgs, myDebug, pack_remote_handles(), print_buffer(), PRINT_DEBUG_IRECV, PRINT_DEBUG_RECD, PRINT_DEBUG_WAITANY, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, recv_buffer(), recvRemotehReqs, recvReqs, remoteOwnedBuffs, send_buffer(), sendReqs, moab::DebugOutput::tprintf(), unpack_buffer(), and unpack_remote_handles().

◆ recv_messages()

ErrorCode moab::ParallelComm::recv_messages ( const int  from_proc,
const bool  store_remote_handles,
const bool  is_iface,
Range final_ents,
int &  incoming1,
int &  incoming2,
std::vector< std::vector< EntityHandle > > &  L1hloc,
std::vector< std::vector< EntityHandle > > &  L1hrem,
std::vector< std::vector< int > > &  L1p,
std::vector< EntityHandle > &  L2hloc,
std::vector< EntityHandle > &  L2hrem,
std::vector< unsigned int > &  L2p,
std::vector< MPI_Request > &  recv_remoteh_reqs 
)

Receive messages from another processor in while loop.

Receive messages from another processor.

Parameters
from_proc - Source processor
store_remote_handles - If true, send message with new entity handles to source processor (currently unsupported)
final_ents - Range containing all entities received
incoming - Tracks whether any messages are still coming to this processor (newly added)

Definition at line 1292 of file ParallelComm.cpp.

1305 {
1306 #ifndef MOAB_HAVE_MPI
1307  return MB_FAILURE;
1308 #else
1309  MPI_Status status;
1310  ErrorCode result;
1311  int ind1 = get_buffers( from_proc );
1312  int success, ind2;
1313  std::vector< EntityHandle > new_ents;
1314 
1315  // Wait and receive messages
1316  while( incoming1 )
1317  {
1318  PRINT_DEBUG_WAITANY( recvReqs, MB_MESG_TAGS_SIZE, procConfig.proc_rank() );
1319  success = MPI_Waitany( 2, &recvReqs[2 * ind1], &ind2, &status );
1320  if( MPI_SUCCESS != success )
1321  {
1322  MB_SET_ERR( MB_FAILURE, "Failed in waitany in recv_messages" );
1323  }
1324 
1325  PRINT_DEBUG_RECD( status );
1326 
1327  // OK, received something; decrement incoming counter
1328  incoming1--;
1329  bool done = false;
1330 
1331  // In case ind is for ack, we need index of one before it
1332  ind2 += 2 * ind1;
1333  unsigned int base_ind = 2 * ( ind2 / 2 );
1334 
1335  result = recv_buffer( MB_MESG_ENTS_SIZE, status, remoteOwnedBuffs[ind2 / 2],
1336  // recvbuff,
1337  recvReqs[ind2], recvReqs[ind2 + 1], incoming1, localOwnedBuffs[ind2 / 2],
1338  sendReqs[base_ind], sendReqs[base_ind + 1], done,
1339  ( !is_iface && store_remote_handles ? localOwnedBuffs[ind2 / 2] : NULL ),
1340  MB_MESG_REMOTEH_SIZE, &recv_remoteh_reqs[base_ind], &incoming2 );MB_CHK_SET_ERR( result, "Failed to receive buffer" );
1341 
1342  if( done )
1343  {
1344  // If it is done, unpack buffer
1345  remoteOwnedBuffs[ind2 / 2]->reset_ptr( sizeof( int ) );
1346  result = unpack_buffer( remoteOwnedBuffs[ind2 / 2]->buff_ptr, store_remote_handles, from_proc, ind2 / 2,
1347  L1hloc, L1hrem, L1p, L2hloc, L2hrem, L2p, new_ents );MB_CHK_SET_ERR( result, "Failed to unpack buffer in recv_messages" );
1348 
1349  std::copy( new_ents.begin(), new_ents.end(), range_inserter( final_ents ) );
1350 
1351  // Send local handles for new elements to owner
1352  // Reserve space on front for size and for initial buff size
1353  remoteOwnedBuffs[ind2 / 2]->reset_buffer( sizeof( int ) );
1354 
1355  result = pack_remote_handles( L1hloc[ind2 / 2], L1hrem[ind2 / 2], L1p[ind2 / 2], from_proc,
1356  remoteOwnedBuffs[ind2 / 2] );MB_CHK_SET_ERR( result, "Failed to pack remote handles" );
1357  remoteOwnedBuffs[ind2 / 2]->set_stored_size();
1358 
1359  result = send_buffer( buffProcs[ind2 / 2], remoteOwnedBuffs[ind2 / 2], MB_MESG_REMOTEH_SIZE, sendReqs[ind2],
1360  recv_remoteh_reqs[ind2 + 1], (int*)( localOwnedBuffs[ind2 / 2]->mem_ptr ),
1361  //&ackbuff,
1362  incoming2 );MB_CHK_SET_ERR( result, "Failed to send remote handles" );
1363  }
1364  }
1365 
1366  return MB_SUCCESS;
1367 #endif
1368 }

References buffProcs, ErrorCode, get_buffers(), localOwnedBuffs, MB_CHK_SET_ERR, moab::MB_MESG_ENTS_SIZE, moab::MB_MESG_REMOTEH_SIZE, moab::MB_MESG_TAGS_SIZE, MB_SET_ERR, MB_SUCCESS, pack_remote_handles(), PRINT_DEBUG_RECD, PRINT_DEBUG_WAITANY, moab::ProcConfig::proc_rank(), procConfig, recv_buffer(), recvReqs, remoteOwnedBuffs, send_buffer(), sendReqs, and unpack_buffer().

Referenced by recv_entities().

◆ recv_remote_handle_messages()

ErrorCode moab::ParallelComm::recv_remote_handle_messages ( const int  from_proc,
int &  incoming2,
std::vector< EntityHandle > &  L2hloc,
std::vector< EntityHandle > &  L2hrem,
std::vector< unsigned int > &  L2p,
std::vector< MPI_Request > &  recv_remoteh_reqs 
)

Definition at line 1370 of file ParallelComm.cpp.

1376 {
1377 #ifndef MOAB_HAVE_MPI
1378  return MB_FAILURE;
1379 #else
1380  MPI_Status status;
1381  ErrorCode result;
1382  int ind1 = get_buffers( from_proc );
1383  int success, ind2;
1384 
1385  while( incoming2 )
1386  {
1387  PRINT_DEBUG_WAITANY( recv_remoteh_reqs, MB_MESG_REMOTEH_SIZE, procConfig.proc_rank() );
1388  success = MPI_Waitany( 2, &recv_remoteh_reqs[2 * ind1], &ind2, &status );
1389  if( MPI_SUCCESS != success )
1390  {
1391  MB_SET_ERR( MB_FAILURE, "Failed in waitany in recv_remote_handle_messages" );
1392  }
1393 
1394  // OK, received something; decrement incoming counter
1395  incoming2--;
1396 
1397  PRINT_DEBUG_RECD( status );
1398 
1399  bool done = false;
1400  ind2 += 2 * ind1;
1401  unsigned int base_ind = 2 * ( ind2 / 2 );
1402  result = recv_buffer( MB_MESG_REMOTEH_SIZE, status, localOwnedBuffs[ind2 / 2], recv_remoteh_reqs[ind2],
1403  recv_remoteh_reqs[ind2 + 1], incoming2, remoteOwnedBuffs[ind2 / 2], sendReqs[base_ind],
1404  sendReqs[base_ind + 1], done );MB_CHK_SET_ERR( result, "Failed to receive remote handles" );
1405  if( done )
1406  {
1407  // Incoming remote handles
1408  localOwnedBuffs[ind2 / 2]->reset_ptr( sizeof( int ) );
1409  result =
1410  unpack_remote_handles( buffProcs[ind2 / 2], localOwnedBuffs[ind2 / 2]->buff_ptr, L2hloc, L2hrem, L2p );MB_CHK_SET_ERR( result, "Failed to unpack remote handles" );
1411  }
1412  }
1413 
1414  return MB_SUCCESS;
1415 #endif
1416 }

References buffProcs, ErrorCode, get_buffers(), localOwnedBuffs, MB_CHK_SET_ERR, moab::MB_MESG_REMOTEH_SIZE, MB_SET_ERR, MB_SUCCESS, PRINT_DEBUG_RECD, PRINT_DEBUG_WAITANY, moab::ProcConfig::proc_rank(), procConfig, recv_buffer(), remoteOwnedBuffs, sendReqs, and unpack_remote_handles().

◆ reduce()

template<class T >
ErrorCode moab::ParallelComm::reduce ( const MPI_Op  mpi_op,
int  num_ents,
void *  old_vals,
void *  new_vals 
)
private

Definition at line 3843 of file ParallelComm.cpp.

3844 {
3845  T* old_tmp = reinterpret_cast< T* >( old_vals );
3846  // T *new_tmp = reinterpret_cast<T*>(new_vals);
3847  // new vals pointer needs to be aligned , some compilers will optimize and will shift
3848 
3849  std::vector< T > new_values;
3850  new_values.resize( num_ents );
3851  memcpy( &new_values[0], new_vals, num_ents * sizeof( T ) );
3852  T* new_tmp = &new_values[0];
3853 
3854  if( mpi_op == MPI_SUM )
3855  std::transform( old_tmp, old_tmp + num_ents, new_tmp, new_tmp, ADD< T > );
3856  else if( mpi_op == MPI_PROD )
3857  std::transform( old_tmp, old_tmp + num_ents, new_tmp, new_tmp, MULT< T > );
3858  else if( mpi_op == MPI_MAX )
3859  std::transform( old_tmp, old_tmp + num_ents, new_tmp, new_tmp, MAX< T > );
3860  else if( mpi_op == MPI_MIN )
3861  std::transform( old_tmp, old_tmp + num_ents, new_tmp, new_tmp, MIN< T > );
3862  else if( mpi_op == MPI_LAND )
3863  std::transform( old_tmp, old_tmp + num_ents, new_tmp, new_tmp, LAND< T > );
3864  else if( mpi_op == MPI_LOR )
3865  std::transform( old_tmp, old_tmp + num_ents, new_tmp, new_tmp, LOR< T > );
3866  else if( mpi_op == MPI_LXOR )
3867  std::transform( old_tmp, old_tmp + num_ents, new_tmp, new_tmp, LXOR< T > );
3868  else if( mpi_op == MPI_BAND || mpi_op == MPI_BOR || mpi_op == MPI_BXOR )
3869  {
3870  std::cerr << "Bitwise operations not allowed in tag reductions." << std::endl;
3871  return MB_FAILURE;
3872  }
3873  else if( mpi_op != MPI_OP_NULL )
3874  {
3875  std::cerr << "Unknown MPI operation type." << std::endl;
3876  return MB_TYPE_OUT_OF_RANGE;
3877  }
3878 
3879  // copy now the result back where it should be
3880  memcpy( new_vals, new_tmp, num_ents * sizeof( T ) );
3881  std::vector< T >().swap( new_values ); // way to release allocated vector
3882 
3883  return MB_SUCCESS;
3884 }

References MB_SUCCESS, MB_TYPE_OUT_OF_RANGE, and T.
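
The element-wise combination above can be pictured with std::plus standing in for the ADD functor (an illustrative assumption; the real functors are MOAB internals):

    double old_vals[3] = { 1.0, 2.0, 3.0 };
    double new_vals[3] = { 10.0, 20.0, 30.0 };
    // Combine old and new values pairwise, overwriting new_vals, as reduce< double > does for MPI_SUM
    std::transform( old_vals, old_vals + 3, new_vals, new_vals, std::plus< double >() );
    // new_vals is now { 11.0, 22.0, 33.0 }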

◆ reduce_tags() [1/3]

ErrorCode moab::ParallelComm::reduce_tags ( const char *  tag_name,
const MPI_Op  mpi_op,
const Range entities 
)
inline

Perform data reduction operation for all shared and ghosted entities. Same as the std::vector variant, except for one tag specified by name.

Parameters
tag_name - Name of tag to be reduced
mpi_op - Operation type
entities - Entities on which reduction will be made; if empty, operates on all shared entities

Definition at line 1611 of file ParallelComm.hpp.

1612 {
1613  // get the tag handle
1614  std::vector< Tag > tags( 1 );
1615  ErrorCode result = mbImpl->tag_get_handle( tag_name, 0, MB_TYPE_OPAQUE, tags[0], MB_TAG_ANY );
1616  if( MB_SUCCESS != result )
1617  return result;
1618  else if( !tags[0] )
1619  return MB_TAG_NOT_FOUND;
1620 
1621  return reduce_tags( tags, tags, mpi_op, entities );
1622 }

References entities, ErrorCode, MB_SUCCESS, MB_TAG_ANY, MB_TAG_NOT_FOUND, MB_TYPE_OPAQUE, mbImpl, reduce_tags(), and moab::Interface::tag_get_handle().

◆ reduce_tags() [2/3]

ErrorCode moab::ParallelComm::reduce_tags ( const std::vector< Tag > &  src_tags,
const std::vector< Tag > &  dst_tags,
const MPI_Op  mpi_op,
const Range entities 
)

Perform data reduction operation for all shared and ghosted entities. This function should be called collectively over the communicator for this ParallelComm. If this version is called, all ghosted/shared entities should have a value for this tag (or the tag should have a default value). The operation can be any MPI_Op, with the result stored in the destination tag.

Parameters
src_tags - Vector of tag handles to be reduced
dst_tags - Vector of tag handles in which the answer will be stored
mpi_op - Operation type
entities - Entities on which reduction will be made; if empty, operates on all shared entities

Definition at line 7713 of file ParallelComm.cpp.

7717 {
7718  ErrorCode result;
7719  int success;
7720 
7721  myDebug->tprintf( 1, "Entering reduce_tags\n" );
7722 
7723  // Check that restrictions are met: number of source/dst tags...
7724  if( src_tags.size() != dst_tags.size() )
7725  {
7726  MB_SET_ERR( MB_FAILURE, "Source and destination tag handles must be specified for reduce_tags" );
7727  }
7728 
7729  // ... tag data types
7730  std::vector< Tag >::const_iterator vits, vitd;
7731  int tags_size, tagd_size;
7732  DataType tags_type, tagd_type;
7733  std::vector< unsigned char > vals;
7734  std::vector< int > tags_sizes;
7735  for( vits = src_tags.begin(), vitd = dst_tags.begin(); vits != src_tags.end(); ++vits, ++vitd )
7736  {
7737  // Checks on tag characteristics
7738  result = mbImpl->tag_get_data_type( *vits, tags_type );MB_CHK_SET_ERR( result, "Failed to get src tag data type" );
7739  if( tags_type != MB_TYPE_INTEGER && tags_type != MB_TYPE_DOUBLE && tags_type != MB_TYPE_BIT )
7740  {
7741  MB_SET_ERR( MB_FAILURE, "Src/dst tags must have integer, double, or bit data type" );
7742  }
7743 
7744  result = mbImpl->tag_get_bytes( *vits, tags_size );MB_CHK_SET_ERR( result, "Failed to get src tag bytes" );
7745  vals.resize( tags_size );
7746  result = mbImpl->tag_get_default_value( *vits, &vals[0] );MB_CHK_SET_ERR( result, "Src tag must have default value" );
7747 
7748  tags_sizes.push_back( tags_size );
7749 
7750  // OK, those passed; now check whether dest tags, if specified, agree with src tags
7751  if( *vits == *vitd ) continue;
7752 
7753  result = mbImpl->tag_get_bytes( *vitd, tagd_size );MB_CHK_SET_ERR( result, "Couldn't get dst tag bytes" );
7754  if( tags_size != tagd_size )
7755  {
7756  MB_SET_ERR( MB_FAILURE, "Sizes between src and dst tags don't match" );
7757  }
7758  result = mbImpl->tag_get_data_type( *vitd, tagd_type );MB_CHK_SET_ERR( result, "Couldn't get dst tag data type" );
7759  if( tags_type != tagd_type )
7760  {
7761  MB_SET_ERR( MB_FAILURE, "Src and dst tags must be of same data type" );
7762  }
7763  }
7764 
7765  // Get all procs interfacing to this proc
7766  std::set< unsigned int > exch_procs;
7767  result = get_comm_procs( exch_procs );
7768 
7769  // Post ghost irecv's for all interface procs
7770  // Index requests the same as buffer/sharing procs indices
7771  std::vector< MPI_Request > recv_tag_reqs( 3 * buffProcs.size(), MPI_REQUEST_NULL );
7772 
7773  std::vector< unsigned int >::iterator sit;
7774  int ind;
7775 
7776  reset_all_buffers();
7777  int incoming = 0;
7778 
7779  for( ind = 0, sit = buffProcs.begin(); sit != buffProcs.end(); ++sit, ind++ )
7780  {
7781  incoming++;
7782  PRINT_DEBUG_IRECV( procConfig.proc_rank(), *sit, remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE,
7783  MB_MESG_TAGS_SIZE, incoming );
7784 
7785  success = MPI_Irecv( remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, *sit,
7786  MB_MESG_TAGS_SIZE, procConfig.proc_comm(), &recv_tag_reqs[3 * ind] );
7787  if( success != MPI_SUCCESS )
7788  {
7789  MB_SET_ERR( MB_FAILURE, "Failed to post irecv in ghost exchange" );
7790  }
7791  }
7792 
7793  // Pack and send tags from this proc to others
7794  // Make sendReqs vector to simplify initialization
7795  sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
7796 
7797  // Take all shared entities if incoming list is empty
7798  Range entities;
7799  if( entities_in.empty() )
7800  std::copy( sharedEnts.begin(), sharedEnts.end(), range_inserter( entities ) );
7801  else
7802  entities = entities_in;
7803 
7804  // If the tags are different, copy the source to the dest tag locally
7805  std::vector< Tag >::const_iterator vit = src_tags.begin(), vit2 = dst_tags.begin();
7806  std::vector< int >::const_iterator vsizes = tags_sizes.begin();
7807  for( ; vit != src_tags.end(); ++vit, ++vit2, ++vsizes )
7808  {
7809  if( *vit == *vit2 ) continue;
7810  vals.resize( entities.size() * ( *vsizes ) );
7811  result = mbImpl->tag_get_data( *vit, entities, &vals[0] );MB_CHK_SET_ERR( result, "Didn't get data properly" );
7812  result = mbImpl->tag_set_data( *vit2, entities, &vals[0] );MB_CHK_SET_ERR( result, "Didn't set data properly" );
7813  }
7814 
7815  int dum_ack_buff;
7816 
7817  for( ind = 0, sit = buffProcs.begin(); sit != buffProcs.end(); ++sit, ind++ )
7818  {
7819  Range tag_ents = entities;
7820 
7821  // Get ents shared by proc *sit
7822  result = filter_pstatus( tag_ents, PSTATUS_SHARED, PSTATUS_AND, *sit );MB_CHK_SET_ERR( result, "Failed pstatus AND check" );
7823 
7824  // Pack-send
7825  std::vector< Range > tag_ranges;
7826  for( vit = src_tags.begin(); vit != src_tags.end(); ++vit )
7827  {
7828  const void* ptr;
7829  int sz;
7830  if( mbImpl->tag_get_default_value( *vit, ptr, sz ) != MB_SUCCESS )
7831  {
7832  Range tagged_ents;
7833  mbImpl->get_entities_by_type_and_tag( 0, MBMAXTYPE, &*vit, 0, 1, tagged_ents );
7834  tag_ranges.push_back( intersect( tag_ents, tagged_ents ) );
7835  }
7836  else
7837  tag_ranges.push_back( tag_ents );
7838  }
7839 
7840  // Pack the data
7841  // Reserve space on front for size and for initial buff size
7842  localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
7843 
7844  result = pack_tags( tag_ents, src_tags, dst_tags, tag_ranges, localOwnedBuffs[ind], true, *sit );MB_CHK_SET_ERR( result, "Failed to count buffer in pack_send_tag" );
7845 
7846  // Now send it
7847  result = send_buffer( *sit, localOwnedBuffs[ind], MB_MESG_TAGS_SIZE, sendReqs[3 * ind],
7848  recv_tag_reqs[3 * ind + 2], &dum_ack_buff, incoming );MB_CHK_SET_ERR( result, "Failed to send buffer" );
7849  }
7850 
7851  // Receive/unpack tags
7852  while( incoming )
7853  {
7854  MPI_Status status;
7855  int index_in_recv_requests;
7856  PRINT_DEBUG_WAITANY( recv_tag_reqs, MB_MESG_TAGS_SIZE, procConfig.proc_rank() );
7857  success = MPI_Waitany( 3 * buffProcs.size(), &recv_tag_reqs[0], &index_in_recv_requests, &status );
7858  if( MPI_SUCCESS != success )
7859  {
7860  MB_SET_ERR( MB_FAILURE, "Failed in waitany in ghost exchange" );
7861  }
7862  ind = index_in_recv_requests / 3;
7863 
7864  PRINT_DEBUG_RECD( status );
7865 
7866  // OK, received something; decrement incoming counter
7867  incoming--;
7868 
7869  bool done = false;
7870  std::vector< EntityHandle > dum_vec;
7871  result = recv_buffer( MB_MESG_TAGS_SIZE, status, remoteOwnedBuffs[ind],
7872  recv_tag_reqs[3 * ind + 1], // This is for receiving the second message
7873  recv_tag_reqs[3 * ind + 2], // This would be for ack, but it is not
7874  // used; consider removing it
7875  incoming, localOwnedBuffs[ind],
7876  sendReqs[3 * ind + 1], // Send request for sending the second message
7877  sendReqs[3 * ind + 2], // This is for sending the ack
7878  done );MB_CHK_SET_ERR( result, "Failed to resize recv buffer" );
7879  if( done )
7880  {
7881  remoteOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
7882  result = unpack_tags( remoteOwnedBuffs[ind]->buff_ptr, dum_vec, true, buffProcs[ind], &mpi_op );MB_CHK_SET_ERR( result, "Failed to recv-unpack-tag message" );
7883  }
7884  }
7885 
7886  // OK, now wait
7887  if( myDebug->get_verbosity() == 5 )
7888  {
7889  success = MPI_Barrier( procConfig.proc_comm() );
7890  }
7891  else
7892  {
7893  MPI_Status status[3 * MAX_SHARING_PROCS];
7894  success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], status );
7895  }
7896  if( MPI_SUCCESS != success )
7897  {
7898  MB_SET_ERR( MB_FAILURE, "Failure in waitall in tag exchange" );
7899  }
7900 
7901  myDebug->tprintf( 1, "Exiting reduce_tags" );
7902 
7903  return MB_SUCCESS;
7904 }

References moab::Range::begin(), buffProcs, moab::Range::empty(), entities, ErrorCode, filter_pstatus(), get_comm_procs(), moab::Interface::get_entities_by_type_and_tag(), moab::DebugOutput::get_verbosity(), INITIAL_BUFF_SIZE, moab::intersect(), localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, moab::MB_MESG_TAGS_SIZE, MB_SET_ERR, MB_SUCCESS, MB_TYPE_BIT, MB_TYPE_DOUBLE, MB_TYPE_INTEGER, mbImpl, MBMAXTYPE, myDebug, pack_tags(), PRINT_DEBUG_IRECV, PRINT_DEBUG_RECD, PRINT_DEBUG_WAITANY, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, PSTATUS_AND, PSTATUS_SHARED, recv_buffer(), remoteOwnedBuffs, reset_all_buffers(), send_buffer(), sendReqs, sharedEnts, moab::Interface::tag_get_bytes(), moab::Interface::tag_get_data(), moab::Interface::tag_get_data_type(), moab::Interface::tag_get_default_value(), moab::Interface::tag_set_data(), moab::DebugOutput::tprintf(), and unpack_tags().

Referenced by iMOAB_ReduceTagsMax(), main(), and reduce_tags().
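
A minimal collective sketch: take the maximum of a double tag across sharing processors, assuming every rank makes the same call, the (illustrative) tag name exists, and the tag has a default value:

    moab::Range empty_range;  // empty => operate on all shared entities
    moab::ErrorCode rval = pcomm->reduce_tags( "pressure", MPI_MAX, empty_range );
    if( moab::MB_SUCCESS != rval ) { /* handle error */ }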

◆ reduce_tags() [3/3]

ErrorCode moab::ParallelComm::reduce_tags ( Tag  tag_handle,
const MPI_Op  mpi_op,
const Range entities 
)
inline

Perform data reduction operation for all shared and ghosted entities. Same as the std::vector variant, except for one tag specified by handle.

Parameters
tag_handle - Handle of tag to be reduced
mpi_op - Operation type
entities - Entities on which reduction will be made; if empty, operates on all shared entities

Definition at line 1624 of file ParallelComm.hpp.

1625 {
1626  // get the tag handle
1627  std::vector< Tag > tags;
1628  tags.push_back( tagh );
1629 
1630  return reduce_tags( tags, tags, mpi_op, entities );
1631 }

References entities, and reduce_tags().

◆ reduce_void()

ErrorCode moab::ParallelComm::reduce_void ( int  tag_data_type,
const MPI_Op  mpi_op,
int  num_ents,
void *  old_vals,
void *  new_vals 
)
private

Definition at line 3886 of file ParallelComm.cpp.

3891 {
3892  ErrorCode result;
3893  switch( tag_data_type )
3894  {
3895  case MB_TYPE_INTEGER:
3896  result = reduce< int >( mpi_op, num_ents, old_vals, new_vals );
3897  break;
3898  case MB_TYPE_DOUBLE:
3899  result = reduce< double >( mpi_op, num_ents, old_vals, new_vals );
3900  break;
3901  case MB_TYPE_BIT:
3902  result = reduce< unsigned char >( mpi_op, num_ents, old_vals, new_vals );
3903  break;
3904  default:
3905  result = MB_SUCCESS;
3906  break;
3907  }
3908 
3909  return result;
3910 }

References ErrorCode, MB_SUCCESS, MB_TYPE_BIT, MB_TYPE_DOUBLE, and MB_TYPE_INTEGER.

Referenced by unpack_tags().

◆ remove_entities_part()

ErrorCode moab::ParallelComm::remove_entities_part ( Range entities,
const int  proc 
)
private

remove entities from the input processor part

Definition at line 7310 of file ParallelComm.cpp.

7311 {
7312  EntityHandle part_set;
7313  ErrorCode result = get_part_handle( proc, part_set );MB_CHK_SET_ERR( result, "Failed to get part handle" );
7314 
7315  if( part_set > 0 )
7316  {
7317  result = mbImpl->remove_entities( part_set, entities );MB_CHK_SET_ERR( result, "Failed to remove entities from part set" );
7318  }
7319 
7320  return MB_SUCCESS;
7321 }

References entities, ErrorCode, get_part_handle(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, and moab::Interface::remove_entities().

Referenced by exchange_owned_mesh().

◆ remove_pcomm()

void moab::ParallelComm::remove_pcomm ( ParallelComm pc)
private

remove a pc from the iface instance tag PARALLEL_COMM

Definition at line 400 of file ParallelComm.cpp.

401 {
402  // Remove this pcomm from instance tag
403  std::vector< ParallelComm* > pc_array( MAX_SHARING_PROCS );
404  Tag pc_tag = pcomm_tag( mbImpl, true );
405 
406  const EntityHandle root = 0;
407  ErrorCode result = mbImpl->tag_get_data( pc_tag, &root, 1, (void*)&pc_array[0] );
408  std::vector< ParallelComm* >::iterator pc_it = std::find( pc_array.begin(), pc_array.end(), pc );
409  assert( MB_SUCCESS == result && pc_it != pc_array.end() );
410  // Empty if test to get around compiler warning about unused var
411  if( MB_SUCCESS == result )
412  {
413  }
414 
415  *pc_it = NULL;
416  mbImpl->tag_set_data( pc_tag, &root, 1, (void*)&pc_array[0] );
417 }

References ErrorCode, MAX_SHARING_PROCS, MB_SUCCESS, mbImpl, pcomm_tag(), moab::Interface::tag_get_data(), and moab::Interface::tag_set_data().

Referenced by ~ParallelComm().

◆ reset_all_buffers()

void moab::ParallelComm::reset_all_buffers ( )
inline

reset message buffers to their initial state

Definition at line 1548 of file ParallelComm.hpp.

1549 {
1550  std::vector< Buffer* >::iterator vit;
1551  for( vit = localOwnedBuffs.begin(); vit != localOwnedBuffs.end(); ++vit )
1552  ( *vit )->reset_buffer();
1553  for( vit = remoteOwnedBuffs.begin(); vit != remoteOwnedBuffs.end(); ++vit )
1554  ( *vit )->reset_buffer();
1555 }

References localOwnedBuffs, and remoteOwnedBuffs.

Referenced by exchange_ghost_cells(), exchange_owned_mesh(), exchange_tags(), post_irecv(), reduce_tags(), send_recv_entities(), and settle_intersection_points().

◆ resolve_shared_ents() [1/3]

ErrorCode moab::ParallelComm::resolve_shared_ents ( EntityHandle  this_set,
int  resolve_dim = 3,
int  shared_dim = -1,
const Tag id_tag = 0 
)

Resolve shared entities between processors.

Same as resolve_shared_ents(Range&), except it works on all entities in the instance of dimension dim.

If shared_dim is input as -1 or not input, a value one less than the maximum dimension of entities is used.

Parameters
dim - Dimension of entities in the partition
shared_dim - Maximum dimension of shared entities to look for

Definition at line 3912 of file ParallelComm.cpp.

3913 {
3914  ErrorCode result;
3915  Range proc_ents;
3916 
3917  // Check for structured mesh, and do it differently if it is
3918  ScdInterface* scdi;
3919  result = mbImpl->query_interface( scdi );
3920  if( scdi )
3921  {
3922  result = scdi->tag_shared_vertices( this, this_set );
3923  if( MB_SUCCESS == result )
3924  {
3925  myDebug->tprintf( 1, "Total number of shared entities = %lu.\n", (unsigned long)sharedEnts.size() );
3926  return result;
3927  }
3928  }
3929 
3930  if( 0 == this_set )
3931  {
3932  // Get the entities in the partition sets
3933  for( Range::iterator rit = partitionSets.begin(); rit != partitionSets.end(); ++rit )
3934  {
3935  Range tmp_ents;
3936  result = mbImpl->get_entities_by_handle( *rit, tmp_ents, true );
3937  if( MB_SUCCESS != result ) return result;
3938  proc_ents.merge( tmp_ents );
3939  }
3940  }
3941  else
3942  {
3943  result = mbImpl->get_entities_by_handle( this_set, proc_ents, true );
3944  if( MB_SUCCESS != result ) return result;
3945  }
3946 
3947  // Resolve dim is maximal dim of entities in proc_ents
3948  if( -1 == resolve_dim )
3949  {
3950  if( !proc_ents.empty() ) resolve_dim = mbImpl->dimension_from_handle( *proc_ents.rbegin() );
3951  }
3952 
3953  // proc_ents should all be of same dimension
3954  if( resolve_dim > shared_dim &&
3955  mbImpl->dimension_from_handle( *proc_ents.rbegin() ) != mbImpl->dimension_from_handle( *proc_ents.begin() ) )
3956  {
3957  Range::iterator lower = proc_ents.lower_bound( CN::TypeDimensionMap[0].first ),
3958  upper = proc_ents.upper_bound( CN::TypeDimensionMap[resolve_dim - 1].second );
3959  proc_ents.erase( lower, upper );
3960  }
3961 
3962  // Must call even if we don't have any entities, to make sure
3963  // collective comm'n works
3964  return resolve_shared_ents( this_set, proc_ents, resolve_dim, shared_dim, NULL, id_tag );
3965 }

References moab::Range::begin(), moab::Interface::dimension_from_handle(), moab::Range::empty(), moab::Range::end(), moab::Range::erase(), ErrorCode, moab::GeomUtil::first(), moab::Interface::get_entities_by_handle(), moab::Range::lower_bound(), MB_SUCCESS, mbImpl, moab::Range::merge(), myDebug, partitionSets, moab::Interface::query_interface(), moab::Range::rbegin(), resolve_shared_ents(), sharedEnts, moab::ScdInterface::tag_shared_vertices(), moab::DebugOutput::tprintf(), moab::CN::TypeDimensionMap, and moab::Range::upper_bound().
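
A common calling sketch after a parallel read: resolve a 3-dimensional partition contained in an (assumed) file_set, looking for shared entities down to dimension 2:

    moab::ErrorCode rval = pcomm->resolve_shared_ents( file_set, 3, 2 );
    if( moab::MB_SUCCESS != rval ) { /* handle error */ }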

◆ resolve_shared_ents() [2/3]

ErrorCode moab::ParallelComm::resolve_shared_ents ( EntityHandle  this_set,
Range &  proc_ents,
int  resolve_dim = -1,
int  shared_dim = -1,
Range *  skin_ents = NULL,
const Tag *  id_tag = 0 
)

Resolve shared entities between processors.

Resolve shared entities between processors for entities in proc_ents, by comparing global id tag values on vertices on skin of elements in proc_ents. Shared entities are assigned a tag that's either PARALLEL_SHARED_PROC_TAG_NAME, which is 1 integer in length, or PARALLEL_SHARED_PROCS_TAG_NAME, whose length depends on the maximum number of sharing processors. Values in these tags denote the ranks of sharing processors, and the list ends with the value -1.

If shared_dim is input as -1 or not input, a value one less than the maximum dimension of entities in proc_ents is used.

Parameters
proc_ents Entities for which to resolve shared entities
shared_dim Maximum dimension of shared entities to look for

Definition at line 3967 of file ParallelComm.cpp.

3973 {
3974 #ifdef MOAB_HAVE_MPE
3975  if( myDebug->get_verbosity() == 2 )
3976  {
3977  define_mpe();
3978  MPE_Log_event( RESOLVE_START, procConfig.proc_rank(), "Entering resolve_shared_ents." );
3979  }
3980 #endif
3981 
3982  ErrorCode result;
3983  myDebug->tprintf( 1, "Resolving shared entities.\n" );
3984 
3985  if( resolve_dim < shared_dim )
3986  {
3987  MB_SET_ERR( MB_FAILURE, "MOAB does not support vertex-based partitions, only element-based ones" );
3988  }
3989 
3990  if( -1 == shared_dim )
3991  {
3992  if( !proc_ents.empty() )
3993  shared_dim = mbImpl->dimension_from_handle( *proc_ents.begin() ) - 1;
3994  else if( resolve_dim == 3 )
3995  shared_dim = 2;
3996  }
3997  int max_global_resolve_dim = -1;
3998  int err = MPI_Allreduce( &resolve_dim, &max_global_resolve_dim, 1, MPI_INT, MPI_MAX, proc_config().proc_comm() );
3999  if( MPI_SUCCESS != err )
4000  {
4001  MB_SET_ERR( MB_FAILURE, "Unable to guess global resolve_dim" );
4002  }
4003  if( shared_dim < 0 || resolve_dim < 0 )
4004  {
4005  // MB_SET_ERR(MB_FAILURE, "Unable to guess shared_dim or resolve_dim");
4006  resolve_dim = max_global_resolve_dim;
4007  shared_dim = resolve_dim - 1;
4008  }
4009 
4010  if( resolve_dim < 0 || shared_dim < 0 ) return MB_SUCCESS;
4011  // no task has any mesh, get out
4012 
4013  // Get the skin entities by dimension
4014  Range tmp_skin_ents[4];
4015 
4016  // Get the entities to be skinned
4017  // Find the skin
4018  int skin_dim = resolve_dim - 1;
4019  if( !skin_ents )
4020  {
4021  skin_ents = tmp_skin_ents;
4022  skin_ents[resolve_dim] = proc_ents;
4023  Skinner skinner( mbImpl );
4024  result =
4025  skinner.find_skin( this_set, skin_ents[skin_dim + 1], false, skin_ents[skin_dim], NULL, true, true, true );MB_CHK_SET_ERR( result, "Failed to find skin" );
4026  myDebug->tprintf( 1, "Found skin: skin_dim: %d resolve_dim: %d , now resolving.\n", skin_dim, resolve_dim );
4027  myDebug->tprintf( 3, "skin_ents[0].size(): %d skin_ents[1].size(): %d \n", (int)skin_ents[0].size(),
4028  (int)skin_ents[1].size() );
4029  // Get entities adjacent to skin ents from shared_dim down to zero
4030  for( int this_dim = skin_dim - 1; this_dim >= 0; this_dim-- )
4031  {
4032  result =
4033  mbImpl->get_adjacencies( skin_ents[skin_dim], this_dim, true, skin_ents[this_dim], Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get skin adjacencies" );
4034 
4035  if( this_set && skin_dim == 2 && this_dim == 1 )
4036  {
4037  result = mbImpl->add_entities( this_set, skin_ents[this_dim] );MB_CHK_ERR( result );
4038  }
4039  }
4040  }
4041  else if( skin_ents[resolve_dim].empty() )
4042  skin_ents[resolve_dim] = proc_ents;
4043 
4044  // Global id tag
4045  Tag gid_tag;
4046  if( id_tag )
4047  gid_tag = *id_tag;
4048  else
4049  {
4050  bool tag_created = false;
4051  int def_val = -1;
4052  result = mbImpl->tag_get_handle( GLOBAL_ID_TAG_NAME, 1, MB_TYPE_INTEGER, gid_tag, MB_TAG_DENSE | MB_TAG_CREAT,
4053  &def_val, &tag_created );
4054  if( MB_ALREADY_ALLOCATED != result && MB_SUCCESS != result )
4055  {
4056  MB_SET_ERR( result, "Failed to create/get gid tag handle" );
4057  }
4058  else if( tag_created )
4059  {
4060  // Just created it, so we need global ids
4061  result = assign_global_ids( this_set, skin_dim + 1, true, true, true );MB_CHK_SET_ERR( result, "Failed to assign global ids" );
4062  }
4063  }
4064 
4065  DataType tag_type;
4066  result = mbImpl->tag_get_data_type( gid_tag, tag_type );MB_CHK_SET_ERR( result, "Failed to get tag data type" );
4067  int bytes_per_tag;
4068  result = mbImpl->tag_get_bytes( gid_tag, bytes_per_tag );MB_CHK_SET_ERR( result, "Failed to get number of bytes per tag" );
4069  // On 64 bits, long and int are different
4070  // On 32 bits, they are not; if size of long is 8, it is a 64 bit machine (really?)
4071 
4072  // Get gids for skin ents in a vector, to pass to gs
4073  std::vector< long > lgid_data( skin_ents[0].size() );
4074  // Size is either long or int
4075  // On 64 bit is 8 or 4
4076  if( sizeof( long ) == bytes_per_tag && ( ( MB_TYPE_HANDLE == tag_type ) || ( MB_TYPE_OPAQUE == tag_type ) ) )
4077  { // It is a special id tag
4078  result = mbImpl->tag_get_data( gid_tag, skin_ents[0], &lgid_data[0] );MB_CHK_SET_ERR( result, "Couldn't get gid tag for skin vertices" );
4079  }
4080  else if( 4 == bytes_per_tag )
4081  { // Must be GLOBAL_ID tag or 32 bits ...
4082  std::vector< int > gid_data( lgid_data.size() );
4083  result = mbImpl->tag_get_data( gid_tag, skin_ents[0], &gid_data[0] );MB_CHK_SET_ERR( result, "Failed to get gid tag for skin vertices" );
4084  std::copy( gid_data.begin(), gid_data.end(), lgid_data.begin() );
4085  }
4086  else
4087  {
4088  // Not supported flag
4089  MB_SET_ERR( MB_FAILURE, "Unsupported id tag" );
4090  }
4091 
4092  // Put handles in vector for passing to gs setup
4093  std::vector< Ulong > handle_vec; // Assumes that we can do conversion from Ulong to EntityHandle
4094  std::copy( skin_ents[0].begin(), skin_ents[0].end(), std::back_inserter( handle_vec ) );
4095 
4096 #ifdef MOAB_HAVE_MPE
4097  if( myDebug->get_verbosity() == 2 )
4098  {
4099  MPE_Log_event( SHAREDV_START, procConfig.proc_rank(), "Creating crystal router." );
4100  }
4101 #endif
4102 
4103  // Get a crystal router
4104  gs_data::crystal_data* cd = procConfig.crystal_router();
4105 
4106  /*
4107  // Get total number of entities; will overshoot highest global id, but
4108  // that's OK
4109  int num_total[2] = {0, 0}, num_local[2] = {0, 0};
4110  result = mbImpl->get_number_entities_by_dimension(this_set, 0, num_local);
4111  if (MB_SUCCESS != result)return result;
4112  int failure = MPI_Allreduce(num_local, num_total, 1,
4113  MPI_INT, MPI_SUM, procConfig.proc_comm());
4114  if (failure) {
4115  MB_SET_ERR(MB_FAILURE, "Allreduce for total number of shared ents failed");
4116  }
4117  */
4118  // Call gather-scatter to get shared ids & procs
4119  gs_data* gsd = new gs_data();
4120  // assert(sizeof(ulong_) == sizeof(EntityHandle));
4121  result = gsd->initialize( skin_ents[0].size(), &lgid_data[0], &handle_vec[0], 2, 1, 1, cd );MB_CHK_SET_ERR( result, "Failed to create gs data" );
4122 
4123  // Get shared proc tags
4124  Tag shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag;
4125  result = get_shared_proc_tags( shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag );MB_CHK_SET_ERR( result, "Failed to get shared proc tags" );
4126 
4127  // Load shared verts into a tuple, then sort by index
4128  TupleList shared_verts;
4129  shared_verts.initialize( 2, 0, 1, 0, skin_ents[0].size() * ( MAX_SHARING_PROCS + 1 ) );
4130  shared_verts.enableWriteAccess();
4131 
4132  unsigned int i = 0, j = 0;
4133  for( unsigned int p = 0; p < gsd->nlinfo->_np; p++ )
4134  for( unsigned int np = 0; np < gsd->nlinfo->_nshared[p]; np++ )
4135  {
4136  shared_verts.vi_wr[i++] = gsd->nlinfo->_sh_ind[j];
4137  shared_verts.vi_wr[i++] = gsd->nlinfo->_target[p];
4138  shared_verts.vul_wr[j] = gsd->nlinfo->_ulabels[j];
4139  j++;
4140  shared_verts.inc_n();
4141  }
4142 
4143  myDebug->tprintf( 3, " shared verts size %d \n", (int)shared_verts.get_n() );
4144 
4145  int max_size = skin_ents[0].size() * ( MAX_SHARING_PROCS + 1 );
4146  moab::TupleList::buffer sort_buffer;
4147  sort_buffer.buffer_init( max_size );
4148  shared_verts.sort( 0, &sort_buffer );
4149  sort_buffer.reset();
4150 
4151  // Set sharing procs and handles tags on skin ents
4152  int maxp = -1;
4153  std::vector< int > sharing_procs( MAX_SHARING_PROCS );
4154  std::fill( sharing_procs.begin(), sharing_procs.end(), maxp );
4155  j = 0;
4156  i = 0;
4157 
4158  // Get ents shared by 1 or n procs
4159  std::map< std::vector< int >, std::vector< EntityHandle > > proc_nvecs;
4160  Range proc_verts;
4161  result = mbImpl->get_adjacencies( proc_ents, 0, false, proc_verts, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get proc_verts" );
4162 
4163  myDebug->print( 3, " resolve shared ents: proc verts ", proc_verts );
4164  result = tag_shared_verts( shared_verts, skin_ents, proc_nvecs, proc_verts );MB_CHK_SET_ERR( result, "Failed to tag shared verts" );
4165 
4166 #ifdef MOAB_HAVE_MPE
4167  if( myDebug->get_verbosity() == 2 )
4168  {
4169  MPE_Log_event( SHAREDV_END, procConfig.proc_rank(), "Finished tag_shared_verts." );
4170  }
4171 #endif
4172 
4173  // Get entities shared by 1 or n procs
4174  result = get_proc_nvecs( resolve_dim, shared_dim, skin_ents, proc_nvecs );MB_CHK_SET_ERR( result, "Failed to tag shared entities" );
4175 
4176  shared_verts.reset();
4177 
4178  if( myDebug->get_verbosity() > 0 )
4179  {
4180  for( std::map< std::vector< int >, std::vector< EntityHandle > >::const_iterator mit = proc_nvecs.begin();
4181  mit != proc_nvecs.end(); ++mit )
4182  {
4183  myDebug->tprintf( 1, "Iface: " );
4184  for( std::vector< int >::const_iterator vit = ( mit->first ).begin(); vit != ( mit->first ).end(); ++vit )
4185  myDebug->printf( 1, " %d", *vit );
4186  myDebug->print( 1, "\n" );
4187  }
4188  }
4189 
4190  // Create the sets for each interface; store them as tags on
4191  // the interface instance
4192  Range iface_sets;
4193  result = create_interface_sets( proc_nvecs );MB_CHK_SET_ERR( result, "Failed to create interface sets" );
4194 
4195  // Establish comm procs and buffers for them
4196  std::set< unsigned int > procs;
4197  result = get_interface_procs( procs, true );MB_CHK_SET_ERR( result, "Failed to get interface procs" );
4198 
4199 #ifndef NDEBUG
4200  result = check_all_shared_handles( true );MB_CHK_SET_ERR( result, "Shared handle check failed after interface vertex exchange" );
4201 #endif
4202 
4203  // Resolve shared entity remote handles; implemented in ghost cell exchange
4204  // code because it's so similar
4205  result = exchange_ghost_cells( -1, -1, 0, 0, true, true );MB_CHK_SET_ERR( result, "Failed to resolve shared entity remote handles" );
4206 
4207  // Now build parent/child links for interface sets
4208  result = create_iface_pc_links();MB_CHK_SET_ERR( result, "Failed to create interface parent/child links" );
4209 
4210  gsd->reset();
4211  delete gsd;
4212 
4213 #ifdef MOAB_HAVE_MPE
4214  if( myDebug->get_verbosity() == 2 )
4215  {
4216  MPE_Log_event( RESOLVE_END, procConfig.proc_rank(), "Exiting resolve_shared_ents." );
4217  }
4218 #endif
4219 
4220  // std::ostringstream ent_str;
4221  // ent_str << "mesh." << procConfig.proc_rank() << ".h5m";
4222  // mbImpl->write_mesh(ent_str.str().c_str());
4223 
4224  // Done
4225  return result;
4226 }

References moab::Interface::add_entities(), assign_global_ids(), moab::Range::begin(), check_all_shared_handles(), create_iface_pc_links(), create_interface_sets(), moab::ProcConfig::crystal_router(), define_mpe(), moab::Interface::dimension_from_handle(), moab::Range::empty(), moab::TupleList::enableWriteAccess(), ErrorCode, exchange_ghost_cells(), moab::Skinner::find_skin(), moab::Interface::get_adjacencies(), get_interface_procs(), moab::TupleList::get_n(), get_proc_nvecs(), get_shared_proc_tags(), moab::DebugOutput::get_verbosity(), GLOBAL_ID_TAG_NAME, moab::TupleList::inc_n(), moab::TupleList::initialize(), moab::gs_data::initialize(), MAX_SHARING_PROCS, MB_ALREADY_ALLOCATED, MB_CHK_ERR, MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, MB_TAG_CREAT, MB_TAG_DENSE, MB_TYPE_HANDLE, MB_TYPE_INTEGER, MB_TYPE_OPAQUE, mbImpl, MPE_Log_event, myDebug, moab::DebugOutput::print(), moab::DebugOutput::printf(), proc_config(), moab::ProcConfig::proc_rank(), procConfig, moab::TupleList::buffer::reset(), moab::TupleList::reset(), moab::gs_data::reset(), moab::Range::size(), size(), moab::TupleList::sort(), moab::Interface::tag_get_bytes(), moab::Interface::tag_get_data(), moab::Interface::tag_get_data_type(), moab::Interface::tag_get_handle(), tag_shared_verts(), moab::DebugOutput::tprintf(), moab::Interface::UNION, moab::TupleList::vi_wr, and moab::TupleList::vul_wr.

Referenced by create_coarse_mesh(), moab::ReadParallel::load_file(), and resolve_shared_ents().
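
In the usual parallel workflow this call (or the set-based overload above) is followed by ghost exchange; a hedged sketch, assuming pcomm and fileset as in the previous example and proc_ents holding the local 3D elements:

 // Resolve shared entities, then get one layer of face-bridged ghost cells.
 ErrorCode rval = pcomm->resolve_shared_ents( fileset, proc_ents, 3, 2 );
 if( MB_SUCCESS == rval )
     rval = pcomm->exchange_ghost_cells( 3 /*ghost_dim*/, 2 /*bridge_dim*/, 1 /*num_layers*/,
                                         0 /*addl_ents*/, true /*store_remote_handles*/ );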

◆ resolve_shared_ents() [3/3]

ErrorCode moab::ParallelComm::resolve_shared_ents ( ParallelComm **  pc,
const unsigned int  np,
EntityHandle  this_set,
const int  to_dim 
)
static

Definition at line 4260 of file ParallelComm.cpp.

4264 {
4265  std::vector< Range > verts( np );
4266  int tot_verts = 0;
4267  unsigned int p, i, j, v;
4268  ErrorCode rval;
4269  for( p = 0; p < np; p++ )
4270  {
4271  Skinner skinner( pc[p]->get_moab() );
4272  Range part_ents, skin_ents;
4273  rval = pc[p]->get_moab()->get_entities_by_dimension( this_set, part_dim, part_ents );
4274  if( MB_SUCCESS != rval ) return rval;
4275  rval = skinner.find_skin( this_set, part_ents, false, skin_ents, 0, true, true, true );
4276  if( MB_SUCCESS != rval ) return rval;
4277  rval = pc[p]->get_moab()->get_adjacencies( skin_ents, 0, true, verts[p], Interface::UNION );
4278  if( MB_SUCCESS != rval ) return rval;
4279  tot_verts += verts[p].size();
4280  }
4281 
4282  TupleList shared_ents;
4283  shared_ents.initialize( 2, 0, 1, 0, tot_verts );
4284  shared_ents.enableWriteAccess();
4285 
4286  i = 0;
4287  j = 0;
4288  std::vector< int > gids;
4289  Range::iterator rit;
4290  Tag gid_tag;
4291  for( p = 0; p < np; p++ )
4292  {
4293  gid_tag = pc[p]->get_moab()->globalId_tag();
4294 
4295  gids.resize( verts[p].size() );
4296  rval = pc[p]->get_moab()->tag_get_data( gid_tag, verts[p], &gids[0] );
4297  if( MB_SUCCESS != rval ) return rval;
4298 
4299  for( v = 0, rit = verts[p].begin(); v < gids.size(); v++, ++rit )
4300  {
4301  shared_ents.vi_wr[i++] = gids[v];
4302  shared_ents.vi_wr[i++] = p;
4303  shared_ents.vul_wr[j] = *rit;
4304  j++;
4305  shared_ents.inc_n();
4306  }
4307  }
4308 
4309  moab::TupleList::buffer sort_buffer;
4310  sort_buffer.buffer_init( tot_verts );
4311  shared_ents.sort( 0, &sort_buffer );
4312  sort_buffer.reset();
4313 
4314  j = 0;
4315  i = 0;
4316  std::vector< EntityHandle > handles;
4317  std::vector< int > procs;
4318 
4319  while( i < shared_ents.get_n() )
4320  {
4321  handles.clear();
4322  procs.clear();
4323 
4324  // Count & accumulate sharing procs
4325  int this_gid = shared_ents.vi_rd[j];
4326  while( i < shared_ents.get_n() && shared_ents.vi_rd[j] == this_gid )
4327  {
4328  j++;
4329  procs.push_back( shared_ents.vi_rd[j++] );
4330  handles.push_back( shared_ents.vul_rd[i++] );
4331  }
4332  if( 1 == procs.size() ) continue;
4333 
4334  for( v = 0; v < procs.size(); v++ )
4335  {
4336  rval = pc[procs[v]]->update_remote_data( handles[v], &procs[0], &handles[0], procs.size(),
4337  ( procs[0] == (int)pc[procs[v]]->rank()
4338  ? PSTATUS_INTERFACE
4339  : ( PSTATUS_INTERFACE | PSTATUS_NOT_OWNED ) ) );
4340  if( MB_SUCCESS != rval ) return rval;
4341  }
4342  }
4343 
4344  std::set< unsigned int > psets;
4345  for( p = 0; p < np; p++ )
4346  {
4347  rval = pc[p]->create_interface_sets( this_set, part_dim, part_dim - 1 );
4348  if( MB_SUCCESS != rval ) return rval;
4349  // Establish comm procs and buffers for them
4350  psets.clear();
4351  rval = pc[p]->get_interface_procs( psets, true );
4352  if( MB_SUCCESS != rval ) return rval;
4353  }
4354 
4355  shared_ents.reset();
4356 
4357  return MB_SUCCESS;
4358 }

References create_interface_sets(), moab::TupleList::enableWriteAccess(), ErrorCode, moab::Skinner::find_skin(), moab::Interface::get_adjacencies(), moab::Interface::get_entities_by_dimension(), get_interface_procs(), get_moab(), moab::TupleList::get_n(), moab::Interface::globalId_tag(), moab::TupleList::inc_n(), moab::TupleList::initialize(), MB_SUCCESS, PSTATUS_INTERFACE, PSTATUS_NOT_OWNED, rank(), moab::TupleList::buffer::reset(), moab::TupleList::reset(), size(), moab::TupleList::sort(), moab::Interface::tag_get_data(), moab::Interface::UNION, update_remote_data(), moab::TupleList::vi_rd, moab::TupleList::vi_wr, moab::TupleList::vul_rd, and moab::TupleList::vul_wr.
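
This static overload serves serial testing, where several ParallelComm instances stand in for MPI ranks; a hedged sketch (construction of the instances is abbreviated, and pc0/pc1 are illustrative names):

 // Two "ranks", each wrapping its own previously constructed instance;
 // this_set = 0 means the whole database, 3 is the partitioned element dimension.
 ParallelComm* pcs[2] = { pc0, pc1 };
 ErrorCode rval = ParallelComm::resolve_shared_ents( pcs, 2, 0, 3 );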

◆ resolve_shared_sets() [1/2]

ErrorCode moab::ParallelComm::resolve_shared_sets ( EntityHandle  this_set,
const Tag *  id_tag = 0 
)

Resolve shared sets.

Generates a list of candidate sets from those (directly) contained in the passed set and passes them to the other version of resolve_shared_sets.

Parameters
this_set Set directly containing candidate sets (e.g. file set)
id_tag Tag containing global IDs for entity sets.

Definition at line 4500 of file ParallelComm.cpp.

4501 {
4502  // Find all sets with any of the following tags:
4503  const char* const shared_set_tag_names[] = { GEOM_DIMENSION_TAG_NAME, MATERIAL_SET_TAG_NAME, DIRICHLET_SET_TAG_NAME,
4504  NEUMANN_SET_TAG_NAME, PARALLEL_PARTITION_TAG_NAME };
4505  int num_tags = sizeof( shared_set_tag_names ) / sizeof( shared_set_tag_names[0] );
4506  Range candidate_sets;
4507  ErrorCode result = MB_FAILURE;
4508 
4509  // If we're not given an ID tag to use to globally identify sets,
4510  // then fall back to using known tag values
4511  if( !idtag )
4512  {
4513  Tag gid, tag;
4514  gid = mbImpl->globalId_tag();
4515  if( NULL != gid ) result = mbImpl->tag_get_handle( GEOM_DIMENSION_TAG_NAME, 1, MB_TYPE_INTEGER, tag );
4516  if( MB_SUCCESS == result )
4517  {
4518  for( int d = 0; d < 4; d++ )
4519  {
4520  candidate_sets.clear();
4521  const void* vals[] = { &d };
4522  result = mbImpl->get_entities_by_type_and_tag( file, MBENTITYSET, &tag, vals, 1, candidate_sets );
4523  if( MB_SUCCESS == result ) resolve_shared_sets( candidate_sets, gid );
4524  }
4525  }
4526 
4527  for( int i = 1; i < num_tags; i++ )
4528  {
4529  result = mbImpl->tag_get_handle( shared_set_tag_names[i], 1, MB_TYPE_INTEGER, tag );
4530  if( MB_SUCCESS == result )
4531  {
4532  candidate_sets.clear();
4533  result = mbImpl->get_entities_by_type_and_tag( file, MBENTITYSET, &tag, 0, 1, candidate_sets );
4534  if( MB_SUCCESS == result ) resolve_shared_sets( candidate_sets, tag );
4535  }
4536  }
4537 
4538  return MB_SUCCESS;
4539  }
4540 
4541  for( int i = 0; i < num_tags; i++ )
4542  {
4543  Tag tag;
4544  result = mbImpl->tag_get_handle( shared_set_tag_names[i], 1, MB_TYPE_INTEGER, tag, MB_TAG_ANY );
4545  if( MB_SUCCESS != result ) continue;
4546 
4547  mbImpl->get_entities_by_type_and_tag( file, MBENTITYSET, &tag, 0, 1, candidate_sets, Interface::UNION );
4548  }
4549 
4550  // Find any additional sets that contain shared entities
4551  Range::iterator hint = candidate_sets.begin();
4552  Range all_sets;
4553  mbImpl->get_entities_by_type( file, MBENTITYSET, all_sets );
4554  all_sets = subtract( all_sets, candidate_sets );
4555  Range::iterator it = all_sets.begin();
4556  while( it != all_sets.end() )
4557  {
4558  Range contents;
4559  mbImpl->get_entities_by_handle( *it, contents );
4560  contents.erase( contents.lower_bound( MBENTITYSET ), contents.end() );
4561  filter_pstatus( contents, PSTATUS_SHARED, PSTATUS_OR );
4562  if( contents.empty() )
4563  {
4564  ++it;
4565  }
4566  else
4567  {
4568  hint = candidate_sets.insert( hint, *it );
4569  it = all_sets.erase( it );
4570  }
4571  }
4572 
4573  // Find any additional sets that contain or are parents of potential shared sets
4574  Range prev_list = candidate_sets;
4575  while( !prev_list.empty() )
4576  {
4577  it = all_sets.begin();
4578  Range new_list;
4579  hint = new_list.begin();
4580  while( it != all_sets.end() )
4581  {
4582  Range contents;
4583  mbImpl->get_entities_by_type( *it, MBENTITYSET, contents );
4584  if( !intersect( prev_list, contents ).empty() )
4585  {
4586  hint = new_list.insert( hint, *it );
4587  it = all_sets.erase( it );
4588  }
4589  else
4590  {
4591  new_list.clear();
4592  mbImpl->get_child_meshsets( *it, contents );
4593  if( !intersect( prev_list, contents ).empty() )
4594  {
4595  hint = new_list.insert( hint, *it );
4596  it = all_sets.erase( it );
4597  }
4598  else
4599  {
4600  ++it;
4601  }
4602  }
4603  }
4604 
4605  candidate_sets.merge( new_list );
4606  prev_list.swap( new_list );
4607  }
4608 
4609  return resolve_shared_sets( candidate_sets, *idtag );
4610 }

References moab::Range::begin(), moab::Range::clear(), DIRICHLET_SET_TAG_NAME, moab::Range::empty(), moab::Range::end(), moab::Range::erase(), ErrorCode, filter_pstatus(), GEOM_DIMENSION_TAG_NAME, moab::Interface::get_child_meshsets(), moab::Interface::get_entities_by_handle(), moab::Interface::get_entities_by_type(), moab::Interface::get_entities_by_type_and_tag(), moab::Interface::globalId_tag(), moab::Range::insert(), moab::intersect(), moab::Range::lower_bound(), MATERIAL_SET_TAG_NAME, MB_SUCCESS, MB_TAG_ANY, MB_TYPE_INTEGER, MBENTITYSET, mbImpl, moab::Range::merge(), NEUMANN_SET_TAG_NAME, PARALLEL_PARTITION_TAG_NAME, PSTATUS_OR, PSTATUS_SHARED, moab::subtract(), moab::Range::swap(), moab::Interface::tag_get_handle(), and moab::Interface::UNION.

Referenced by moab::ReadParallel::load_file().
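
After entity resolution, set sharing is usually resolved against the same file set; a minimal hedged sketch (omitting the id tag so the known set tags listed above serve as the fallback):

 // With no id tag given, candidate sets are gathered from the GEOM_DIMENSION,
 // MATERIAL_SET, DIRICHLET_SET, NEUMANN_SET and PARALLEL_PARTITION tags.
 ErrorCode rval = pcomm->resolve_shared_sets( fileset );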

◆ resolve_shared_sets() [2/2]

ErrorCode moab::ParallelComm::resolve_shared_sets ( Range candidate_sets,
Tag  id_tag 
)

Resolve shared sets.

Use values of id_tag to match sets across processes and populate sharing data for sets.

Parameters
candidate_sets Sets to consider as potentially shared.
id_tag Tag containing global IDs for entity sets.

Definition at line 4621 of file ParallelComm.cpp.

4622 {
4623  ErrorCode result;
4624  const unsigned rk = proc_config().proc_rank();
4625  MPI_Comm cm = proc_config().proc_comm();
4626 
4627  // Build sharing list for all sets
4628 
4629  // Get ids for sets in a vector, to pass to gs
4630  std::vector< long > larray; // Allocate sufficient space for longs
4631  std::vector< Ulong > handles;
4632  Range tmp_sets;
4633  // The id tag can be size 4 or size 8
4634  // Based on that, convert to int or to long, similarly to what we do
4635  // for resolving shared vertices;
4636  // This code must work on 32 bit too, where long is 4 bytes, also
4637  // so test first size 4, then we should be fine
4638  DataType tag_type;
4639  result = mbImpl->tag_get_data_type( idtag, tag_type );MB_CHK_SET_ERR( result, "Failed getting tag data type" );
4640  int bytes_per_tag;
4641  result = mbImpl->tag_get_bytes( idtag, bytes_per_tag );MB_CHK_SET_ERR( result, "Failed getting number of bytes per tag" );
4642  // On 64 bits, long and int are different
4643  // On 32 bits, they are not; if size of long is 8, it is a 64 bit machine (really?)
4644 
4645  for( Range::iterator rit = sets.begin(); rit != sets.end(); ++rit )
4646  {
4647  if( sizeof( long ) == bytes_per_tag && ( ( MB_TYPE_HANDLE == tag_type ) || ( MB_TYPE_OPAQUE == tag_type ) ) )
4648  { // It is a special id tag
4649  long dum;
4650  result = mbImpl->tag_get_data( idtag, &( *rit ), 1, &dum );
4651  if( MB_SUCCESS == result )
4652  {
4653  larray.push_back( dum );
4654  handles.push_back( *rit );
4655  tmp_sets.insert( tmp_sets.end(), *rit );
4656  }
4657  }
4658  else if( 4 == bytes_per_tag )
4659  { // Must be GLOBAL_ID tag or MATERIAL_ID, etc
4660  int dum;
4661  result = mbImpl->tag_get_data( idtag, &( *rit ), 1, &dum );
4662  if( MB_SUCCESS == result )
4663  {
4664  larray.push_back( dum );
4665  handles.push_back( *rit );
4666  tmp_sets.insert( tmp_sets.end(), *rit );
4667  }
4668  }
4669  }
4670 
4671  const size_t nsets = handles.size();
4672 
4673  // Get handle array for sets
4674  // This is not true on windows machine, 64 bits: entity handle is 64 bit, long is 32
4675  // assert(sizeof(EntityHandle) <= sizeof(unsigned long));
4676 
4677  // Do communication of data
4678  gs_data::crystal_data* cd = procConfig.crystal_router();
4679  gs_data* gsd = new gs_data();
4680  result = gsd->initialize( nsets, &larray[0], &handles[0], 2, 1, 1, cd );MB_CHK_SET_ERR( result, "Failed to create gs data" );
4681 
4682  // Convert from global IDs grouped by process rank to list
4683  // of <idx, rank> pairs so that we can sort primarily
4684  // by idx and secondarily by rank (we want lists of procs for each
4685  // idx, not lists of indices for each proc).
4686  size_t ntuple = 0;
4687  for( unsigned p = 0; p < gsd->nlinfo->_np; p++ )
4688  ntuple += gsd->nlinfo->_nshared[p];
4689  std::vector< set_tuple > tuples;
4690  tuples.reserve( ntuple );
4691  size_t j = 0;
4692  for( unsigned p = 0; p < gsd->nlinfo->_np; p++ )
4693  {
4694  for( unsigned np = 0; np < gsd->nlinfo->_nshared[p]; np++ )
4695  {
4696  set_tuple t;
4697  t.idx = gsd->nlinfo->_sh_ind[j];
4698  t.proc = gsd->nlinfo->_target[p];
4699  t.handle = gsd->nlinfo->_ulabels[j];
4700  tuples.push_back( t );
4701  j++;
4702  }
4703  }
4704  std::sort( tuples.begin(), tuples.end() );
4705 
4706  // Release crystal router stuff
4707  gsd->reset();
4708  delete gsd;
4709 
4710  // Storing sharing data for each set
4711  size_t ti = 0;
4712  unsigned idx = 0;
4713  std::vector< unsigned > procs;
4714  Range::iterator si = tmp_sets.begin();
4715  while( si != tmp_sets.end() && ti < tuples.size() )
4716  {
4717  assert( idx <= tuples[ti].idx );
4718  if( idx < tuples[ti].idx ) si += ( tuples[ti].idx - idx );
4719  idx = tuples[ti].idx;
4720 
4721  procs.clear();
4722  size_t ti_init = ti;
4723  while( ti < tuples.size() && tuples[ti].idx == idx )
4724  {
4725  procs.push_back( tuples[ti].proc );
4726  ++ti;
4727  }
4728  assert( is_sorted_unique( procs ) );
4729 
4730  result = sharedSetData->set_sharing_procs( *si, procs );
4731  if( MB_SUCCESS != result )
4732  {
4733  std::cerr << "Failure at " __FILE__ ":" << __LINE__ << std::endl;
4734  std::cerr.flush();
4735  MPI_Abort( cm, 1 );
4736  }
4737 
4738  // Add this proc to list of sharing procs in correct position
4739  // so that all procs select owner based on same list
4740  std::vector< unsigned >::iterator it = std::lower_bound( procs.begin(), procs.end(), rk );
4741  assert( it == procs.end() || *it > rk );
4742  procs.insert( it, rk );
4743  size_t owner_idx = choose_owner_idx( procs );
4744  EntityHandle owner_handle;
4745  if( procs[owner_idx] == rk )
4746  owner_handle = *si;
4747  else if( procs[owner_idx] > rk )
4748  owner_handle = tuples[ti_init + owner_idx - 1].handle;
4749  else
4750  owner_handle = tuples[ti_init + owner_idx].handle;
4751  result = sharedSetData->set_owner( *si, procs[owner_idx], owner_handle );
4752  if( MB_SUCCESS != result )
4753  {
4754  std::cerr << "Failure at " __FILE__ ":" << __LINE__ << std::endl;
4755  std::cerr.flush();
4756  MPI_Abort( cm, 1 );
4757  }
4758 
4759  ++si;
4760  ++idx;
4761  }
4762 
4763  return MB_SUCCESS;
4764 }

References moab::Range::begin(), moab::choose_owner_idx(), moab::ProcConfig::crystal_router(), moab::dum, moab::Range::end(), ErrorCode, moab::gs_data::initialize(), moab::Range::insert(), moab::is_sorted_unique(), MB_CHK_SET_ERR, MB_SUCCESS, MB_TYPE_HANDLE, MB_TYPE_OPAQUE, mbImpl, moab::ProcConfig::proc_comm(), proc_config(), moab::ProcConfig::proc_rank(), procConfig, moab::gs_data::reset(), moab::SharedSetData::set_owner(), moab::SharedSetData::set_sharing_procs(), sharedSetData, t, moab::Interface::tag_get_bytes(), moab::Interface::tag_get_data(), and moab::Interface::tag_get_data_type().

◆ scatter_entities()

ErrorCode moab::ParallelComm::scatter_entities ( const int  from_proc,
std::vector< Range > &  entities,
const bool  adjacencies = false,
const bool  tags = true 
)

Scatter entities on from_proc to other processors. This function assumes remote handles are not being stored, since (usually) every processor will know about the whole mesh.

Parameters
from_proc Processor having the mesh to be broadcast
entities On return, the entities sent or received in this call
adjacencies If true, adjacencies are sent for equiv entities (currently unsupported)
tags If true, all non-default-valued tags are sent for sent entities

Definition at line 601 of file ParallelComm.cpp.

605 {
606 #ifndef MOAB_HAVE_MPI
607  return MB_FAILURE;
608 #else
609  ErrorCode result = MB_SUCCESS;
610  int i, success, buff_size, prev_size;
611  int nProcs = (int)procConfig.proc_size();
612  int* sendCounts = new int[nProcs];
613  int* displacements = new int[nProcs];
614  sendCounts[0] = sizeof( int );
615  displacements[0] = 0;
616  Buffer buff( INITIAL_BUFF_SIZE );
617  buff.reset_ptr( sizeof( int ) );
618  buff.set_stored_size();
619  unsigned int my_proc = procConfig.proc_rank();
620 
621  // Get buffer size array for each remote processor
622  if( my_proc == (unsigned int)from_proc )
623  {
624  for( i = 1; i < nProcs; i++ )
625  {
626  prev_size = buff.buff_ptr - buff.mem_ptr;
627  buff.reset_ptr( prev_size + sizeof( int ) );
628  result = add_verts( entities[i] );MB_CHK_SET_ERR( result, "Failed to add verts" );
629 
630  result = pack_buffer( entities[i], adjacencies, tags, false, -1, &buff );
631  if( MB_SUCCESS != result )
632  {
633  delete[] sendCounts;
634  delete[] displacements;
635  MB_SET_ERR( result, "Failed to pack buffer in scatter_entities" );
636  }
637 
638  buff_size = buff.buff_ptr - buff.mem_ptr - prev_size;
639  *( (int*)( buff.mem_ptr + prev_size ) ) = buff_size;
640  sendCounts[i] = buff_size;
641  }
642  }
643 
644  // Broadcast buffer size array
645  success = MPI_Bcast( sendCounts, nProcs, MPI_INT, from_proc, procConfig.proc_comm() );
646  if( MPI_SUCCESS != success )
647  {
648  delete[] sendCounts;
649  delete[] displacements;
650  MB_SET_ERR( MB_FAILURE, "MPI_Bcast of buffer size failed" );
651  }
652 
653  for( i = 1; i < nProcs; i++ )
654  {
655  displacements[i] = displacements[i - 1] + sendCounts[i - 1];
656  }
657 
658  Buffer rec_buff;
659  rec_buff.reserve( sendCounts[my_proc] );
660 
661  // Scatter actual geometry
662  success = MPI_Scatterv( buff.mem_ptr, sendCounts, displacements, MPI_UNSIGNED_CHAR, rec_buff.mem_ptr,
663  sendCounts[my_proc], MPI_UNSIGNED_CHAR, from_proc, procConfig.proc_comm() );
664 
665  if( MPI_SUCCESS != success )
666  {
667  delete[] sendCounts;
668  delete[] displacements;
669  MB_SET_ERR( MB_FAILURE, "MPI_Scatterv of buffer failed" );
670  }
671 
672  // Unpack in remote processors
673  if( my_proc != (unsigned int)from_proc )
674  {
675  std::vector< std::vector< EntityHandle > > dum1a, dum1b;
676  std::vector< std::vector< int > > dum1p;
677  std::vector< EntityHandle > dum2, dum4;
678  std::vector< unsigned int > dum3;
679  rec_buff.reset_ptr( sizeof( int ) );
680  result = unpack_buffer( rec_buff.buff_ptr, false, from_proc, -1, dum1a, dum1b, dum1p, dum2, dum2, dum3, dum4 );
681  if( MB_SUCCESS != result )
682  {
683  delete[] sendCounts;
684  delete[] displacements;
685  MB_SET_ERR( result, "Failed to unpack buffer in scatter_entities" );
686  }
687 
688  std::copy( dum4.begin(), dum4.end(), range_inserter( entities[my_proc] ) );
689  }
690 
691  delete[] sendCounts;
692  delete[] displacements;
693 
694  return MB_SUCCESS;
695 #endif
696 }

References add_verts(), moab::ParallelComm::Buffer::buff_ptr, entities, ErrorCode, INITIAL_BUFF_SIZE, MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, moab::ParallelComm::Buffer::mem_ptr, pack_buffer(), moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), moab::ProcConfig::proc_size(), procConfig, moab::ParallelComm::Buffer::reserve(), moab::ParallelComm::Buffer::reset_ptr(), moab::ParallelComm::Buffer::set_stored_size(), and unpack_buffer().
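
A hedged usage sketch (assuming rank 0 holds the whole mesh and fills parts[i] with the entities destined for rank i; the call is collective, so every rank participates):

 std::vector< Range > parts( pcomm->size() );
 if( 0 == pcomm->rank() )
 {
     // ... fill parts[i] for each rank i, e.g. from a partitioning tool ...
 }
 // Non-root ranks receive their piece into parts[pcomm->rank()].
 ErrorCode rval = pcomm->scatter_entities( 0, parts, false /*adjacencies*/, true /*tags*/ );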

◆ send_buffer()

ErrorCode moab::ParallelComm::send_buffer ( const unsigned int  to_proc,
Buffer *  send_buff,
const int  msg_tag,
MPI_Request &  send_req,
MPI_Request &  ack_recv_req,
int *  ack_buff,
int &  this_incoming,
int  next_mesg_tag = -1,
Buffer *  next_recv_buff = NULL,
MPI_Request *  next_recv_req = NULL,
int *  next_incoming = NULL 
)
private

send the indicated buffer, possibly sending size first

Definition at line 6086 of file ParallelComm.cpp.

6097 {
6098  ErrorCode result = MB_SUCCESS;
6099  int success;
6100 
6101  // If small message, post recv for remote handle message
6102  if( send_buff->get_stored_size() <= (int)INITIAL_BUFF_SIZE && next_recv_buff )
6103  {
6104  ( *next_incoming )++;
6105  PRINT_DEBUG_IRECV( procConfig.proc_rank(), to_proc, next_recv_buff->mem_ptr, INITIAL_BUFF_SIZE, next_mesg_tag,
6106  *next_incoming );
6107  success = MPI_Irecv( next_recv_buff->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, to_proc, next_mesg_tag,
6108  procConfig.proc_comm(), next_recv_req );
6109  if( success != MPI_SUCCESS )
6110  {
6111  MB_SET_ERR( MB_FAILURE, "Failed to post irecv for next message in ghost exchange" );
6112  }
6113  }
6114  // If large, we'll need an ack before sending the rest
6115  else if( send_buff->get_stored_size() > (int)INITIAL_BUFF_SIZE )
6116  {
6117  this_incoming++;
6118  PRINT_DEBUG_IRECV( procConfig.proc_rank(), to_proc, (unsigned char*)ack_buff, sizeof( int ), mesg_tag - 1,
6119  this_incoming );
6120  success = MPI_Irecv( (void*)ack_buff, sizeof( int ), MPI_UNSIGNED_CHAR, to_proc, mesg_tag - 1,
6121  procConfig.proc_comm(), &ack_req );
6122  if( success != MPI_SUCCESS )
6123  {
6124  MB_SET_ERR( MB_FAILURE, "Failed to post irecv for entity ack in ghost exchange" );
6125  }
6126  }
6127 
6128  // Send the buffer
6129  PRINT_DEBUG_ISEND( procConfig.proc_rank(), to_proc, send_buff->mem_ptr, mesg_tag,
6130  std::min( send_buff->get_stored_size(), (int)INITIAL_BUFF_SIZE ) );
6131  assert( 0 <= send_buff->get_stored_size() && send_buff->get_stored_size() <= (int)send_buff->alloc_size );
6132  success = MPI_Isend( send_buff->mem_ptr, std::min( send_buff->get_stored_size(), (int)INITIAL_BUFF_SIZE ),
6133  MPI_UNSIGNED_CHAR, to_proc, mesg_tag, procConfig.proc_comm(), &send_req );
6134  if( success != MPI_SUCCESS ) return MB_FAILURE;
6135 
6136  return result;
6137 }

References moab::ParallelComm::Buffer::alloc_size, ErrorCode, moab::ParallelComm::Buffer::get_stored_size(), INITIAL_BUFF_SIZE, MB_SET_ERR, MB_SUCCESS, moab::ParallelComm::Buffer::mem_ptr, PRINT_DEBUG_IRECV, PRINT_DEBUG_ISEND, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), and procConfig.

Referenced by exchange_ghost_cells(), exchange_owned_mesh(), exchange_tags(), recv_entities(), recv_messages(), reduce_tags(), send_entities(), send_recv_entities(), and settle_intersection_points().

◆ send_entities() [1/2]

ErrorCode moab::ParallelComm::send_entities ( const int  to_proc,
Range &  orig_ents,
const bool  adjs,
const bool  tags,
const bool  store_remote_handles,
const bool  is_iface,
Range final_ents,
int &  incoming1,
int &  incoming2,
TupleList &  entprocs,
std::vector< MPI_Request > &  recv_remoteh_reqs,
bool  wait_all = true 
)

send entities to another processor, optionally waiting until it's done

Send entities to another processor, with adjs, sets, and tags. If store_remote_handles is true, this call receives back handles assigned to entities sent to destination processor and stores them in sharedh_tag or sharedhs_tag.

Parameters
to_proc Destination processor
orig_ents Entities requested to send
adjs If true, send adjacencies for equiv entities (currently unsupported)
tags If true, send tag values for all tags assigned to entities
store_remote_handles If true, also recv message with handles on destination processor (currently unsupported)
final_ents Range containing all entities sent
incoming Keep track of whether any messages are coming to this processor (newly added)
wait_all If true, wait until all messages received/sent complete

Definition at line 698 of file ParallelComm.cpp.

710 {
711 #ifndef MOAB_HAVE_MPI
712  return MB_FAILURE;
713 #else
714  // Pack entities to local buffer
715  int ind = get_buffers( to_proc );
716  localOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
717 
718  // Add vertices
719  ErrorCode result = add_verts( orig_ents );MB_CHK_SET_ERR( result, "Failed to add verts in send_entities" );
720 
721  // Filter out entities already shared with destination
722  Range tmp_range;
723  result = filter_pstatus( orig_ents, PSTATUS_SHARED, PSTATUS_AND, to_proc, &tmp_range );MB_CHK_SET_ERR( result, "Failed to filter on owner" );
724  if( !tmp_range.empty() )
725  {
726  orig_ents = subtract( orig_ents, tmp_range );
727  }
728 
729  result = pack_buffer( orig_ents, adjs, tags, store_remote_handles, to_proc, localOwnedBuffs[ind], &entprocs );MB_CHK_SET_ERR( result, "Failed to pack buffer in send_entities" );
730 
731  // Send buffer
732  result = send_buffer( to_proc, localOwnedBuffs[ind], MB_MESG_ENTS_SIZE, sendReqs[2 * ind], recvReqs[2 * ind + 1],
733  (int*)( remoteOwnedBuffs[ind]->mem_ptr ),
734  //&ackbuff,
735  incoming1, MB_MESG_REMOTEH_SIZE,
736  ( !is_iface && store_remote_handles ? localOwnedBuffs[ind] : NULL ),
737  &recv_remoteh_reqs[2 * ind], &incoming2 );MB_CHK_SET_ERR( result, "Failed to send buffer" );
738 
739  return MB_SUCCESS;
740 #endif
741 }

References add_verts(), moab::Range::empty(), ErrorCode, filter_pstatus(), get_buffers(), localOwnedBuffs, MB_CHK_SET_ERR, moab::MB_MESG_ENTS_SIZE, moab::MB_MESG_REMOTEH_SIZE, MB_SUCCESS, pack_buffer(), PSTATUS_AND, PSTATUS_SHARED, recvReqs, remoteOwnedBuffs, send_buffer(), sendReqs, and moab::subtract().

◆ send_entities() [2/2]

ErrorCode moab::ParallelComm::send_entities ( std::vector< unsigned int > &  send_procs,
std::vector< Range * > &  send_ents,
int &  incoming1,
int &  incoming2,
const bool  store_remote_handles 
)

Definition at line 743 of file ParallelComm.cpp.

748 {
749 #ifdef MOAB_HAVE_MPE
750  if( myDebug->get_verbosity() == 2 )
751  {
752  MPE_Log_event( OWNED_START, procConfig.proc_rank(), "Starting send_entities." );
753  }
754 #endif
755  myDebug->tprintf( 1, "Entering send_entities\n" );
756  if( myDebug->get_verbosity() == 4 )
757  {
758  msgs.clear();
759  msgs.reserve( MAX_SHARING_PROCS );
760  }
761 
762  unsigned int i;
763  int ind;
764  ErrorCode result = MB_SUCCESS;
765 
766  // Set buffProcs with communicating procs
767  unsigned int n_proc = send_procs.size();
768  for( i = 0; i < n_proc; i++ )
769  {
770  ind = get_buffers( send_procs[i] );
771  result = add_verts( *send_ents[i] );MB_CHK_SET_ERR( result, "Failed to add verts" );
772 
773  // Filter out entities already shared with destination
774  Range tmp_range;
775  result = filter_pstatus( *send_ents[i], PSTATUS_SHARED, PSTATUS_AND, buffProcs[ind], &tmp_range );MB_CHK_SET_ERR( result, "Failed to filter on owner" );
776  if( !tmp_range.empty() )
777  {
778  *send_ents[i] = subtract( *send_ents[i], tmp_range );
779  }
780  }
781 
782  //===========================================
783  // Get entities to be sent to neighbors
784  // Need to get procs each entity is sent to
785  //===========================================
786  Range allsent, tmp_range;
787  int npairs = 0;
788  TupleList entprocs;
789  for( i = 0; i < n_proc; i++ )
790  {
791  int n_ents = send_ents[i]->size();
792  if( n_ents > 0 )
793  {
794  npairs += n_ents; // Get the total # of proc/handle pairs
795  allsent.merge( *send_ents[i] );
796  }
797  }
798 
799  // Allocate a TupleList of that size
800  entprocs.initialize( 1, 0, 1, 0, npairs );
801  entprocs.enableWriteAccess();
802 
803  // Put the proc/handle pairs in the list
804  for( i = 0; i < n_proc; i++ )
805  {
806  for( Range::iterator rit = send_ents[i]->begin(); rit != send_ents[i]->end(); ++rit )
807  {
808  entprocs.vi_wr[entprocs.get_n()] = send_procs[i];
809  entprocs.vul_wr[entprocs.get_n()] = *rit;
810  entprocs.inc_n();
811  }
812  }
813 
814  // Sort by handle
815  moab::TupleList::buffer sort_buffer;
816  sort_buffer.buffer_init( npairs );
817  entprocs.sort( 1, &sort_buffer );
818  entprocs.disableWriteAccess();
819  sort_buffer.reset();
820 
821  myDebug->tprintf( 1, "allsent ents compactness (size) = %f (%lu)\n", allsent.compactness(),
822  (unsigned long)allsent.size() );
823 
824  //===========================================
825  // Pack and send ents from this proc to others
826  //===========================================
827  for( i = 0; i < n_proc; i++ )
828  {
829  if( send_ents[i]->size() > 0 )
830  {
831  ind = get_buffers( send_procs[i] );
832  myDebug->tprintf( 1, "Sent ents compactness (size) = %f (%lu)\n", send_ents[i]->compactness(),
833  (unsigned long)send_ents[i]->size() );
834  // Reserve space on front for size and for initial buff size
835  localOwnedBuffs[ind]->reset_buffer( sizeof( int ) );
836  result = pack_buffer( *send_ents[i], false, true, store_remote_handles, buffProcs[ind],
837  localOwnedBuffs[ind], &entprocs, &allsent );
838 
839  if( myDebug->get_verbosity() == 4 )
840  {
841  msgs.resize( msgs.size() + 1 );
842  msgs.back() = new Buffer( *localOwnedBuffs[ind] );
843  }
844 
845  // Send the buffer (size stored in front in send_buffer)
846  result = send_buffer( send_procs[i], localOwnedBuffs[ind], MB_MESG_ENTS_SIZE, sendReqs[2 * ind],
847  recvReqs[2 * ind + 1], &ackbuff, incoming1, MB_MESG_REMOTEH_SIZE,
848  ( store_remote_handles ? localOwnedBuffs[ind] : NULL ), &recvRemotehReqs[2 * ind],
849  &incoming2 );MB_CHK_SET_ERR( result, "Failed to Isend in ghost send" );
850  }
851  }
852  entprocs.reset();
853 
854 #ifdef MOAB_HAVE_MPE
855  if( myDebug->get_verbosity() == 2 )
856  {
857  MPE_Log_event( ENTITIES_END, procConfig.proc_rank(), "Ending send_entities." );
858  }
859 #endif
860 
861  return MB_SUCCESS;
862 }

References ackbuff, add_verts(), buffProcs, moab::Range::compactness(), moab::TupleList::disableWriteAccess(), moab::Range::empty(), moab::TupleList::enableWriteAccess(), ErrorCode, filter_pstatus(), get_buffers(), moab::TupleList::get_n(), moab::DebugOutput::get_verbosity(), moab::TupleList::inc_n(), moab::TupleList::initialize(), localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, moab::MB_MESG_ENTS_SIZE, moab::MB_MESG_REMOTEH_SIZE, MB_SUCCESS, moab::Range::merge(), MPE_Log_event, moab::msgs, myDebug, pack_buffer(), moab::ProcConfig::proc_rank(), procConfig, PSTATUS_AND, PSTATUS_SHARED, recvRemotehReqs, recvReqs, moab::TupleList::buffer::reset(), moab::TupleList::reset(), send_buffer(), sendReqs, moab::Range::size(), size(), moab::TupleList::sort(), moab::subtract(), moab::DebugOutput::tprintf(), moab::TupleList::vi_wr, and moab::TupleList::vul_wr.
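
A send must be paired with a matching receive on each destination; a heavily hedged sketch of the vector overload (the posted receives and incoming counters are managed internally via get_buffers, so this illustrates only the pairing, not a complete protocol; ents_for_rank1 is an illustrative Range):

 // Sender (rank 0): one Range* per destination rank.
 std::vector< unsigned int > send_procs( 1, 1 );
 std::vector< Range* > send_ents( 1, &ents_for_rank1 );
 int incoming1 = 0, incoming2 = 0;
 ErrorCode rval = pcomm->send_entities( send_procs, send_ents, incoming1, incoming2, true );

 // Receiver (rank 1): matching call on the destination.
 std::set< unsigned int > recv_procs;
 recv_procs.insert( 0 );
 rval = pcomm->recv_entities( recv_procs, incoming1, incoming2, true );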

◆ send_recv_entities()

ErrorCode moab::ParallelComm::send_recv_entities ( std::vector< int > &  send_procs,
std::vector< std::vector< int > > &  msgsizes,
std::vector< std::vector< EntityHandle > > &  senddata,
std::vector< std::vector< EntityHandle > > &  recvdata 
)

Sends and receives data from a set of processors.

Definition at line 873 of file ParallelComm.cpp.

877 {
878 #ifdef USE_MPE
879  if( myDebug->get_verbosity() == 2 )
880  {
881  MPE_Log_event( OWNED_START, procConfig.proc_rank(), "Starting send_recv_entities." );
882  }
883 #endif
884  myDebug->tprintf( 1, "Entering send_recv_entities\n" );
885  if( myDebug->get_verbosity() == 4 )
886  {
887  msgs.clear();
888  msgs.reserve( MAX_SHARING_PROCS );
889  }
890 
891  // unsigned int i;
892  int i, ind, success;
893  ErrorCode error;
894 
895  //===========================================
896  // Pack and send ents from this proc to others
897  //===========================================
898 
899  // std::cout<<"resetting all buffers"<<std::endl;
900 
900 
901  reset_all_buffers();
902  sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
903  std::vector< MPI_Request > recv_ent_reqs( 3 * buffProcs.size(), MPI_REQUEST_NULL );
904  int ack_buff;
905  int incoming = 0;
906 
907  std::vector< unsigned int >::iterator sit;
908 
909  for( ind = 0, sit = buffProcs.begin(); sit != buffProcs.end(); ++sit, ind++ )
910  {
911  incoming++;
912  PRINT_DEBUG_IRECV( procConfig.proc_rank(), *sit, remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE,
913  MB_MESG_ENTS_SIZE, incoming );
914 
915  success = MPI_Irecv( remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, *sit,
916  MB_MESG_ENTS_SIZE, procConfig.proc_comm(), &recv_ent_reqs[3 * ind] );
917  if( success != MPI_SUCCESS )
918  {
919  MB_SET_ERR( MB_FAILURE, "Failed to post irecv in send_recv_entities" );
920  }
921  }
922 
923  // std::set<unsigned int>::iterator it;
924  for( i = 0; i < (int)send_procs.size(); i++ )
925  {
926  // Get index of the shared processor in the local buffer
927  ind = get_buffers( send_procs[i] );
928  localOwnedBuffs[ind]->reset_buffer( sizeof( int ) );
929 
930  int buff_size = msgsizes[i].size() * sizeof( int ) + senddata[i].size() * sizeof( EntityHandle );
931  localOwnedBuffs[ind]->check_space( buff_size );
932 
933  // Pack entities
934  std::vector< int > msg;
935  msg.insert( msg.end(), msgsizes[i].begin(), msgsizes[i].end() );
936  PACK_INTS( localOwnedBuffs[ind]->buff_ptr, &msg[0], msg.size() );
937 
938  std::vector< EntityHandle > entities;
939  entities.insert( entities.end(), senddata[i].begin(), senddata[i].end() );
940  PACK_EH( localOwnedBuffs[ind]->buff_ptr, &entities[0], entities.size() );
941  localOwnedBuffs[ind]->set_stored_size();
942 
943  if( myDebug->get_verbosity() == 4 )
944  {
945  msgs.resize( msgs.size() + 1 );
946  msgs.back() = new Buffer( *localOwnedBuffs[ind] );
947  }
948 
949  // Send the buffer (size stored in front in send_buffer)
950  error = send_buffer( send_procs[i], localOwnedBuffs[ind], MB_MESG_ENTS_SIZE, sendReqs[3 * ind],
951  recv_ent_reqs[3 * ind + 2], &ack_buff, incoming );MB_CHK_SET_ERR( error, "Failed to Isend in send_recv_entities" );
952  }
953 
954  //===========================================
955  // Receive and unpack ents from received data
956  //===========================================
957 
958  while( incoming )
959  {
960 
961  MPI_Status status;
962  int index_in_recv_requests;
963 
964  PRINT_DEBUG_WAITANY( recv_ent_reqs, MB_MESG_ENTS_SIZE, procConfig.proc_rank() );
965  success = MPI_Waitany( 3 * buffProcs.size(), &recv_ent_reqs[0], &index_in_recv_requests, &status );
966  if( MPI_SUCCESS != success )
967  {
968  MB_SET_ERR( MB_FAILURE, "Failed in waitany in send_recv_entities" );
969  }
970 
971  // Processor index in the list is divided by 3
972  ind = index_in_recv_requests / 3;
973 
974  PRINT_DEBUG_RECD( status );
975 
976  // OK, received something; decrement incoming counter
977  incoming--;
978 
979  bool done = false;
980 
981  error = recv_buffer( MB_MESG_ENTS_SIZE, status, remoteOwnedBuffs[ind],
982  recv_ent_reqs[3 * ind + 1], // This is for receiving the second message
983  recv_ent_reqs[3 * ind + 2], // This would be for ack, but it is not
984  // used; consider removing it
985  incoming, localOwnedBuffs[ind],
986  sendReqs[3 * ind + 1], // Send request for sending the second message
987  sendReqs[3 * ind + 2], // This is for sending the ack
988  done );MB_CHK_SET_ERR( error, "Failed to resize recv buffer" );
989 
990  if( done )
991  {
992  remoteOwnedBuffs[ind]->reset_ptr( sizeof( int ) );
993 
994  int from_proc = status.MPI_SOURCE;
995  int idx = std::find( send_procs.begin(), send_procs.end(), from_proc ) - send_procs.begin();
996 
997  int msg = msgsizes[idx].size();
998  std::vector< int > recvmsg( msg );
999  int ndata = senddata[idx].size();
1000  std::vector< EntityHandle > dum_vec( ndata );
1001 
1002  UNPACK_INTS( remoteOwnedBuffs[ind]->buff_ptr, &recvmsg[0], msg );
1003  UNPACK_EH( remoteOwnedBuffs[ind]->buff_ptr, &dum_vec[0], ndata );
1004 
1005  recvdata[idx].insert( recvdata[idx].end(), dum_vec.begin(), dum_vec.end() );
1006  }
1007  }
1008 
1009 #ifdef USE_MPE
1010  if( myDebug->get_verbosity() == 2 )
1011  {
1012  MPE_Log_event( ENTITIES_END, procConfig.proc_rank(), "Ending send_recv_entities." );
1013  }
1014 #endif
1015 
1016  return MB_SUCCESS;
1017 }

References buffProcs, entities, moab::error(), ErrorCode, get_buffers(), moab::DebugOutput::get_verbosity(), INITIAL_BUFF_SIZE, localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, moab::MB_MESG_ENTS_SIZE, MB_SET_ERR, MB_SUCCESS, MPE_Log_event, moab::msgs, myDebug, moab::PACK_EH(), moab::PACK_INTS(), PRINT_DEBUG_IRECV, PRINT_DEBUG_RECD, PRINT_DEBUG_WAITANY, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, recv_buffer(), remoteOwnedBuffs, reset_all_buffers(), send_buffer(), sendReqs, size(), moab::DebugOutput::tprintf(), moab::UNPACK_EH(), and moab::UNPACK_INTS().
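
A hedged sketch of a symmetric handle exchange with one neighbor (my_handles and the message layout are illustrative; the neighbor must make the mirrored call with this rank in its send_procs):

 std::vector< int > send_procs( 1, 1 );  // exchange with rank 1
 std::vector< std::vector< int > > msgsizes( 1, std::vector< int >( 1, 3 ) );
 std::vector< std::vector< EntityHandle > > senddata( 1 ), recvdata( 1 );
 senddata[0].assign( my_handles, my_handles + 3 );  // 3 handles to send
 ErrorCode rval = pcomm->send_recv_entities( send_procs, msgsizes, senddata, recvdata );
 // recvdata[0] now holds the handles received from rank 1.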

◆ set_debug_verbosity()

void moab::ParallelComm::set_debug_verbosity ( int  verb)

set the verbosity level of output from this pcomm

Definition at line 8867 of file ParallelComm.cpp.

8868 {
8869  myDebug->set_verbosity( verb );
8870 }

References myDebug, and moab::DebugOutput::set_verbosity().

Referenced by moab::ReadParallel::load_file().
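
For example, to trace communication from an existing pcomm (higher levels print progressively more detail; in this class level 4 additionally records message buffers):

 ParallelComm* pcomm = ParallelComm::get_pcomm( mb, 0 );  // instance with index 0
 if( pcomm ) pcomm->set_debug_verbosity( 1 );             // high-level progress output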

◆ set_partitioning()

ErrorCode moab::ParallelComm::set_partitioning ( EntityHandle  h)

Definition at line 8075 of file ParallelComm.cpp.

8076 {
8077  ErrorCode rval;
8078  Tag prtn_tag;
8079  rval = mbImpl->tag_get_handle( PARTITIONING_PCOMM_TAG_NAME, 1, MB_TYPE_INTEGER, prtn_tag,
8080  MB_TAG_SPARSE | MB_TAG_CREAT );
8081  if( MB_SUCCESS != rval ) return rval;
8082 
8083  // Get my id
8084  ParallelComm* pcomm_arr[MAX_SHARING_PROCS];
8085  Tag pc_tag = pcomm_tag( mbImpl, false );
8086  if( 0 == pc_tag ) return MB_FAILURE;
8087  const EntityHandle root = 0;
8088  ErrorCode result = mbImpl->tag_get_data( pc_tag, &root, 1, pcomm_arr );
8089  if( MB_SUCCESS != result ) return MB_FAILURE;
8090  int id = std::find( pcomm_arr, pcomm_arr + MAX_SHARING_PROCS, this ) - pcomm_arr;
8091  if( id == MAX_SHARING_PROCS ) return MB_FAILURE;
8092 
8093  EntityHandle old = partitioningSet;
8094  if( old )
8095  {
8096  rval = mbImpl->tag_delete_data( prtn_tag, &old, 1 );
8097  if( MB_SUCCESS != rval ) return rval;
8098  partitioningSet = 0;
8099  }
8100 
8101  if( !set ) return MB_SUCCESS;
8102 
8103  Range contents;
8104  if( old )
8105  {
8106  rval = mbImpl->get_entities_by_handle( old, contents );
8107  if( MB_SUCCESS != rval ) return rval;
8108  }
8109  else
8110  {
8111  contents = partition_sets();
8112  }
8113 
8114  rval = mbImpl->add_entities( set, contents );
8115  if( MB_SUCCESS != rval ) return rval;
8116 
8117  // Store pcomm id on new partition set
8118  rval = mbImpl->tag_set_data( prtn_tag, &set, 1, &id );
8119  if( MB_SUCCESS != rval ) return rval;
8120 
8121  partitioningSet = set;
8122  return MB_SUCCESS;
8123 }

References moab::Interface::add_entities(), ErrorCode, moab::Interface::get_entities_by_handle(), MAX_SHARING_PROCS, MB_SUCCESS, MB_TAG_CREAT, MB_TAG_SPARSE, MB_TYPE_INTEGER, mbImpl, partition_sets(), moab::PARTITIONING_PCOMM_TAG_NAME, partitioningSet, pcomm_tag(), moab::Interface::tag_delete_data(), moab::Interface::tag_get_data(), moab::Interface::tag_get_handle(), and moab::Interface::tag_set_data().

Referenced by get_pcomm().

◆ set_pstatus_entities() [1/2]

ErrorCode moab::ParallelComm::set_pstatus_entities ( EntityHandle *  pstatus_ents,
int  num_ents,
unsigned char  pstatus_val,
bool  lower_dim_ents = false,
bool  verts_too = true,
int  operation = Interface::UNION 
)
private

Set pstatus values on entities (vector-based function)

Parameters
pstatus_ents Entities to be set
pstatus_val Pstatus value to be set
lower_dim_ents If true, lower-dimensional ents (incl. vertices) set too (and created if they don't exist)
verts_too If true, vertices also set
operation If UNION, pstatus_val is OR-d with existing value, otherwise existing value is over-written

Definition at line 4445 of file ParallelComm.cpp.

4451 {
4452  std::vector< unsigned char > pstatus_vals( num_ents );
4453  ErrorCode result;
4454  if( lower_dim_ents || verts_too )
4455  {
4456  // In this case, call the range-based version
4457  Range tmp_range;
4458  std::copy( pstatus_ents, pstatus_ents + num_ents, range_inserter( tmp_range ) );
4459  return set_pstatus_entities( tmp_range, pstatus_val, lower_dim_ents, verts_too, operation );
4460  }
4461 
4462  if( Interface::UNION == operation )
4463  {
4464  result = mbImpl->tag_get_data( pstatus_tag(), pstatus_ents, num_ents, &pstatus_vals[0] );MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" );
4465  for( unsigned int i = 0; i < (unsigned int)num_ents; i++ )
4466  pstatus_vals[i] |= pstatus_val;
4467  }
4468  else
4469  {
4470  for( unsigned int i = 0; i < (unsigned int)num_ents; i++ )
4471  pstatus_vals[i] = pstatus_val;
4472  }
4473  result = mbImpl->tag_set_data( pstatus_tag(), pstatus_ents, num_ents, &pstatus_vals[0] );MB_CHK_SET_ERR( result, "Failed to set pstatus tag data" );
4474 
4475  return MB_SUCCESS;
4476 }

References ErrorCode, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, pstatus_tag(), set_pstatus_entities(), moab::Interface::tag_get_data(), moab::Interface::tag_set_data(), and moab::Interface::UNION.

◆ set_pstatus_entities() [2/2]

ErrorCode moab::ParallelComm::set_pstatus_entities ( Range &  pstatus_ents,
unsigned char  pstatus_val,
bool  lower_dim_ents = false,
bool  verts_too = true,
int  operation = Interface::UNION 
)
private

Set pstatus values on entities.

Parameters
pstatus_ents Entities to be set
pstatus_val Pstatus value to be set
lower_dim_ents If true, lower-dimensional ents (incl. vertices) set too (and created if they don't exist)
verts_too If true, vertices also set
operation If UNION, pstatus_val is OR-d with existing value, otherwise existing value is over-written

Definition at line 4410 of file ParallelComm.cpp.

4415 {
4416  std::vector< unsigned char > pstatus_vals( pstatus_ents.size() );
4417  Range all_ents, *range_ptr = &pstatus_ents;
4418  ErrorCode result;
4419  if( lower_dim_ents || verts_too )
4420  {
4421  all_ents = pstatus_ents;
4422  range_ptr = &all_ents;
4423  int start_dim = ( lower_dim_ents ? mbImpl->dimension_from_handle( *pstatus_ents.rbegin() ) - 1 : 0 );
4424  for( ; start_dim >= 0; start_dim-- )
4425  {
4426  result = mbImpl->get_adjacencies( all_ents, start_dim, true, all_ents, Interface::UNION );MB_CHK_SET_ERR( result, "Failed to get adjacencies for pstatus entities" );
4427  }
4428  }
4429  if( Interface::UNION == operation )
4430  {
4431  result = mbImpl->tag_get_data( pstatus_tag(), *range_ptr, &pstatus_vals[0] );MB_CHK_SET_ERR( result, "Failed to get pstatus tag data" );
4432  for( unsigned int i = 0; i < pstatus_vals.size(); i++ )
4433  pstatus_vals[i] |= pstatus_val;
4434  }
4435  else
4436  {
4437  for( unsigned int i = 0; i < pstatus_vals.size(); i++ )
4438  pstatus_vals[i] = pstatus_val;
4439  }
4440  result = mbImpl->tag_set_data( pstatus_tag(), *range_ptr, &pstatus_vals[0] );MB_CHK_SET_ERR( result, "Failed to set pstatus tag data" );
4441 
4442  return MB_SUCCESS;
4443 }

References moab::Interface::dimension_from_handle(), ErrorCode, moab::Interface::get_adjacencies(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, pstatus_tag(), moab::Range::rbegin(), moab::Range::size(), moab::Interface::tag_get_data(), moab::Interface::tag_set_data(), and moab::Interface::UNION.

Referenced by set_pstatus_entities().

◆ set_rank()

void moab::ParallelComm::set_rank ( unsigned int  r)
inline

set rank for this pcomm; USED FOR TESTING ONLY!

Definition at line 1662 of file ParallelComm.hpp.

1663 {
1664  procConfig.proc_rank( r );
1665  if( procConfig.proc_size() < r ) procConfig.proc_size( r + 1 );
1666 }

References moab::ProcConfig::proc_rank(), moab::ProcConfig::proc_size(), and procConfig.

◆ set_recv_request()

void moab::ParallelComm::set_recv_request ( int  n_request)
inline

Definition at line 1702 of file ParallelComm.hpp.

1703 {
1704  recvReqs.resize( n_request, MPI_REQUEST_NULL );
1705 }

References recvReqs.

◆ set_send_request()

void moab::ParallelComm::set_send_request ( int  n_request)
inline

Definition at line 1697 of file ParallelComm.hpp.

1698 {
1699  sendReqs.resize( n_request, MPI_REQUEST_NULL );
1700 }

References sendReqs.

◆ set_sharing_data()

ErrorCode moab::ParallelComm::set_sharing_data ( EntityHandle  ent,
unsigned char  pstatus,
int  old_nump,
int  new_nump,
int *  ps,
EntityHandle *  hs 
)
private

Definition at line 6422 of file ParallelComm.cpp.

6428 {
6429  // If new nump is less than 3, the entity is no longer multishared
6430  if( old_nump > 2 && ( pstatus & PSTATUS_MULTISHARED ) && new_nump < 3 )
6431  {
6432  // Unset multishared flag
6433  pstatus ^= PSTATUS_MULTISHARED;
6434  }
6435 
6436  // Check for consistency in input data
6437  // DBG
6438  /* bool con1 = ((new_nump == 2 && pstatus&PSTATUS_SHARED && !(pstatus&PSTATUS_MULTISHARED)) ||
6439  (new_nump > 2 && pstatus&PSTATUS_SHARED && pstatus&PSTATUS_MULTISHARED)); bool con2 =
6440  (!(pstatus&PSTATUS_GHOST) || pstatus&PSTATUS_SHARED); bool con3 = (new_nump < 3 ||
6441  (pstatus&PSTATUS_NOT_OWNED && ps[0] != (int)rank()) || (!(pstatus&PSTATUS_NOT_OWNED) && ps[0]
6442  == (int)rank())); std::cout<<"current rank = "<<rank()<<std::endl; std::cout<<"condition
6443  1::"<<con1<<std::endl; std::cout<<"condition 2::"<<con2<<std::endl; std::cout<<"condition
6444  3::"<<con3<<std::endl;*/
6445 
6446  // DBG
6447 
6448  assert( new_nump > 1 &&
6449  ( ( new_nump == 2 && pstatus & PSTATUS_SHARED &&
6450  !( pstatus & PSTATUS_MULTISHARED ) ) || // If <= 2 must not be multishared
6451  ( new_nump > 2 && pstatus & PSTATUS_SHARED &&
6452  pstatus & PSTATUS_MULTISHARED ) ) && // If > 2 procs, must be multishared
6453  ( !( pstatus & PSTATUS_GHOST ) || pstatus & PSTATUS_SHARED ) && // If ghost, it must also be shared
6454  ( new_nump < 3 ||
6455  ( pstatus & PSTATUS_NOT_OWNED && ps[0] != (int)rank() ) || // I'm not owner and first proc not me
6456  ( !( pstatus & PSTATUS_NOT_OWNED ) && ps[0] == (int)rank() ) ) // I'm owner and first proc is me
6457  );
6458 
6459 #ifndef NDEBUG
6460  {
6461  // Check for duplicates in proc list
6462  std::set< unsigned int > dumprocs;
6463  int dp = 0;
6464  for( ; dp < old_nump && -1 != ps[dp]; dp++ )
6465  dumprocs.insert( ps[dp] );
6466  assert( dp == (int)dumprocs.size() );
6467  }
6468 #endif
6469 
6470  ErrorCode result;
6471  // Reset any old data that needs to be
6472  if( old_nump > 2 && new_nump < 3 )
6473  {
6474  // Need to remove multishared tags
6475  result = mbImpl->tag_delete_data( sharedps_tag(), &ent, 1 );MB_CHK_SET_ERR( result, "set_sharing_data:1" );
6476  result = mbImpl->tag_delete_data( sharedhs_tag(), &ent, 1 );MB_CHK_SET_ERR( result, "set_sharing_data:2" );
6477  // if (new_nump < 2)
6478  // pstatus = 0x0;
6479  // else if (ps[0] != (int)proc_config().proc_rank())
6480  // pstatus |= PSTATUS_NOT_OWNED;
6481  }
6482  else if( ( old_nump < 3 && new_nump > 2 ) || ( old_nump > 1 && new_nump == 1 ) )
6483  {
6484  // Reset sharedp and sharedh tags
6485  int tmp_p = -1;
6486  EntityHandle tmp_h = 0;
6487  result = mbImpl->tag_set_data( sharedp_tag(), &ent, 1, &tmp_p );MB_CHK_SET_ERR( result, "set_sharing_data:3" );
6488  result = mbImpl->tag_set_data( sharedh_tag(), &ent, 1, &tmp_h );MB_CHK_SET_ERR( result, "set_sharing_data:4" );
6489  }
6490 
6491  assert( "check for multishared/owner I'm first proc" &&
6492  ( !( pstatus & PSTATUS_MULTISHARED ) || ( pstatus & ( PSTATUS_NOT_OWNED | PSTATUS_GHOST ) ) ||
6493  ( ps[0] == (int)rank() ) ) &&
6494  "interface entities should have > 1 proc" && ( !( pstatus & PSTATUS_INTERFACE ) || new_nump > 1 ) &&
6495  "ghost entities should have > 1 proc" && ( !( pstatus & PSTATUS_GHOST ) || new_nump > 1 ) );
6496 
6497  // Now set new data
6498  if( new_nump > 2 )
6499  {
6500  result = mbImpl->tag_set_data( sharedps_tag(), &ent, 1, ps );MB_CHK_SET_ERR( result, "set_sharing_data:5" );
6501  result = mbImpl->tag_set_data( sharedhs_tag(), &ent, 1, hs );MB_CHK_SET_ERR( result, "set_sharing_data:6" );
6502  }
6503  else
6504  {
6505  unsigned int j = ( ps[0] == (int)procConfig.proc_rank() ? 1 : 0 );
6506  assert( -1 != ps[j] );
6507  result = mbImpl->tag_set_data( sharedp_tag(), &ent, 1, ps + j );MB_CHK_SET_ERR( result, "set_sharing_data:7" );
6508  result = mbImpl->tag_set_data( sharedh_tag(), &ent, 1, hs + j );MB_CHK_SET_ERR( result, "set_sharing_data:8" );
6509  }
6510 
6511  result = mbImpl->tag_set_data( pstatus_tag(), &ent, 1, &pstatus );MB_CHK_SET_ERR( result, "set_sharing_data:9" );
6512 
6513  if( old_nump > 1 && new_nump < 2 ) sharedEnts.erase( ent );
6514 
6515  return result;
6516 }

References ErrorCode, MB_CHK_SET_ERR, mbImpl, moab::ProcConfig::proc_rank(), procConfig, PSTATUS_GHOST, PSTATUS_INTERFACE, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, pstatus_tag(), rank(), sharedEnts, sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), moab::Interface::tag_delete_data(), and moab::Interface::tag_set_data().

Referenced by check_clean_iface(), and update_remote_data().
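The assertions above encode the pstatus invariants maintained throughout this class; a small illustrative sketch of the legal flag combinations (the PSTATUS_* bits are defined in MBParallelConventions.h):

    // Illustrative flag combinations mirroring the asserted invariants:
    unsigned char shared_two  = PSTATUS_SHARED;                       // shared with exactly one other proc
    unsigned char shared_many = PSTATUS_SHARED | PSTATUS_MULTISHARED; // shared among three or more procs
    unsigned char ghost       = PSTATUS_GHOST | PSTATUS_SHARED;      // a ghost must also be shared
    unsigned char not_owned   = PSTATUS_SHARED | PSTATUS_NOT_OWNED;  // owner is ps[0], not this rank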

◆ set_size()

void moab::ParallelComm::set_size ( unsigned int  s)
inline

set size for this pcomm; USED FOR TESTING ONLY!

Definition at line 1668 of file ParallelComm.hpp.

1669 {
1670  procConfig.proc_size( s );
1671 }

References moab::ProcConfig::proc_size(), and procConfig.

◆ settle_intersection_points()

ErrorCode moab::ParallelComm::settle_intersection_points ( Range &  edges,
Range &  shared_edges_owned,
std::vector< std::vector< EntityHandle > * > &  extraNodesVec,
double  tolerance 
)

Definition at line 9036 of file ParallelComm.cpp.

9040 {
9041  // The index of an edge in the edges Range will give the index for extraNodesVec
9042  // The strategy here follows the exchange_tags strategy:
9043  ErrorCode result;
9044  int success;
9045 
9046  myDebug->tprintf( 1, "Entering settle_intersection_points\n" );
9047 
9048  // Get all procs interfacing to this proc
9049  std::set< unsigned int > exch_procs;
9050  result = get_comm_procs( exch_procs );
9051 
9052  // Post ghost irecv's for all interface procs
9053  // Index requests the same as buffer/sharing procs indices
9054  std::vector< MPI_Request > recv_intx_reqs( 3 * buffProcs.size(), MPI_REQUEST_NULL );
9055  std::vector< unsigned int >::iterator sit;
9056  int ind;
9057 
9058  reset_all_buffers();
9059  int incoming = 0;
9060 
9061  for( ind = 0, sit = buffProcs.begin(); sit != buffProcs.end(); ++sit, ind++ )
9062  {
9063  incoming++;
9064  PRINT_DEBUG_IRECV( *sit, procConfig.proc_rank(), remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE,
9065  MB_MESG_TAGS_SIZE, incoming );
9066 
9067  success = MPI_Irecv( remoteOwnedBuffs[ind]->mem_ptr, INITIAL_BUFF_SIZE, MPI_UNSIGNED_CHAR, *sit,
9068  MB_MESG_TAGS_SIZE, procConfig.proc_comm(), &recv_intx_reqs[3 * ind] );
9069  if( success != MPI_SUCCESS )
9070  {
9071  MB_SET_ERR( MB_FAILURE, "Failed to post irecv in settle intersection point" );
9072  }
9073  }
9074 
9075  // Pack and send intersection points from this proc to others
9076  // Make sendReqs vector to simplify initialization
9077  sendReqs.resize( 3 * buffProcs.size(), MPI_REQUEST_NULL );
9078 
9079  // Take all shared entities if incoming list is empty
9080  Range& entities = shared_edges_owned;
9081 
9082  int dum_ack_buff;
9083 
9084  for( ind = 0, sit = buffProcs.begin(); sit != buffProcs.end(); ++sit, ind++ )
9085  {
9086  Range edges_to_send = entities;
9087 
9088  // Get ents shared by proc *sit
9089  result = filter_pstatus( edges_to_send, PSTATUS_SHARED, PSTATUS_AND, *sit );MB_CHK_SET_ERR( result, "Failed pstatus AND check" );
9090 
9091  // Remove nonowned entities; not needed here, since edges are already owned by this proc
9092 
9093  // Pack the data
9094  // Reserve space on front for size and for initial buff size
9095  Buffer* buff = localOwnedBuffs[ind];
9096  buff->reset_ptr( sizeof( int ) );
9097 
9098  /*result = pack_intx_points(edges_to_send, edges, extraNodesVec,
9099  localOwnedBuffs[ind], *sit);*/
9100 
9101  // Count the data first, to check whether there is enough room
9102  // Send the remote handles
9103  std::vector< EntityHandle > dum_remote_edges( edges_to_send.size() );
9104  /*
9105  * get_remote_handles(const bool store_remote_handles,
9106  EntityHandle *from_vec,
9107  EntityHandle *to_vec_tmp,
9108  int num_ents, int to_proc,
9109  const std::vector<EntityHandle> &new_ents);
9110  */
9111  // We are sending count, num edges, remote edges handles, and then, for each edge:
9112  // -- nb intx points, 3*nbintPointsforEdge "doubles"
9113  std::vector< EntityHandle > dum_vec;
9114  result = get_remote_handles( true, edges_to_send, &dum_remote_edges[0], *sit, dum_vec );MB_CHK_SET_ERR( result, "Failed to get remote handles" );
9115  int count = 4; // Size of data
9116  count += sizeof( int ) * (int)edges_to_send.size();
9117  count += sizeof( EntityHandle ) * (int)edges_to_send.size(); // We will send the remote handles
9118  for( Range::iterator eit = edges_to_send.begin(); eit != edges_to_send.end(); ++eit )
9119  {
9120  EntityHandle edge = *eit;
9121  unsigned int indx = edges.find( edge ) - edges.begin();
9122  std::vector< EntityHandle >& intx_nodes = *( extraNodesVec[indx] );
9123  count += (int)intx_nodes.size() * 3 * sizeof( double ); // 3 coordinate doubles per intersection node
9124  }
9125  //
9126  buff->check_space( count );
9127  PACK_INT( buff->buff_ptr, edges_to_send.size() );
9128  PACK_EH( buff->buff_ptr, &dum_remote_edges[0], dum_remote_edges.size() );
9129  for( Range::iterator eit = edges_to_send.begin(); eit != edges_to_send.end(); ++eit )
9130  {
9131  EntityHandle edge = *eit;
9132  // Pack the remote edge
9133  unsigned int indx = edges.find( edge ) - edges.begin();
9134  std::vector< EntityHandle >& intx_nodes = *( extraNodesVec[indx] );
9135  PACK_INT( buff->buff_ptr, intx_nodes.size() );
9136 
9137  result = mbImpl->get_coords( &intx_nodes[0], intx_nodes.size(), (double*)buff->buff_ptr );MB_CHK_SET_ERR( result, "Failed to get coords" );
9138  buff->buff_ptr += 3 * sizeof( double ) * intx_nodes.size();
9139  }
9140 
9141  // Done packing the intx points and remote edges
9142  buff->set_stored_size();
9143 
9144  // Now send it
9145  result = send_buffer( *sit, localOwnedBuffs[ind], MB_MESG_TAGS_SIZE, sendReqs[3 * ind],
9146  recv_intx_reqs[3 * ind + 2], &dum_ack_buff, incoming );MB_CHK_SET_ERR( result, "Failed to send buffer" );
9147  }
9148 
9149  // Receive/unpack intx points
9150  while( incoming )
9151  {
9152  MPI_Status status;
9153  int index_in_recv_requests;
9154  PRINT_DEBUG_WAITANY( recv_intx_reqs, MB_MESG_TAGS_SIZE, procConfig.proc_rank() );
9155  success = MPI_Waitany( 3 * buffProcs.size(), &recv_intx_reqs[0], &index_in_recv_requests, &status );
9156  if( MPI_SUCCESS != success )
9157  {
9158  MB_SET_ERR( MB_FAILURE, "Failed in waitany in ghost exchange" );
9159  }
9160  // Processor index in the list is the request index divided by 3
9161  ind = index_in_recv_requests / 3;
9162 
9163  PRINT_DEBUG_RECD( status );
9164 
9165  // OK, received something; decrement incoming counter
9166  incoming--;
9167 
9168  bool done = false;
9169  result = recv_buffer( MB_MESG_TAGS_SIZE, status, remoteOwnedBuffs[ind],
9170  recv_intx_reqs[3 * ind + 1], // This is for receiving the second message
9171  recv_intx_reqs[3 * ind + 2], // This would be for ack, but it is not
9172  // used; consider removing it
9173  incoming, localOwnedBuffs[ind],
9174  sendReqs[3 * ind + 1], // Send request for sending the second message
9175  sendReqs[3 * ind + 2], // This is for sending the ack
9176  done );MB_CHK_SET_ERR( result, "Failed to resize recv buffer" );
9177  if( done )
9178  {
9179  Buffer* buff = remoteOwnedBuffs[ind];
9180  buff->reset_ptr( sizeof( int ) );
9181  /*result = unpack_tags(remoteOwnedBuffs[ind/2]->buff_ptr, dum_vec, true,
9182  buffProcs[ind/2]);*/
9183  // Unpack now the edges and vertex info; compare with the existing vertex positions
9184 
9185  int num_edges;
9186 
9187  UNPACK_INT( buff->buff_ptr, num_edges );
9188  std::vector< EntityHandle > rec_edges;
9189  rec_edges.resize( num_edges );
9190  UNPACK_EH( buff->buff_ptr, &rec_edges[0], num_edges );
9191  for( int i = 0; i < num_edges; i++ )
9192  {
9193  EntityHandle edge = rec_edges[i];
9194  unsigned int indx = edges.find( edge ) - edges.begin();
9195  std::vector< EntityHandle >& intx_nodes = *( extraNodesVec[indx] );
9196  // Now get the number of nodes on this (now local) edge
9197  int nverts;
9198  UNPACK_INT( buff->buff_ptr, nverts );
9199  std::vector< double > pos_from_owner;
9200  pos_from_owner.resize( 3 * nverts );
9201  UNPACK_DBLS( buff->buff_ptr, &pos_from_owner[0], 3 * nverts );
9202  std::vector< double > current_positions( 3 * intx_nodes.size() );
9203  result = mbImpl->get_coords( &intx_nodes[0], intx_nodes.size(), &current_positions[0] );MB_CHK_SET_ERR( result, "Failed to get current positions" );
9204  // Now, look at what we have in current pos, compare to pos from owner, and reset
9205  for( int k = 0; k < (int)intx_nodes.size(); k++ )
9206  {
9207  double* pk = &current_positions[3 * k];
9208  // Take the current pos k, and settle among the ones from owner:
9209  bool found = false;
9210  for( int j = 0; j < nverts && !found; j++ )
9211  {
9212  double* pj = &pos_from_owner[3 * j];
9213  double dist2 = ( pk[0] - pj[0] ) * ( pk[0] - pj[0] ) + ( pk[1] - pj[1] ) * ( pk[1] - pj[1] ) +
9214  ( pk[2] - pj[2] ) * ( pk[2] - pj[2] );
9215  if( dist2 < tolerance )
9216  {
9217  pk[0] = pj[0];
9218  pk[1] = pj[1];
9219  pk[2] = pj[2]; // Correct it!
9220  found = true;
9221  break;
9222  }
9223  }
9224  if( !found )
9225  {
9226 #ifndef NDEBUG
9227  std::cout << " pk:" << pk[0] << " " << pk[1] << " " << pk[2] << " not found \n";
9228 #endif
9229  result = MB_FAILURE;
9230  }
9231  }
9232  // After we are done resetting, we can set the new positions of nodes:
9233  result = mbImpl->set_coords( &intx_nodes[0], (int)intx_nodes.size(), &current_positions[0] );MB_CHK_SET_ERR( result, "Failed to set new current positions" );
9234  }
9235  }
9236  }
9237 
9238  // OK, now wait
9239  if( myDebug->get_verbosity() == 5 )
9240  {
9241  success = MPI_Barrier( procConfig.proc_comm() );
9242  }
9243  else
9244  {
9245  MPI_Status status[3 * MAX_SHARING_PROCS];
9246  success = MPI_Waitall( 3 * buffProcs.size(), &sendReqs[0], status );
9247  }
9248  if( MPI_SUCCESS != success )
9249  {
9250  MB_SET_ERR( MB_FAILURE, "Failure in waitall in tag exchange" );
9251  }
9252 
9253  myDebug->tprintf( 1, "Exiting settle_intersection_points" );
9254 
9255  return MB_SUCCESS;
9256 }

References moab::Range::begin(), moab::ParallelComm::Buffer::buff_ptr, buffProcs, moab::ParallelComm::Buffer::check_space(), moab::Range::end(), entities, ErrorCode, filter_pstatus(), moab::Range::find(), get_comm_procs(), moab::Interface::get_coords(), get_remote_handles(), moab::DebugOutput::get_verbosity(), INITIAL_BUFF_SIZE, localOwnedBuffs, MAX_SHARING_PROCS, MB_CHK_SET_ERR, moab::MB_MESG_TAGS_SIZE, MB_SET_ERR, MB_SUCCESS, mbImpl, myDebug, moab::PACK_EH(), moab::PACK_INT(), PRINT_DEBUG_IRECV, PRINT_DEBUG_RECD, PRINT_DEBUG_WAITANY, moab::ProcConfig::proc_comm(), moab::ProcConfig::proc_rank(), procConfig, PSTATUS_AND, PSTATUS_SHARED, recv_buffer(), remoteOwnedBuffs, reset_all_buffers(), moab::ParallelComm::Buffer::reset_ptr(), send_buffer(), sendReqs, moab::Interface::set_coords(), moab::ParallelComm::Buffer::set_stored_size(), moab::Range::size(), moab::tolerance, moab::DebugOutput::tprintf(), moab::UNPACK_DBLS(), moab::UNPACK_EH(), and moab::UNPACK_INT().
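A minimal calling sketch; edges, owned_edges, extra_nodes, and tol are hypothetical stand-ins for data produced by an upstream intersection computation, and pcomm is a constructed ParallelComm:

    // Settle intersection-point coordinates so that copies on non-owning
    // processors are snapped to the owner's coordinates within tol.
    moab::Range edges;       // all local edges carrying intersection points
    moab::Range owned_edges; // subset of shared edges owned by this proc
    std::vector< std::vector< moab::EntityHandle >* > extra_nodes; // intx vertices, indexed like edges
    double tol = 1.0e-10;
    moab::ErrorCode rval = pcomm->settle_intersection_points( edges, owned_edges, extra_nodes, tol );
    if( moab::MB_SUCCESS != rval ) return rval;

Note that an edge's position in edges selects the entry of extra_nodes holding its intersection vertices, as the comment at the top of the listing states.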

◆ tag_iface_entities()

ErrorCode moab::ParallelComm::tag_iface_entities ( )
private

Set pstatus tag interface bit on entities in sets passed in.

Definition at line 4360 of file ParallelComm.cpp.

4361 {
4362  ErrorCode result = MB_SUCCESS;
4363  Range iface_ents, tmp_ents, rmv_ents;
4364  std::vector< unsigned char > pstat;
4365  unsigned char set_pstat;
4366  Range::iterator rit2;
4367  unsigned int i;
4368 
4369  for( Range::iterator rit = interfaceSets.begin(); rit != interfaceSets.end(); ++rit )
4370  {
4371  iface_ents.clear();
4372 
4373  result = mbImpl->get_entities_by_handle( *rit, iface_ents );MB_CHK_SET_ERR( result, "Failed to get interface set contents" );
4374  pstat.resize( iface_ents.size() );
4375  result = mbImpl->tag_get_data( pstatus_tag(), iface_ents, &pstat[0] );MB_CHK_SET_ERR( result, "Failed to get pstatus values for interface set entities" );
4376  result = mbImpl->tag_get_data( pstatus_tag(), &( *rit ), 1, &set_pstat );MB_CHK_SET_ERR( result, "Failed to get pstatus values for interface set" );
4377  rmv_ents.clear();
4378  for( rit2 = iface_ents.begin(), i = 0; rit2 != iface_ents.end(); ++rit2, i++ )
4379  {
4380  if( !( pstat[i] & PSTATUS_INTERFACE ) )
4381  {
4382  rmv_ents.insert( *rit2 );
4383  pstat[i] = 0x0;
4384  }
4385  }
4386  result = mbImpl->remove_entities( *rit, rmv_ents );MB_CHK_SET_ERR( result, "Failed to remove entities from interface set" );
4387 
4388  if( !( set_pstat & PSTATUS_NOT_OWNED ) ) continue;
4389  // If we're here, we need to set the notowned status on (remaining) set contents
4390 
4391  // Remove rmv_ents from the contents list
4392  iface_ents = subtract( iface_ents, rmv_ents );
4393  // Compress the pstat vector (removing 0x0's)
4394  std::remove_if( pstat.begin(), pstat.end(),
4395  std::bind( std::equal_to< unsigned char >(), std::placeholders::_1, 0x0 ) );
4396  // std::bind2nd(std::equal_to<unsigned char>(), 0x0));
4397  // https://stackoverflow.com/questions/32739018/a-replacement-for-stdbind2nd
4398  // Fold the not_owned bit into remaining values
4399  unsigned int sz = iface_ents.size();
4400  for( i = 0; i < sz; i++ )
4401  pstat[i] |= PSTATUS_NOT_OWNED;
4402 
4403  // Set the tag on the entities
4404  result = mbImpl->tag_set_data( pstatus_tag(), iface_ents, &pstat[0] );MB_CHK_SET_ERR( result, "Failed to set pstatus values for interface set entities" );
4405  }
4406 
4407  return MB_SUCCESS;
4408 }

References moab::Range::begin(), moab::Range::clear(), moab::Range::end(), ErrorCode, moab::Interface::get_entities_by_handle(), moab::Range::insert(), interfaceSets, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, PSTATUS_INTERFACE, PSTATUS_NOT_OWNED, pstatus_tag(), moab::Interface::remove_entities(), moab::Range::size(), moab::subtract(), moab::Interface::tag_get_data(), and moab::Interface::tag_set_data().

Referenced by exchange_ghost_cells().

◆ tag_shared_verts() [1/2]

ErrorCode moab::ParallelComm::tag_shared_verts ( TupleList &  shared_ents,
std::map< std::vector< int >, std::vector< EntityHandle > > &  proc_nvecs,
Range &  proc_verts,
unsigned int  i_extra = 1 
)

Definition at line 5219 of file ParallelComm.cpp.

5223 {
5224  Tag shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag;
5225  ErrorCode result = get_shared_proc_tags( shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag );MB_CHK_SET_ERR( result, "Failed to get shared proc tags in tag_shared_verts" );
5226 
5227  unsigned int j = 0, i = 0;
5228  std::vector< int > sharing_procs, sharing_procs2, tag_procs;
5229  std::vector< EntityHandle > sharing_handles, sharing_handles2, tag_lhandles, tag_rhandles;
5230  std::vector< unsigned char > pstatus;
5231 
5232  // We're on tuple j/2
5233  if( i_extra ) i += i_extra;
5234  while( j < 2 * shared_ents.get_n() )
5235  {
5236  // Count & accumulate sharing procs
5237  EntityHandle this_ent = shared_ents.vul_rd[j], other_ent = 0;
5238  int other_proc = -1;
5239  while( j < 2 * shared_ents.get_n() && shared_ents.vul_rd[j] == this_ent )
5240  {
5241  j++;
5242  // Shouldn't have same proc
5243  assert( shared_ents.vi_rd[i] != (int)procConfig.proc_rank() );
5244  // Grab the remote data if it's not a duplicate
5245  if( shared_ents.vul_rd[j] != other_ent || shared_ents.vi_rd[i] != other_proc )
5246  {
5247  assert( 0 != shared_ents.vul_rd[j] );
5248  sharing_procs.push_back( shared_ents.vi_rd[i] );
5249  sharing_handles.push_back( shared_ents.vul_rd[j] );
5250  }
5251  other_proc = shared_ents.vi_rd[i];
5252  other_ent = shared_ents.vul_rd[j];
5253  j++;
5254  i += 1 + i_extra;
5255  }
5256 
5257  if( sharing_procs.size() > 1 )
5258  {
5259  // Add current proc/handle to list
5260  sharing_procs.push_back( procConfig.proc_rank() );
5261  sharing_handles.push_back( this_ent );
5262 
5263  // Sort sharing_procs and sharing_handles such that
5264  // sharing_procs is in ascending order. Use temporary
5265  // lists and binary search to re-order sharing_handles.
5266  sharing_procs2 = sharing_procs;
5267  std::sort( sharing_procs2.begin(), sharing_procs2.end() );
5268  sharing_handles2.resize( sharing_handles.size() );
5269  for( size_t k = 0; k < sharing_handles.size(); k++ )
5270  {
5271  size_t idx = std::lower_bound( sharing_procs2.begin(), sharing_procs2.end(), sharing_procs[k] ) -
5272  sharing_procs2.begin();
5273  sharing_handles2[idx] = sharing_handles[k];
5274  }
5275  sharing_procs.swap( sharing_procs2 );
5276  sharing_handles.swap( sharing_handles2 );
5277  }
5278 
5279  assert( sharing_procs.size() != 2 );
5280  proc_nvecs[sharing_procs].push_back( this_ent );
5281 
5282  unsigned char share_flag = PSTATUS_SHARED, ms_flag = ( PSTATUS_SHARED | PSTATUS_MULTISHARED );
5283  if( sharing_procs.size() == 1 )
5284  {
5285  tag_procs.push_back( sharing_procs[0] );
5286  tag_lhandles.push_back( this_ent );
5287  tag_rhandles.push_back( sharing_handles[0] );
5288  pstatus.push_back( share_flag );
5289  }
5290  else
5291  {
5292  // Pad lists
5293  // assert(sharing_procs.size() <= MAX_SHARING_PROCS);
5294  if( sharing_procs.size() > MAX_SHARING_PROCS )
5295  {
5296  std::cerr << "MAX_SHARING_PROCS exceeded for vertex " << this_ent << " on process "
5297  << proc_config().proc_rank() << std::endl;
5298  std::cerr.flush();
5299  MPI_Abort( proc_config().proc_comm(), 66 );
5300  }
5301  sharing_procs.resize( MAX_SHARING_PROCS, -1 );
5302  sharing_handles.resize( MAX_SHARING_PROCS, 0 );
5303  result = mbImpl->tag_set_data( shps_tag, &this_ent, 1, &sharing_procs[0] );MB_CHK_SET_ERR( result, "Failed to set sharedps tag on shared vertex" );
5304  result = mbImpl->tag_set_data( shhs_tag, &this_ent, 1, &sharing_handles[0] );MB_CHK_SET_ERR( result, "Failed to set sharedhs tag on shared vertex" );
5305  result = mbImpl->tag_set_data( pstat_tag, &this_ent, 1, &ms_flag );MB_CHK_SET_ERR( result, "Failed to set pstatus tag on shared vertex" );
5306  sharedEnts.insert( this_ent );
5307  }
5308 
5309  // Reset sharing proc(s) tags
5310  sharing_procs.clear();
5311  sharing_handles.clear();
5312  }
5313 
5314  if( !tag_procs.empty() )
5315  {
5316  result = mbImpl->tag_set_data( shp_tag, &tag_lhandles[0], tag_procs.size(), &tag_procs[0] );MB_CHK_SET_ERR( result, "Failed to set sharedp tag on shared vertex" );
5317  result = mbImpl->tag_set_data( shh_tag, &tag_lhandles[0], tag_procs.size(), &tag_rhandles[0] );MB_CHK_SET_ERR( result, "Failed to set sharedh tag on shared vertex" );
5318  result = mbImpl->tag_set_data( pstat_tag, &tag_lhandles[0], tag_procs.size(), &pstatus[0] );MB_CHK_SET_ERR( result, "Failed to set pstatus tag on shared vertex" );
5319  for( std::vector< EntityHandle >::iterator vvt = tag_lhandles.begin(); vvt != tag_lhandles.end(); vvt++ )
5320  sharedEnts.insert( *vvt );
5321  }
5322 
5323 #ifndef NDEBUG
5324  // Shouldn't be any repeated entities in any of the vectors in proc_nvecs
5325  for( std::map< std::vector< int >, std::vector< EntityHandle > >::iterator mit = proc_nvecs.begin();
5326  mit != proc_nvecs.end(); ++mit )
5327  {
5328  std::vector< EntityHandle > tmp_vec = ( mit->second );
5329  std::sort( tmp_vec.begin(), tmp_vec.end() );
5330  std::vector< EntityHandle >::iterator vit = std::unique( tmp_vec.begin(), tmp_vec.end() );
5331  assert( vit == tmp_vec.end() );
5332  }
5333 #endif
5334 
5335  return MB_SUCCESS;
5336 }

References ErrorCode, moab::TupleList::get_n(), get_shared_proc_tags(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, proc_config(), moab::ProcConfig::proc_rank(), procConfig, PSTATUS_MULTISHARED, PSTATUS_SHARED, sharedEnts, moab::Interface::tag_set_data(), moab::TupleList::vi_rd, and moab::TupleList::vul_rd.

Referenced by resolve_shared_ents(), moab::ScdInterface::tag_shared_vertices(), and moab::ParallelMergeMesh::TagSharedElements().
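The tuple layout consumed here can be read off the loop above: each tuple carries i_extra + 1 integers with the sharing processor last, plus a pair of entity handles, local handle first and the handle on the sharing processor second, with tuples grouped by local handle. A sketch of assembling such a list for the default i_extra == 1 (all names are illustrative, and the grouping is assumed done by the caller):

    #include "moab/TupleList.hpp"

    std::vector< int > other_proc;                       // sharing proc per record (hypothetical input)
    std::vector< moab::EntityHandle > local_h, remote_h; // handle pair per record (hypothetical input)
    int nshared = (int)other_proc.size();

    moab::TupleList shared_ents;
    shared_ents.initialize( 2, 0, 2, 0, nshared ); // 2 ints + 2 handles per tuple
    shared_ents.enableWriteAccess();
    for( int k = 0; k < nshared; k++ )
    {
        shared_ents.vi_wr[2 * k]      = 0;             // extra slot, skipped via i_extra
        shared_ents.vi_wr[2 * k + 1]  = other_proc[k]; // sharing processor
        shared_ents.vul_wr[2 * k]     = local_h[k];    // handle on this processor
        shared_ents.vul_wr[2 * k + 1] = remote_h[k];   // handle on other_proc[k]
        shared_ents.inc_n();
    }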

◆ tag_shared_verts() [2/2]

ErrorCode moab::ParallelComm::tag_shared_verts ( TupleList &  shared_ents,
Range *  skin_ents,
std::map< std::vector< int >, std::vector< EntityHandle > > &  proc_nvecs,
Range &  proc_verts 
)
private

Definition at line 5338 of file ParallelComm.cpp.

5342 {
5343  Tag shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag;
5344  ErrorCode result = get_shared_proc_tags( shp_tag, shps_tag, shh_tag, shhs_tag, pstat_tag );MB_CHK_SET_ERR( result, "Failed to get shared proc tags in tag_shared_verts" );
5345 
5346  unsigned int j = 0, i = 0;
5347  std::vector< int > sharing_procs, sharing_procs2;
5348  std::vector< EntityHandle > sharing_handles, sharing_handles2, skin_verts( skin_ents[0].size() );
5349  for( Range::iterator rit = skin_ents[0].begin(); rit != skin_ents[0].end(); ++rit, i++ )
5350  skin_verts[i] = *rit;
5351  i = 0;
5352 
5353  while( j < 2 * shared_ents.get_n() )
5354  {
5355  // Count & accumulate sharing procs
5356  int this_idx = shared_ents.vi_rd[j];
5357  EntityHandle this_ent = skin_verts[this_idx];
5358  while( j < 2 * shared_ents.get_n() && shared_ents.vi_rd[j] == this_idx )
5359  {
5360  j++;
5361  // Shouldn't have same proc
5362  assert( shared_ents.vi_rd[j] != (int)procConfig.proc_rank() );
5363  sharing_procs.push_back( shared_ents.vi_rd[j++] );
5364  sharing_handles.push_back( shared_ents.vul_rd[i++] );
5365  }
5366 
5367  if( sharing_procs.size() > 1 )
5368  {
5369  // Add current proc/handle to list
5370  sharing_procs.push_back( procConfig.proc_rank() );
5371  sharing_handles.push_back( this_ent );
5372  }
5373 
5374  // Sort sharing_procs and sharing_handles such that
5375  // sharing_procs is in ascending order. Use temporary
5376  // lists and binary search to re-order sharing_handles.
5377  sharing_procs2 = sharing_procs;
5378  std::sort( sharing_procs2.begin(), sharing_procs2.end() );
5379  sharing_handles2.resize( sharing_handles.size() );
5380  for( size_t k = 0; k < sharing_handles.size(); k++ )
5381  {
5382  size_t idx = std::lower_bound( sharing_procs2.begin(), sharing_procs2.end(), sharing_procs[k] ) -
5383  sharing_procs2.begin();
5384  sharing_handles2[idx] = sharing_handles[k];
5385  }
5386  sharing_procs.swap( sharing_procs2 );
5387  sharing_handles.swap( sharing_handles2 );
5388 
5389  assert( sharing_procs.size() != 2 );
5390  proc_nvecs[sharing_procs].push_back( this_ent );
5391 
5392  unsigned char share_flag = PSTATUS_SHARED, ms_flag = ( PSTATUS_SHARED | PSTATUS_MULTISHARED );
5393  if( sharing_procs.size() == 1 )
5394  {
5395  result = mbImpl->tag_set_data( shp_tag, &this_ent, 1, &sharing_procs[0] );MB_CHK_SET_ERR( result, "Failed to set sharedp tag on shared vertex" );
5396  result = mbImpl->tag_set_data( shh_tag, &this_ent, 1, &sharing_handles[0] );MB_CHK_SET_ERR( result, "Failed to set sharedh tag on shared vertex" );
5397  result = mbImpl->tag_set_data( pstat_tag, &this_ent, 1, &share_flag );MB_CHK_SET_ERR( result, "Failed to set pstatus tag on shared vertex" );
5398  sharedEnts.insert( this_ent );
5399  }
5400  else
5401  {
5402  // Pad lists
5403  // assert(sharing_procs.size() <= MAX_SHARING_PROCS);
5404  if( sharing_procs.size() > MAX_SHARING_PROCS )
5405  {
5406  std::cerr << "MAX_SHARING_PROCS exceeded for vertex " << this_ent << " on process "
5407  << proc_config().proc_rank() << std::endl;
5408  std::cerr.flush();
5409  MPI_Abort( proc_config().proc_comm(), 66 );
5410  }
5411  sharing_procs.resize( MAX_SHARING_PROCS, -1 );
5412  sharing_handles.resize( MAX_SHARING_PROCS, 0 );
5413  result = mbImpl->tag_set_data( shps_tag, &this_ent, 1, &sharing_procs[0] );MB_CHK_SET_ERR( result, "Failed to set sharedps tag on shared vertex" );
5414  result = mbImpl->tag_set_data( shhs_tag, &this_ent, 1, &sharing_handles[0] );MB_CHK_SET_ERR( result, "Failed to set sharedhs tag on shared vertex" );
5415  result = mbImpl->tag_set_data( pstat_tag, &this_ent, 1, &ms_flag );MB_CHK_SET_ERR( result, "Failed to set pstatus tag on shared vertex" );
5416  sharedEnts.insert( this_ent );
5417  }
5418 
5419  // Reset sharing proc(s) tags
5420  sharing_procs.clear();
5421  sharing_handles.clear();
5422  }
5423 
5424 #ifndef NDEBUG
5425  // Shouldn't be any repeated entities in any of the vectors in proc_nvecs
5426  for( std::map< std::vector< int >, std::vector< EntityHandle > >::iterator mit = proc_nvecs.begin();
5427  mit != proc_nvecs.end(); ++mit )
5428  {
5429  std::vector< EntityHandle > tmp_vec = ( mit->second );
5430  std::sort( tmp_vec.begin(), tmp_vec.end() );
5431  std::vector< EntityHandle >::iterator vit = std::unique( tmp_vec.begin(), tmp_vec.end() );
5432  assert( vit == tmp_vec.end() );
5433  }
5434 #endif
5435 
5436  return MB_SUCCESS;
5437 }

References moab::Range::end(), ErrorCode, moab::TupleList::get_n(), get_shared_proc_tags(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, proc_config(), moab::ProcConfig::proc_rank(), procConfig, PSTATUS_MULTISHARED, PSTATUS_SHARED, sharedEnts, size(), moab::Interface::tag_set_data(), moab::TupleList::vi_rd, and moab::TupleList::vul_rd.

◆ unpack_adjacencies()

ErrorCode moab::ParallelComm::unpack_adjacencies ( unsigned char *&  buff_ptr,
Range &  entities,
const bool  store_handles,
const int  from_proc 
)
private

Definition at line 3462 of file ParallelComm.cpp.

3466 {
3467  return MB_FAILURE;
3468 }

◆ unpack_buffer()

ErrorCode moab::ParallelComm::unpack_buffer ( unsigned char *  buff_ptr,
const bool  store_remote_handles,
const int  from_proc,
const int  ind,
std::vector< std::vector< EntityHandle > > &  L1hloc,
std::vector< std::vector< EntityHandle > > &  L1hrem,
std::vector< std::vector< int > > &  L1p,
std::vector< EntityHandle > &  L2hloc,
std::vector< EntityHandle > &  L2hrem,
std::vector< unsigned int > &  L2p,
std::vector< EntityHandle > &  new_ents,
const bool  created_iface = false 
)

Definition at line 1463 of file ParallelComm.cpp.

1475 {
1476  unsigned char* tmp_buff = buff_ptr;
1477  ErrorCode result;
1478  result = unpack_entities( buff_ptr, store_remote_handles, ind, false, L1hloc, L1hrem, L1p, L2hloc, L2hrem, L2p,
1479  new_ents, created_iface );MB_CHK_SET_ERR( result, "Unpacking entities failed" );
1480  if( myDebug->get_verbosity() == 3 )
1481  {
1482  myDebug->tprintf( 4, "unpack_entities buffer space: %ld bytes.\n", (long int)( buff_ptr - tmp_buff ) );
1483  tmp_buff = buff_ptr;
1484  }
1485  result = unpack_sets( buff_ptr, new_ents, store_remote_handles, from_proc );MB_CHK_SET_ERR( result, "Unpacking sets failed" );
1486  if( myDebug->get_verbosity() == 3 )
1487  {
1488  myDebug->tprintf( 4, "unpack_sets buffer space: %ld bytes.\n", (long int)( buff_ptr - tmp_buff ) );
1489  tmp_buff = buff_ptr;
1490  }
1491  result = unpack_tags( buff_ptr, new_ents, store_remote_handles, from_proc );MB_CHK_SET_ERR( result, "Unpacking tags failed" );
1492  if( myDebug->get_verbosity() == 3 )
1493  {
1494  myDebug->tprintf( 4, "unpack_tags buffer space: %ld bytes.\n", (long int)( buff_ptr - tmp_buff ) );
1495  // tmp_buff = buff_ptr;
1496  }
1497 
1498  if( myDebug->get_verbosity() == 3 ) myDebug->print( 4, "\n" );
1499 
1500  return MB_SUCCESS;
1501 }

References ErrorCode, moab::DebugOutput::get_verbosity(), MB_CHK_SET_ERR, MB_SUCCESS, myDebug, moab::DebugOutput::print(), moab::DebugOutput::tprintf(), unpack_entities(), unpack_sets(), and unpack_tags().

Referenced by broadcast_entities(), exchange_owned_mesh(), moab::ParCommGraph::receive_mesh(), recv_entities(), recv_messages(), and scatter_entities().
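A sketch of the calling convention, mirroring how recv_messages() drives this method on an already-received buffer (buff, from_proc, and ind are illustrative):

    // Bookkeeping filled in during unpacking; the L1* lists are later sent
    // back to the sender, the L2* lists record multi-shared entities by owner.
    std::vector< std::vector< moab::EntityHandle > > L1hloc, L1hrem;
    std::vector< std::vector< int > > L1p;
    std::vector< moab::EntityHandle > L2hloc, L2hrem;
    std::vector< unsigned int > L2p;
    std::vector< moab::EntityHandle > new_ents;

    buff->reset_ptr( sizeof( int ) ); // skip the leading stored-size field
    moab::ErrorCode rval = pcomm->unpack_buffer( buff->buff_ptr, true, from_proc, ind, L1hloc,
                                                 L1hrem, L1p, L2hloc, L2hrem, L2p, new_ents );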

◆ unpack_entities()

ErrorCode moab::ParallelComm::unpack_entities ( unsigned char *&  buff_ptr,
const bool  store_remote_handles,
const int  from_ind,
const bool  is_iface,
std::vector< std::vector< EntityHandle > > &  L1hloc,
std::vector< std::vector< EntityHandle > > &  L1hrem,
std::vector< std::vector< int > > &  L1p,
std::vector< EntityHandle > &  L2hloc,
std::vector< EntityHandle > &  L2hrem,
std::vector< unsigned int > &  L2p,
std::vector< EntityHandle > &  new_ents,
const bool  created_iface = false 
)

unpack entities in buff_ptr

Definition at line 2068 of file ParallelComm.cpp.

2080 {
2081  // General algorithm:
2082  // - unpack # entities
2083  // - save start of remote handle info, then scan forward to entity definition data
2084  // - for all vertices or entities w/ same # verts:
2085  // . get entity type, num ents, and (if !vert) # verts
2086  // . for each ent:
2087  // o get # procs/handles in remote handle info
2088  // o if # procs/handles > 2, check for already-created entity:
2089  // x get index of owner proc (1st in proc list), resize L1 list if nec
2090  // x look for already-arrived entity in L2 by owner handle
2091  // o if no existing entity:
2092  // x if iface, look for existing entity with same connect & type
2093  // x if none found, create vertex or element
2094  // x if !iface & multi-shared, save on L2
2095  // x if !iface, put new entity on new_ents list
2096  // o update proc/handle, pstatus tags, adjusting to put owner first if iface
2097  // o if !iface, save new handle on L1 for all sharing procs
2098 
2099  // Lists of handles/procs to return to sending/other procs
2100  // L1hloc[p], L1hrem[p]: handle pairs [h, h'], where h is the local proc handle
2101  // and h' is either the remote proc handle (if that is known) or
2102  // the owner proc handle (otherwise);
2103  // L1p[p]: indicates whether h is remote handle (= -1) or owner (rank of owner)
2104  // L2hloc, L2hrem: local/remote handles for entities shared by > 2 procs;
2105  // remote handles are on owning proc
2106  // L2p: owning procs for handles in L2hrem
2107 
2108  ErrorCode result;
2109  bool done = false;
2110  ReadUtilIface* ru = NULL;
2111 
2112  result = mbImpl->query_interface( ru );MB_CHK_SET_ERR( result, "Failed to get ReadUtilIface" );
2113 
2114  // 1. # entities = E
2115  int num_ents = 0;
2116  unsigned char* buff_save = buff_ptr;
2117  int i, j;
2118 
2119  if( store_remote_handles )
2120  {
2121  UNPACK_INT( buff_ptr, num_ents );
2122 
2123  buff_save = buff_ptr;
2124 
2125  // Save place where remote handle info starts, then scan forward to ents
2126  for( i = 0; i < num_ents; i++ )
2127  {
2128  UNPACK_INT( buff_ptr, j );
2129  if( j < 0 )
2130  {
2131  std::cout << "Should be non-negative # proc/handles.";
2132  return MB_FAILURE;
2133  }
2134 
2135  buff_ptr += j * ( sizeof( int ) + sizeof( EntityHandle ) );
2136  }
2137  }
2138 
2139  std::vector< EntityHandle > msg_ents;
2140 
2141  while( !done )
2142  {
2143  EntityType this_type = MBMAXTYPE;
2144  UNPACK_TYPE( buff_ptr, this_type );
2145  assert( this_type != MBENTITYSET );
2146 
2147  // MBMAXTYPE signifies end of entities data
2148  if( MBMAXTYPE == this_type ) break;
2149 
2150  // Get the number of ents
2151  int num_ents2, verts_per_entity = 0;
2152  UNPACK_INT( buff_ptr, num_ents2 );
2153 
2154  // Unpack the nodes per entity
2155  if( MBVERTEX != this_type && num_ents2 )
2156  {
2157  UNPACK_INT( buff_ptr, verts_per_entity );
2158  }
2159 
2160  std::vector< int > ps( MAX_SHARING_PROCS, -1 );
2161  std::vector< EntityHandle > hs( MAX_SHARING_PROCS, 0 );
2162  for( int e = 0; e < num_ents2; e++ )
2163  {
2164  // Check for existing entity, otherwise make new one
2165  EntityHandle new_h = 0;
2166  EntityHandle connect[CN::MAX_NODES_PER_ELEMENT];
2167  double coords[3];
2168  int num_ps = -1;
2169 
2170  //=======================================
2171  // Unpack all the data at once, to make sure the buffer pointers
2172  // are tracked correctly
2173  //=======================================
2174  if( store_remote_handles )
2175  {
2176  // Pointers to other procs/handles
2177  UNPACK_INT( buff_save, num_ps );
2178  if( 0 >= num_ps )
2179  {
2180  std::cout << "Shouldn't ever be fewer than 1 procs here." << std::endl;
2181  return MB_FAILURE;
2182  }
2183 
2184  UNPACK_INTS( buff_save, &ps[0], num_ps );
2185  UNPACK_EH( buff_save, &hs[0], num_ps );
2186  }
2187 
2188  if( MBVERTEX == this_type )
2189  {
2190  UNPACK_DBLS( buff_ptr, coords, 3 );
2191  }
2192  else
2193  {
2194  assert( verts_per_entity <= CN::MAX_NODES_PER_ELEMENT );
2195  UNPACK_EH( buff_ptr, connect, verts_per_entity );
2196 
2197  // Update connectivity to local handles
2198  result = get_local_handles( connect, verts_per_entity, msg_ents );MB_CHK_SET_ERR( result, "Failed to get local handles" );
2199  }
2200 
2201  //=======================================
2202  // Now, process that data; begin by finding an identical
2203  // entity, if there is one
2204  //=======================================
2205  if( store_remote_handles )
2206  {
2207  result = find_existing_entity( is_iface, ps[0], hs[0], num_ps, connect, verts_per_entity, this_type,
2208  L2hloc, L2hrem, L2p, new_h );MB_CHK_SET_ERR( result, "Failed to get existing entity" );
2209  }
2210 
2211  //=======================================
2212  // If we didn't find one, we'll have to create one
2213  //=======================================
2214  bool created_here = false;
2215  if( !new_h && !is_iface )
2216  {
2217  if( MBVERTEX == this_type )
2218  {
2219  // Create a vertex
2220  result = mbImpl->create_vertex( coords, new_h );MB_CHK_SET_ERR( result, "Failed to make new vertex" );
2221  }
2222  else
2223  {
2224  // Create the element
2225  result = mbImpl->create_element( this_type, connect, verts_per_entity, new_h );MB_CHK_SET_ERR( result, "Failed to make new element" );
2226 
2227  // Update adjacencies
2228  result = ru->update_adjacencies( new_h, 1, verts_per_entity, connect );MB_CHK_SET_ERR( result, "Failed to update adjacencies" );
2229  }
2230 
2231  // Should have a new handle now
2232  assert( new_h );
2233 
2234  created_here = true;
2235  }
2236 
2237  //=======================================
2238  // Take care of sharing data
2239  //=======================================
2240 
2241  // Need to save entities found in order, for interpretation of
2242  // later parts of this message
2243  if( !is_iface )
2244  {
2245  assert( new_h );
2246  msg_ents.push_back( new_h );
2247  }
2248 
2249  if( created_here ) new_ents.push_back( new_h );
2250 
2251  if( new_h && store_remote_handles )
2252  {
2253  unsigned char new_pstat = 0x0;
2254  if( is_iface )
2255  {
2256  new_pstat = PSTATUS_INTERFACE;
2257  // Here, lowest rank proc should be first
2258  int idx = std::min_element( &ps[0], &ps[0] + num_ps ) - &ps[0];
2259  if( idx )
2260  {
2261  std::swap( ps[0], ps[idx] );
2262  std::swap( hs[0], hs[idx] );
2263  }
2264  // Set ownership based on lowest rank; can't be in update_remote_data, because
2265  // there we don't know whether it resulted from ghosting or not
2266  if( ( num_ps > 1 && ps[0] != (int)rank() ) ) new_pstat |= PSTATUS_NOT_OWNED;
2267  }
2268  else if( created_here )
2269  {
2270  if( created_iface )
2271  new_pstat = PSTATUS_NOT_OWNED;
2272  else
2273  new_pstat = PSTATUS_GHOST | PSTATUS_NOT_OWNED;
2274  }
2275 
2276  // Update sharing data and pstatus, adjusting order if iface
2277  result = update_remote_data( new_h, &ps[0], &hs[0], num_ps, new_pstat );MB_CHK_SET_ERR( result, "unpack_entities" );
2278 
2279  // If a new multi-shared entity, save owner for subsequent lookup in L2 lists
2280  if( store_remote_handles && !is_iface && num_ps > 2 )
2281  {
2282  L2hrem.push_back( hs[0] );
2283  L2hloc.push_back( new_h );
2284  L2p.push_back( ps[0] );
2285  }
2286 
2287  // Need to send this new handle to all sharing procs
2288  if( !is_iface )
2289  {
2290  for( j = 0; j < num_ps; j++ )
2291  {
2292  if( ps[j] == (int)procConfig.proc_rank() ) continue;
2293  int idx = get_buffers( ps[j] );
2294  if( idx == (int)L1hloc.size() )
2295  {
2296  L1hloc.resize( idx + 1 );
2297  L1hrem.resize( idx + 1 );
2298  L1p.resize( idx + 1 );
2299  }
2300 
2301  // Don't bother adding if it's already in the list
2302  std::vector< EntityHandle >::iterator vit =
2303  std::find( L1hloc[idx].begin(), L1hloc[idx].end(), new_h );
2304  if( vit != L1hloc[idx].end() )
2305  {
2306  // If it's in the list but the remote handle wasn't known before, and we
2307  // know it now, replace it in the list
2308  if( L1p[idx][vit - L1hloc[idx].begin()] != -1 && hs[j] )
2309  {
2310  L1hrem[idx][vit - L1hloc[idx].begin()] = hs[j];
2311  L1p[idx][vit - L1hloc[idx].begin()] = -1;
2312  }
2313  else
2314  continue;
2315  }
2316  else
2317  {
2318  if( !hs[j] )
2319  {
2320  assert( -1 != ps[0] && num_ps > 2 );
2321  L1p[idx].push_back( ps[0] );
2322  L1hrem[idx].push_back( hs[0] );
2323  }
2324  else
2325  {
2326  assert(
2327  "either this remote handle isn't in the remote list, or "
2328  "it's for another proc" &&
2329  ( std::find( L1hrem[idx].begin(), L1hrem[idx].end(), hs[j] ) == L1hrem[idx].end() ||
2330  L1p[idx][std::find( L1hrem[idx].begin(), L1hrem[idx].end(), hs[j] ) -
2331  L1hrem[idx].begin()] != -1 ) );
2332  L1p[idx].push_back( -1 );
2333  L1hrem[idx].push_back( hs[j] );
2334  }
2335  L1hloc[idx].push_back( new_h );
2336  }
2337  }
2338  }
2339 
2340  assert( "Shouldn't be here for non-shared entities" && -1 != num_ps );
2341  std::fill( &ps[0], &ps[num_ps], -1 );
2342  std::fill( &hs[0], &hs[num_ps], 0 );
2343  }
2344  }
2345 
2346  myDebug->tprintf( 4, "Unpacked %d ents of type %s", num_ents2, CN::EntityTypeName( this_type ) );
2347  }
2348 
2349  myDebug->tprintf( 4, "Done unpacking entities.\n" );
2350 
2351  // Need to sort here, to enable searching
2352  std::sort( new_ents.begin(), new_ents.end() );
2353 
2354  return MB_SUCCESS;
2355 }

References moab::Interface::create_element(), moab::Interface::create_vertex(), moab::CN::EntityTypeName(), ErrorCode, find_existing_entity(), get_buffers(), get_local_handles(), moab::CN::MAX_NODES_PER_ELEMENT, MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SUCCESS, MBENTITYSET, mbImpl, MBMAXTYPE, MBVERTEX, myDebug, moab::ProcConfig::proc_rank(), procConfig, PSTATUS_GHOST, PSTATUS_INTERFACE, PSTATUS_NOT_OWNED, moab::Interface::query_interface(), rank(), moab::DebugOutput::tprintf(), moab::UNPACK_DBLS(), moab::UNPACK_EH(), moab::UNPACK_INT(), moab::UNPACK_INTS(), moab::UNPACK_TYPE(), moab::ReadUtilIface::update_adjacencies(), and update_remote_data().

Referenced by exchange_ghost_cells(), and unpack_buffer().
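Application code normally reaches this routine indirectly through the public exchange entry points; a typical top-level call that exercises this unpacking path (a sketch, assuming mb is a moab::Core instance whose mesh is already loaded and partitioned):

    moab::ParallelComm* pcomm = moab::ParallelComm::get_pcomm( &mb, 0 );
    moab::ErrorCode rval = pcomm->exchange_ghost_cells( 3 /*ghost dim*/, 2 /*bridge dim*/,
                                                        1 /*num layers*/, 0 /*addl ents*/,
                                                        true /*store remote handles*/ );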

◆ unpack_iface_entities()

ErrorCode moab::ParallelComm::unpack_iface_entities ( unsigned char *&  buff_ptr,
const int  from_proc,
const int  ind,
std::vector< EntityHandle > &  recd_ents 
)
private

for all the entities in the received buffer; for each, save entities in this instance which match connectivity, or zero if none found

◆ unpack_remote_handles() [1/2]

ErrorCode moab::ParallelComm::unpack_remote_handles ( unsigned int  from_proc,
const unsigned char *  buff_ptr,
std::vector< EntityHandle > &  L2hloc,
std::vector< EntityHandle > &  L2hrem,
std::vector< unsigned int > &  L2p 
)
inlineprivate

Definition at line 1651 of file ParallelComm.hpp.

1656 {
1657  // cast away const-ness, we won't be passing back a modified ptr
1658  unsigned char* tmp_buff = const_cast< unsigned char* >( buff_ptr );
1659  return unpack_remote_handles( from_proc, tmp_buff, L2hloc, L2hrem, L2p );
1660 }

References unpack_remote_handles().

◆ unpack_remote_handles() [2/2]

ErrorCode moab::ParallelComm::unpack_remote_handles ( unsigned int  from_proc,
unsigned char *&  buff_ptr,
std::vector< EntityHandle > &  L2hloc,
std::vector< EntityHandle > &  L2hrem,
std::vector< unsigned int > &  L2p 
)

Definition at line 7395 of file ParallelComm.cpp.

7400 {
7401  // Incoming remote handles; use to set remote handles
7402  int num_eh;
7403  UNPACK_INT( buff_ptr, num_eh );
7404 
7405  unsigned char* buff_proc = buff_ptr;
7406  buff_ptr += num_eh * sizeof( int );
7407  unsigned char* buff_rem = buff_ptr + num_eh * sizeof( EntityHandle );
7408  ErrorCode result;
7409  EntityHandle hpair[2], new_h;
7410  int proc;
7411  for( int i = 0; i < num_eh; i++ )
7412  {
7413  UNPACK_INT( buff_proc, proc );
7414  // Handles packed (local, remote), though here local is either on this
7415  // proc or owner proc, depending on value of proc (-1 = here, otherwise owner);
7416  // this is decoded in find_existing_entity
7417  UNPACK_EH( buff_ptr, hpair, 1 );
7418  UNPACK_EH( buff_rem, hpair + 1, 1 );
7419 
7420  if( -1 != proc )
7421  {
7422  result = find_existing_entity( false, proc, hpair[0], 3, NULL, 0, mbImpl->type_from_handle( hpair[1] ),
7423  L2hloc, L2hrem, L2p, new_h );MB_CHK_SET_ERR( result, "Didn't get existing entity" );
7424  if( new_h )
7425  hpair[0] = new_h;
7426  else
7427  hpair[0] = 0;
7428  }
7429  if( !( hpair[0] && hpair[1] ) ) return MB_FAILURE;
7430  int this_proc = from_proc;
7431  result = update_remote_data( hpair[0], &this_proc, hpair + 1, 1, 0 );MB_CHK_SET_ERR( result, "Failed to set remote data range on sent entities in ghost exchange" );
7432  }
7433 
7434  return MB_SUCCESS;
7435 }

References ErrorCode, find_existing_entity(), MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, moab::Interface::type_from_handle(), moab::UNPACK_EH(), moab::UNPACK_INT(), and update_remote_data().

Referenced by exchange_ghost_cells(), exchange_owned_mesh(), recv_entities(), recv_remote_handle_messages(), and unpack_remote_handles().

◆ unpack_sets()

ErrorCode moab::ParallelComm::unpack_sets ( unsigned char *&  buff_ptr,
std::vector< EntityHandle > &  entities,
const bool  store_remote_handles,
const int  from_proc 
)
private

Definition at line 3310 of file ParallelComm.cpp.

3314 {
3315  // Now the sets; assume any sets the application wants to pass are in the entities list
3316  ErrorCode result;
3317 
3318  bool no_sets = ( entities.empty() || ( mbImpl->type_from_handle( *entities.rbegin() ) == MBENTITYSET ) );
3319 
3320  Range new_sets;
3321  int num_sets;
3322  UNPACK_INT( buff_ptr, num_sets );
3323 
3324  if( !num_sets ) return MB_SUCCESS;
3325 
3326  int i;
3327  Range::const_iterator rit;
3328  std::vector< EntityHandle > members;
3329  int num_ents;
3330  std::vector< unsigned int > options_vec( num_sets );
3331  // Option value
3332  if( num_sets ) UNPACK_VOID( buff_ptr, &options_vec[0], num_sets * sizeof( unsigned int ) );
3333 
3334  // Unpack parallel geometry unique id
3335  int n_uid;
3336  UNPACK_INT( buff_ptr, n_uid );
3337  if( n_uid > 0 && n_uid != num_sets )
3338  {
3339  std::cerr << "The number of Parallel geometry unique ids should be same." << std::endl;
3340  }
3341 
3342  if( n_uid > 0 )
3343  { // If parallel geometry unique id is packed
3344  std::vector< int > uids( n_uid );
3345  UNPACK_INTS( buff_ptr, &uids[0], n_uid );
3346 
3347  Tag uid_tag;
3348  result =
3349  mbImpl->tag_get_handle( "PARALLEL_UNIQUE_ID", 1, MB_TYPE_INTEGER, uid_tag, MB_TAG_SPARSE | MB_TAG_CREAT );MB_CHK_SET_ERR( result, "Failed to create parallel geometry unique id tag" );
3350 
3351  // Find existing sets
3352  for( i = 0; i < n_uid; i++ )
3353  {
3354  EntityHandle set_handle;
3355  Range temp_sets;
3356  void* tag_vals[] = { &uids[i] };
3357  if( uids[i] > 0 )
3358  {
3359  result = mbImpl->get_entities_by_type_and_tag( 0, MBENTITYSET, &uid_tag, tag_vals, 1, temp_sets );
3360  }
3361  if( !temp_sets.empty() )
3362  { // Existing set
3363  set_handle = *temp_sets.begin();
3364  }
3365  else
3366  { // Create a new set
3367  result = mbImpl->create_meshset( options_vec[i], set_handle );MB_CHK_SET_ERR( result, "Failed to create set in unpack" );
3368  result = mbImpl->tag_set_data( uid_tag, &set_handle, 1, &uids[i] );MB_CHK_SET_ERR( result, "Failed to set parallel geometry unique ids" );
3369  }
3370  new_sets.insert( set_handle );
3371  }
3372  }
3373  else
3374  {
3375  // Create sets
3376  for( i = 0; i < num_sets; i++ )
3377  {
3378  EntityHandle set_handle;
3379  result = mbImpl->create_meshset( options_vec[i], set_handle );MB_CHK_SET_ERR( result, "Failed to create set in unpack" );
3380 
3381  // Make sure new sets handles are monotonically increasing
3382  assert( set_handle > *new_sets.rbegin() );
3383  new_sets.insert( set_handle );
3384  }
3385  }
3386 
3387  std::copy( new_sets.begin(), new_sets.end(), std::back_inserter( entities ) );
3388  // Only need to sort if we came in with no sets on the end
3389  if( !no_sets ) std::sort( entities.begin(), entities.end() );
3390 
3391  for( rit = new_sets.begin(), i = 0; rit != new_sets.end(); ++rit, i++ )
3392  {
3393  // Unpack entities as vector, with length
3394  UNPACK_INT( buff_ptr, num_ents );
3395  members.resize( num_ents );
3396  if( num_ents ) UNPACK_EH( buff_ptr, &members[0], num_ents );
3397  result = get_local_handles( &members[0], num_ents, entities );MB_CHK_SET_ERR( result, "Failed to get local handles for ordered set contents" );
3398  result = mbImpl->add_entities( *rit, &members[0], num_ents );MB_CHK_SET_ERR( result, "Failed to add ents to ordered set in unpack" );
3399  }
3400 
3401  std::vector< int > num_pch( 2 * new_sets.size() );
3402  std::vector< int >::iterator vit;
3403  int tot_pch = 0;
3404  for( vit = num_pch.begin(); vit != num_pch.end(); ++vit )
3405  {
3406  UNPACK_INT( buff_ptr, *vit );
3407  tot_pch += *vit;
3408  }
3409 
3410  members.resize( tot_pch );
3411  UNPACK_EH( buff_ptr, &members[0], tot_pch );
3412  result = get_local_handles( &members[0], tot_pch, entities );MB_CHK_SET_ERR( result, "Failed to get local handle for parent/child sets" );
3413 
3414  int num = 0;
3415  EntityHandle* mem_ptr = &members[0];
3416  for( rit = new_sets.begin(); rit != new_sets.end(); ++rit )
3417  {
3418  // Unpack parents/children
3419  int num_par = num_pch[num++], num_child = num_pch[num++];
3420  if( num_par + num_child )
3421  {
3422  for( i = 0; i < num_par; i++ )
3423  {
3424  assert( 0 != mem_ptr[i] );
3425  result = mbImpl->add_parent_meshset( *rit, mem_ptr[i] );MB_CHK_SET_ERR( result, "Failed to add parent to set in unpack" );
3426  }
3427  mem_ptr += num_par;
3428  for( i = 0; i < num_child; i++ )
3429  {
3430  assert( 0 != mem_ptr[i] );
3431  result = mbImpl->add_child_meshset( *rit, mem_ptr[i] );MB_CHK_SET_ERR( result, "Failed to add child to set in unpack" );
3432  }
3433  mem_ptr += num_child;
3434  }
3435  }
3436 
3437  // Unpack source handles
3438  Range dum_range;
3439  if( store_remote_handles && !new_sets.empty() )
3440  {
3441  UNPACK_RANGE( buff_ptr, dum_range );
3442  result = update_remote_data( new_sets, dum_range, from_proc, 0 );MB_CHK_SET_ERR( result, "Failed to set sharing data for sets" );
3443  }
3444 
3445  myDebug->tprintf( 4, "Done unpacking sets." );
3446 
3447  return MB_SUCCESS;
3448 }

References moab::Interface::add_child_meshset(), moab::Interface::add_entities(), moab::Interface::add_parent_meshset(), moab::Range::begin(), moab::Interface::create_meshset(), moab::Range::empty(), moab::Range::end(), entities, ErrorCode, moab::Interface::get_entities_by_type_and_tag(), get_local_handles(), moab::Range::insert(), MB_CHK_SET_ERR, MB_SUCCESS, MB_TAG_CREAT, MB_TAG_SPARSE, MB_TYPE_INTEGER, MBENTITYSET, mbImpl, myDebug, moab::Range::rbegin(), moab::Range::size(), moab::Interface::tag_get_handle(), moab::Interface::tag_set_data(), moab::DebugOutput::tprintf(), moab::Interface::type_from_handle(), moab::UNPACK_EH(), moab::UNPACK_INT(), moab::UNPACK_INTS(), moab::UNPACK_RANGE(), moab::UNPACK_VOID(), and update_remote_data().

Referenced by unpack_buffer().

◆ unpack_tags()

ErrorCode moab::ParallelComm::unpack_tags ( unsigned char *&  buff_ptr,
std::vector< EntityHandle > &  entities,
const bool  store_handles,
const int  to_proc,
const MPI_Op *const  mpi_op = NULL 
)
private

Definition at line 3679 of file ParallelComm.cpp.

3684 {
3685  // Tags
3686  // Get all the tags
3687  // For dense tags, compute size assuming all entities have that tag
3688  // For sparse tags, get number of entities w/ that tag to compute size
3689 
3690  ErrorCode result;
3691 
3692  int num_tags;
3693  UNPACK_INT( buff_ptr, num_tags );
3694  std::vector< const void* > var_len_vals;
3695  std::vector< unsigned char > dum_vals;
3696  std::vector< EntityHandle > dum_ehvals;
3697 
3698  for( int i = 0; i < num_tags; i++ )
3699  {
3700  // Tag handle
3701  Tag tag_handle;
3702 
3703  // Size, data type
3704  int tag_size, tag_data_type, tag_type;
3705  UNPACK_INT( buff_ptr, tag_size );
3706  UNPACK_INT( buff_ptr, tag_type );
3707  UNPACK_INT( buff_ptr, tag_data_type );
3708 
3709  // Default value
3710  int def_val_size;
3711  UNPACK_INT( buff_ptr, def_val_size );
3712  void* def_val_ptr = NULL;
3713  if( def_val_size )
3714  {
3715  def_val_ptr = buff_ptr;
3716  buff_ptr += def_val_size;
3717  UPC( tag_size, " void" );
3718  }
3719 
3720  // Name
3721  int name_len;
3722  UNPACK_INT( buff_ptr, name_len );
3723  std::string tag_name( reinterpret_cast< char* >( buff_ptr ), name_len );
3724  buff_ptr += name_len;
3725  UPC( 64, " chars" );
3726 
3727  myDebug->tprintf( 4, "Unpacking tag %s\n", tag_name.c_str() );
3728 
3729  // Create the tag
3730  if( tag_size == MB_VARIABLE_LENGTH )
3731  result = mbImpl->tag_get_handle( tag_name.c_str(), def_val_size, (DataType)tag_data_type, tag_handle,
3732  MB_TAG_VARLEN | MB_TAG_CREAT | MB_TAG_BYTES | tag_type, def_val_ptr );
3733  else
3734  result = mbImpl->tag_get_handle( tag_name.c_str(), tag_size, (DataType)tag_data_type, tag_handle,
3735  MB_TAG_CREAT | MB_TAG_BYTES | tag_type, def_val_ptr );
3736  if( MB_SUCCESS != result ) return result;
3737 
3738  // Get handles and convert to local handles
3739  int num_ents;
3740  UNPACK_INT( buff_ptr, num_ents );
3741  std::vector< EntityHandle > dum_ents( num_ents );
3742  UNPACK_EH( buff_ptr, &dum_ents[0], num_ents );
3743 
3744  // In this case handles are indices into new entity range; need to convert
3745  // to local handles
3746  result = get_local_handles( &dum_ents[0], num_ents, entities );MB_CHK_SET_ERR( result, "Unable to convert to local handles" );
3747 
3748  // If it's a handle type, also convert tag vals in-place in buffer
3749  if( MB_TYPE_HANDLE == tag_type )
3750  {
3751  dum_ehvals.resize( num_ents );
3752  UNPACK_EH( buff_ptr, &dum_ehvals[0], num_ents );
3753  result = get_local_handles( &dum_ehvals[0], num_ents, entities );MB_CHK_SET_ERR( result, "Failed to get local handles for tag vals" );
3754  }
3755 
3756  DataType data_type;
3757  mbImpl->tag_get_data_type( tag_handle, data_type );
3758  int type_size = TagInfo::size_from_data_type( data_type );
3759 
3760  if( !dum_ents.empty() )
3761  {
3762  if( tag_size == MB_VARIABLE_LENGTH )
3763  {
3764  // Be careful of alignment here. If the integers are aligned
3765  // in the buffer, we can use them directly. Otherwise we must
3766  // copy them.
3767  std::vector< int > var_lengths( num_ents );
3768  UNPACK_INTS( buff_ptr, &var_lengths[0], num_ents );
3769  UPC( sizeof( int ) * num_ents, " void" );
3770 
3771  // Get pointers into buffer for each tag value
3772  var_len_vals.resize( num_ents );
3773  for( std::vector< EntityHandle >::size_type j = 0; j < (std::vector< EntityHandle >::size_type)num_ents;
3774  j++ )
3775  {
3776  var_len_vals[j] = buff_ptr;
3777  buff_ptr += var_lengths[j] * type_size;
3778  UPC( var_lengths[j], " void" );
3779  }
3780  result =
3781  mbImpl->tag_set_by_ptr( tag_handle, &dum_ents[0], num_ents, &var_len_vals[0], &var_lengths[0] );MB_CHK_SET_ERR( result, "Failed to set tag data when unpacking variable-length tag" );
3782  }
3783  else
3784  {
3785  // Get existing values of dst tag
3786  dum_vals.resize( tag_size * num_ents );
3787  if( mpi_op )
3788  {
3789  int tag_length;
3790  result = mbImpl->tag_get_length( tag_handle, tag_length );MB_CHK_SET_ERR( result, "Failed to get tag length" );
3791  result = mbImpl->tag_get_data( tag_handle, &dum_ents[0], num_ents, &dum_vals[0] );MB_CHK_SET_ERR( result, "Failed to get existing value of dst tag on entities" );
3792  result = reduce_void( tag_data_type, *mpi_op, tag_length * num_ents, &dum_vals[0], buff_ptr );MB_CHK_SET_ERR( result, "Failed to perform mpi op on dst tags" );
3793  }
3794  result = mbImpl->tag_set_data( tag_handle, &dum_ents[0], num_ents, buff_ptr );MB_CHK_SET_ERR( result, "Failed to set range-based tag data when unpacking tag" );
3795  buff_ptr += num_ents * tag_size;
3796  UPC( num_ents * tag_size, " void" );
3797  }
3798  }
3799  }
3800 
3801  myDebug->tprintf( 4, "Done unpacking tags.\n" );
3802 
3803  return MB_SUCCESS;
3804 }

References entities, ErrorCode, get_local_handles(), MB_CHK_SET_ERR, MB_SUCCESS, MB_TAG_BYTES, MB_TAG_CREAT, MB_TAG_VARLEN, MB_TYPE_HANDLE, MB_VARIABLE_LENGTH, mbImpl, myDebug, reduce_void(), moab::TagInfo::size_from_data_type(), moab::Interface::tag_get_data(), moab::Interface::tag_get_data_type(), moab::Interface::tag_get_handle(), moab::Interface::tag_get_length(), moab::Interface::tag_set_by_ptr(), moab::Interface::tag_set_data(), moab::DebugOutput::tprintf(), moab::UNPACK_EH(), moab::UNPACK_INT(), moab::UNPACK_INTS(), and UPC.

Referenced by exchange_tags(), reduce_tags(), and unpack_buffer().
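On the receive side this method is driven by the public tag-exchange entry points listed above; a sketch of the usual path (assuming a tag named MY_FIELD already exists on the shared entities; names are illustrative):

    moab::Tag t;
    moab::ErrorCode rval = mb.tag_get_handle( "MY_FIELD", 1, moab::MB_TYPE_DOUBLE, t );
    moab::Range shared;
    rval = pcomm->get_shared_entities( -1, shared ); // shared with any processor
    rval = pcomm->exchange_tags( t, shared );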

◆ update_iface_sets()

ErrorCode moab::ParallelComm::update_iface_sets ( Range sent_ents,
std::vector< EntityHandle > &  remote_handles,
int  from_proc 
)
private

for any remote_handles set to zero, remove corresponding sent_ents from iface_sets corresponding to from_proc

◆ update_remote_data() [1/3]

ErrorCode moab::ParallelComm::update_remote_data ( const EntityHandle  new_h,
const int *  ps,
const EntityHandle hs,
const int  num_ps,
const unsigned char  add_pstat 
)
private

Definition at line 2646 of file ParallelComm.cpp.

2658 {
2659  // Get initial sharing data; tag_ps and tag_hs get terminated with -1 and 0
2660  // in this function, so no need to initialize; sharing data does not include
2661  // this proc if shared with only one other
2662 
2663  // Following variables declared here to avoid compiler errors
2664  int new_numps;
2665  unsigned char new_pstat;
2666  std::vector< int > new_ps( MAX_SHARING_PROCS, -1 );
2667  std::vector< EntityHandle > new_hs( MAX_SHARING_PROCS, 0 );
2668 
2669  new_numps = 0;
2670  ErrorCode result = get_sharing_data( new_h, &new_ps[0], &new_hs[0], new_pstat, new_numps );MB_CHK_SET_ERR( result, "Failed to get sharing data in update_remote_data" );
2671  int num_exist = new_numps;
2672 
2673  // Add new pstat info to the flag
2674  new_pstat |= add_pstat;
2675 
2676  /*
2677  #define plist(str, lst, siz) \
2678  std::cout << str << "("; \
2679  for (int i = 0; i < (int)siz; i++) std::cout << lst[i] << " "; \
2680  std::cout << ") "; \
2681 
2682  std::cout << "update_remote_data: rank = " << rank() << ", new_h = " << new_h << std::endl;
2683  std::string ostr;
2684  plist("ps", ps, num_ps);
2685  plist("hs", hs, num_ps);
2686  print_pstatus(add_pstat, ostr);
2687  std::cout << ", add_pstat = " << ostr.c_str() << std::endl;
2688  plist("tag_ps", new_ps, new_numps);
2689  plist("tag_hs", new_hs, new_numps);
2690  assert(new_numps <= size());
2691  print_pstatus(new_pstat, ostr);
2692  std::cout << ", tag_pstat=" << ostr.c_str() << std::endl;
2693  */
2694 
2695 #ifndef NDEBUG
2696  {
2697  // Check for duplicates in proc list
2698  std::set< unsigned int > dumprocs;
2699  unsigned int dp = 0;
2700  for( ; (int)dp < num_ps && -1 != ps[dp]; dp++ )
2701  dumprocs.insert( ps[dp] );
2702  assert( dp == dumprocs.size() );
2703  }
2704 #endif
2705 
2706  // If only one sharer and I'm the owner, insert myself in the list;
2707  // otherwise, my data is checked at the end
2708  if( 1 == new_numps && !( new_pstat & PSTATUS_NOT_OWNED ) )
2709  {
2710  new_hs[1] = new_hs[0];
2711  new_ps[1] = new_ps[0];
2712  new_hs[0] = new_h;
2713  new_ps[0] = rank();
2714  new_numps = 2;
2715  }
2716 
2717  // Now put passed-in data onto lists
2718  int idx;
2719  for( int i = 0; i < num_ps; i++ )
2720  {
2721  idx = std::find( &new_ps[0], &new_ps[0] + new_numps, ps[i] ) - &new_ps[0];
2722  if( idx < new_numps )
2723  {
2724  if( !new_hs[idx] && hs[i] )
2725  // h on list is 0 and passed-in h is non-zero, replace it
2726  new_hs[idx] = hs[i];
2727  else
2728  assert( !hs[i] || new_hs[idx] == hs[i] );
2729  }
2730  else
2731  {
2732  if( new_numps + 1 == MAX_SHARING_PROCS )
2733  {
2734  MB_SET_ERR( MB_FAILURE, "Exceeded MAX_SHARING_PROCS for "
2735  << CN::EntityTypeName( TYPE_FROM_HANDLE( new_h ) ) << ' '
2736  << ID_FROM_HANDLE( new_h ) << " in process " << rank() );
2737  }
2738  new_ps[new_numps] = ps[i];
2739  new_hs[new_numps] = hs[i];
2740  new_numps++;
2741  }
2742  }
2743 
2744  // Add myself, if it isn't there already
2745  idx = std::find( &new_ps[0], &new_ps[0] + new_numps, rank() ) - &new_ps[0];
2746  if( idx == new_numps )
2747  {
2748  new_ps[new_numps] = rank();
2749  new_hs[new_numps] = new_h;
2750  new_numps++;
2751  }
2752  else if( !new_hs[idx] && new_numps > 2 )
2753  new_hs[idx] = new_h;
2754 
2755  // Proc list is complete; update for shared, multishared
2756  if( new_numps > 1 )
2757  {
2758  if( new_numps > 2 ) new_pstat |= PSTATUS_MULTISHARED;
2759  new_pstat |= PSTATUS_SHARED;
2760  }
2761 
2762  /*
2763  plist("new_ps", new_ps, new_numps);
2764  plist("new_hs", new_hs, new_numps);
2765  print_pstatus(new_pstat, ostr);
2766  std::cout << ", new_pstat=" << ostr.c_str() << std::endl;
2767  std::cout << std::endl;
2768  */
2769 
2770  result = set_sharing_data( new_h, new_pstat, num_exist, new_numps, &new_ps[0], &new_hs[0] );MB_CHK_SET_ERR( result, "Failed to set sharing data in update_remote_data" );
2771 
2772  if( new_pstat & PSTATUS_SHARED ) sharedEnts.insert( new_h );
2773 
2774  return MB_SUCCESS;
2775 }

References moab::CN::EntityTypeName(), ErrorCode, get_sharing_data(), moab::ID_FROM_HANDLE(), MAX_SHARING_PROCS, MB_CHK_SET_ERR, MB_SET_ERR, MB_SUCCESS, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, rank(), set_sharing_data(), sharedEnts, and moab::TYPE_FROM_HANDLE().
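
The body above is essentially a merge of the entity's existing sharing lists with the incoming (ps, hs) pairs: processors are deduplicated by rank, a zero handle on file is filled in when a non-zero one arrives, the calling rank is guaranteed a slot, and PSTATUS_SHARED / PSTATUS_MULTISHARED are derived from the final list length. A condensed, stand-alone sketch of the merge step only (hypothetical types and function name; the real code additionally enforces MAX_SHARING_PROCS and owner-first ordering):

#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

using EntityHandle = std::uint64_t; // stand-in for moab::EntityHandle

void merge_sharing( std::vector< int >& procs,
                    std::vector< EntityHandle >& handles,
                    const int* ps, const EntityHandle* hs, int num_ps )
{
    for( int i = 0; i < num_ps; ++i )
    {
        std::size_t idx = 0;
        while( idx < procs.size() && procs[idx] != ps[i] )
            ++idx; // locate ps[i] in the existing proc list
        if( idx < procs.size() )
        {
            if( !handles[idx] && hs[i] )
                handles[idx] = hs[i]; // fill a previously unknown handle
            else
                assert( !hs[i] || handles[idx] == hs[i] ); // handles must agree
        }
        else
        {
            procs.push_back( ps[i] ); // new sharing proc: append in lockstep
            handles.push_back( hs[i] );
        }
    }
}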

◆ update_remote_data() [2/3]

ErrorCode moab::ParallelComm::update_remote_data ( EntityHandle  entity,
std::vector< int > &  procs,
std::vector< EntityHandle > &  handles 
)

Definition at line 1019 of file ParallelComm.cpp.

1022 {
1023  ErrorCode error;
1024  unsigned char pstatus = PSTATUS_INTERFACE;
1025 
1026  int procmin = *std::min_element( procs.begin(), procs.end() );
1027 
1028  if( (int)rank() > procmin )
1029  pstatus |= PSTATUS_NOT_OWNED;
1030  else
1031  procmin = rank();
1032 
1033  // DBG
1034  // std::cout<<"entity = "<<entity<<std::endl;
1035  // for (int j=0; j<procs.size(); j++)
1036  // std::cout<<"procs["<<j<<"] = "<<procs[j]<<", handles["<<j<<"] = "<<handles[j]<<std::endl;
1037  // DBG
1038 
1039  if( (int)procs.size() > 1 )
1040  {
1041  procs.push_back( rank() );
1042  handles.push_back( entity );
1043 
1044  int idx = std::find( procs.begin(), procs.end(), procmin ) - procs.begin();
1045 
1046  std::iter_swap( procs.begin(), procs.begin() + idx );
1047  std::iter_swap( handles.begin(), handles.begin() + idx );
1048 
1049  // DBG
1050  // std::cout<<"entity = "<<entity<<std::endl;
1051  // for (int j=0; j<procs.size(); j++)
1052  // std::cout<<"procs["<<j<<"] = "<<procs[j]<<", handles["<<j<<"] = "<<handles[j]<<std::endl;
1053  // DBG
1054  }
1055 
1056  // if ((entity == 10388) && (rank()==1))
1057  // std::cout<<"Here"<<std::endl;
1058 
1059  error = update_remote_data( entity, &procs[0], &handles[0], procs.size(), pstatus );MB_CHK_ERR( error );
1060 
1061  return MB_SUCCESS;
1062 }

References moab::error(), ErrorCode, MB_CHK_ERR, MB_SUCCESS, PSTATUS_INTERFACE, PSTATUS_NOT_OWNED, and rank().

Referenced by resolve_shared_ents(), unpack_entities(), unpack_remote_handles(), unpack_sets(), and update_remote_data().
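
This overload encodes the ownership convention used when resolving interface entities: the lowest rank in the sharing list owns the entity, that rank and its handle are swapped to the front of the parallel lists, and any higher rank marks itself PSTATUS_NOT_OWNED before delegating to the pointer-based overload. A hedged sketch of just that reordering step (hypothetical function name; assumes non-empty lists):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

using EntityHandle = std::uint64_t; // stand-in for moab::EntityHandle

// Move the owner (minimum rank) to the front of both parallel lists;
// returns true when this rank does NOT own the entity.
bool owner_to_front( int my_rank, std::vector< int >& procs,
                     std::vector< EntityHandle >& handles )
{
    std::size_t idx = std::min_element( procs.begin(), procs.end() ) - procs.begin();
    std::iter_swap( procs.begin(), procs.begin() + idx );
    std::iter_swap( handles.begin(), handles.begin() + idx );
    return procs.front() != my_rank;
}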

◆ update_remote_data() [3/3]

ErrorCode moab::ParallelComm::update_remote_data ( Range &  local_range,
Range &  remote_range,
int  other_proc,
const unsigned char  add_pstat 
)
private

Definition at line 2629 of file ParallelComm.cpp.

2633 {
2634  Range::iterator rit, rit2;
2635  ErrorCode result = MB_SUCCESS;
2636 
2637  // For each pair of local/remote handles:
2638  for( rit = local_range.begin(), rit2 = remote_range.begin(); rit != local_range.end(); ++rit, ++rit2 )
2639  {
2640  result = update_remote_data( *rit, &other_proc, &( *rit2 ), 1, add_pstat );MB_CHK_ERR( result );
2641  }
2642 
2643  return MB_SUCCESS;
2644 }

References moab::Range::begin(), moab::Range::end(), ErrorCode, MB_CHK_ERR, MB_SUCCESS, and update_remote_data().

◆ update_remote_data_old()

ErrorCode moab::ParallelComm::update_remote_data_old ( const EntityHandle  new_h,
const int *  ps,
const EntityHandle *  hs,
const int  num_ps,
const unsigned char  add_pstat 
)
private

Definition at line 2777 of file ParallelComm.cpp.

2782 {
2783  EntityHandle tag_hs[MAX_SHARING_PROCS];
2784  int tag_ps[MAX_SHARING_PROCS];
2785  unsigned char pstat;
2786  // Get initial sharing data; tag_ps and tag_hs get terminated with -1 and 0
2787  // in this function, so no need to initialize
2788  unsigned int num_exist;
2789  ErrorCode result = get_sharing_data( new_h, tag_ps, tag_hs, pstat, num_exist );MB_CHK_ERR( result );
2790 
2791 #ifndef NDEBUG
2792  {
2793  // Check for duplicates in proc list
2794  std::set< unsigned int > dumprocs;
2795  unsigned int dp = 0;
2796  for( ; (int)dp < num_ps && -1 != ps[dp]; dp++ )
2797  dumprocs.insert( ps[dp] );
2798  assert( dp == dumprocs.size() );
2799  }
2800 #endif
2801 
2802  // Add any new sharing data
2803  bool changed = false;
2804  int idx;
2805  if( !num_exist )
2806  {
2807  // Just take what caller passed
2808  memcpy( tag_ps, ps, num_ps * sizeof( int ) );
2809  memcpy( tag_hs, hs, num_ps * sizeof( EntityHandle ) );
2810  num_exist = num_ps;
2811  // If it's only one, hopefully I'm not there yet...
2812  assert( "I shouldn't be the only proc there." && ( 1 != num_exist || ps[0] != (int)procConfig.proc_rank() ) );
2813  changed = true;
2814  }
2815  else
2816  {
2817  for( int i = 0; i < num_ps; i++ )
2818  {
2819  idx = std::find( tag_ps, tag_ps + num_exist, ps[i] ) - tag_ps;
2820  if( idx == (int)num_exist )
2821  {
2822  if( num_exist == MAX_SHARING_PROCS )
2823  {
2824  std::cerr << "Exceeded MAX_SHARING_PROCS for " << CN::EntityTypeName( TYPE_FROM_HANDLE( new_h ) )
2825  << ' ' << ID_FROM_HANDLE( new_h ) << " in process " << proc_config().proc_rank()
2826  << std::endl;
2827  std::cerr.flush();
2828  MPI_Abort( proc_config().proc_comm(), 66 );
2829  }
2830 
2831  // If there's only 1 sharing proc, and it's not me, then
2832  // we'll end up with 3; add me to the front
2833  if( !i && num_ps == 1 && num_exist == 1 && ps[0] != (int)procConfig.proc_rank() )
2834  {
2835  int j = 1;
2836  // If I own this entity, put me at front, otherwise after first
2837  if( !( pstat & PSTATUS_NOT_OWNED ) )
2838  {
2839  tag_ps[1] = tag_ps[0];
2840  tag_hs[1] = tag_hs[0];
2841  j = 0;
2842  }
2843  tag_ps[j] = procConfig.proc_rank();
2844  tag_hs[j] = new_h;
2845  num_exist++;
2846  }
2847 
2848  tag_ps[num_exist] = ps[i];
2849  tag_hs[num_exist] = hs[i];
2850  num_exist++;
2851  changed = true;
2852  }
2853  else if( 0 == tag_hs[idx] )
2854  {
2855  tag_hs[idx] = hs[i];
2856  changed = true;
2857  }
2858  else if( 0 != hs[i] )
2859  {
2860  assert( hs[i] == tag_hs[idx] );
2861  }
2862  }
2863  }
2864 
2865  // Adjust for interface layer if necessary
2866  if( add_pstat & PSTATUS_INTERFACE )
2867  {
2868  idx = std::min_element( tag_ps, tag_ps + num_exist ) - tag_ps;
2869  if( idx )
2870  {
2871  int tag_proc = tag_ps[idx];
2872  tag_ps[idx] = tag_ps[0];
2873  tag_ps[0] = tag_proc;
2874  EntityHandle tag_h = tag_hs[idx];
2875  tag_hs[idx] = tag_hs[0];
2876  tag_hs[0] = tag_h;
2877  changed = true;
2878  if( tag_ps[0] != (int)procConfig.proc_rank() ) pstat |= PSTATUS_NOT_OWNED;
2879  }
2880  }
2881 
2882  if( !changed ) return MB_SUCCESS;
2883 
2884  assert( "interface entities should have > 1 proc" && ( !( add_pstat & PSTATUS_INTERFACE ) || num_exist > 1 ) );
2885  assert( "ghost entities should have > 1 proc" && ( !( add_pstat & PSTATUS_GHOST ) || num_exist > 1 ) );
2886 
2887  // If it's multi-shared and we created the entity in this unpack,
2888  // local handle probably isn't in handle list yet
2889  if( num_exist > 2 )
2890  {
2891  idx = std::find( tag_ps, tag_ps + num_exist, procConfig.proc_rank() ) - tag_ps;
2892  assert( idx < (int)num_exist );
2893  if( !tag_hs[idx] ) tag_hs[idx] = new_h;
2894  }
2895 
2896  int tag_p;
2897  EntityHandle tag_h;
2898 
2899  // Update pstat
2900  pstat |= add_pstat;
2901 
2902  if( num_exist > 2 )
2903  pstat |= ( PSTATUS_MULTISHARED | PSTATUS_SHARED );
2904  else if( num_exist > 0 )
2905  pstat |= PSTATUS_SHARED;
2906 
2907  // compare_remote_data(new_h, num_ps, hs, ps, add_pstat,
2908  // num_exist, tag_hs, tag_ps, pstat);
2909 
2910  // Reset single shared proc/handle if was shared and moving to multi-shared
2911  if( num_exist > 2 && !( pstat & PSTATUS_MULTISHARED ) && ( pstat & PSTATUS_SHARED ) )
2912  {
2913  // Must remove sharedp/h first, which really means set to default value
2914  tag_p = -1;
2915  result = mbImpl->tag_set_data( sharedp_tag(), &new_h, 1, &tag_p );MB_CHK_SET_ERR( result, "Failed to set sharedp tag data" );
2916  tag_h = 0;
2917  result = mbImpl->tag_set_data( sharedh_tag(), &new_h, 1, &tag_h );MB_CHK_SET_ERR( result, "Failed to set sharedh tag data" );
2918  }
2919 
2920  // Set sharing tags
2921  if( num_exist > 2 )
2922  {
2923  std::fill( tag_ps + num_exist, tag_ps + MAX_SHARING_PROCS, -1 );
2924  std::fill( tag_hs + num_exist, tag_hs + MAX_SHARING_PROCS, 0 );
2925  result = mbImpl->tag_set_data( sharedps_tag(), &new_h, 1, tag_ps );MB_CHK_SET_ERR( result, "Failed to set sharedps tag data" );
2926  result = mbImpl->tag_set_data( sharedhs_tag(), &new_h, 1, tag_hs );MB_CHK_SET_ERR( result, "Failed to set sharedhs tag data" );
2927 
2928 #ifndef NDEBUG
2929  {
2930  // Check for duplicates in proc list
2931  std::set< unsigned int > dumprocs;
2932  unsigned int dp = 0;
2933  for( ; dp < num_exist && -1 != tag_ps[dp]; dp++ )
2934  dumprocs.insert( tag_ps[dp] );
2935  assert( dp == dumprocs.size() );
2936  }
2937 #endif
2938  }
2939  else if( num_exist == 2 || num_exist == 1 )
2940  {
2941  if( tag_ps[0] == (int)procConfig.proc_rank() )
2942  {
2943  assert( 2 == num_exist && tag_ps[1] != (int)procConfig.proc_rank() );
2944  tag_ps[0] = tag_ps[1];
2945  tag_hs[0] = tag_hs[1];
2946  }
2947  assert( tag_ps[0] != -1 && tag_hs[0] != 0 );
2948  result = mbImpl->tag_set_data( sharedp_tag(), &new_h, 1, tag_ps );MB_CHK_SET_ERR( result, "Failed to set sharedp tag data" );
2949  result = mbImpl->tag_set_data( sharedh_tag(), &new_h, 1, tag_hs );MB_CHK_SET_ERR( result, "Failed to set sharedh tag data" );
2950  }
2951 
2952  // Now set new pstatus
2953  result = mbImpl->tag_set_data( pstatus_tag(), &new_h, 1, &pstat );MB_CHK_SET_ERR( result, "Failed to set pstatus tag data" );
2954 
2955  if( pstat & PSTATUS_SHARED ) sharedEnts.insert( new_h );
2956 
2957  return MB_SUCCESS;
2958 }

References moab::CN::EntityTypeName(), ErrorCode, get_sharing_data(), moab::ID_FROM_HANDLE(), MAX_SHARING_PROCS, MB_CHK_ERR, MB_CHK_SET_ERR, MB_SUCCESS, mbImpl, proc_config(), moab::ProcConfig::proc_rank(), procConfig, PSTATUS_GHOST, PSTATUS_INTERFACE, PSTATUS_MULTISHARED, PSTATUS_NOT_OWNED, PSTATUS_SHARED, pstatus_tag(), sharedEnts, sharedh_tag(), sharedhs_tag(), sharedp_tag(), sharedps_tag(), moab::Interface::tag_set_data(), and moab::TYPE_FROM_HANDLE().
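
The tail of update_remote_data_old() also illustrates the two-level storage scheme for sharing data: an entity shared with exactly one other processor keeps a single proc/handle pair in sharedp_tag()/sharedh_tag() (holding the other processor, never this rank), while a multi-shared entity uses the fixed-size sharedps_tag()/sharedhs_tag() arrays padded with -1 and 0. A hedged sketch of choosing between the two representations (hypothetical struct in place of MOAB tags; the MAX_SHARING_PROCS value is assumed for illustration):

#include <algorithm>
#include <cstdint>

using EntityHandle = std::uint64_t;   // stand-in for moab::EntityHandle
constexpr int MAX_SHARING_PROCS = 64; // assumed limit, for illustration only

struct SharingTags
{
    int sharedp = -1;                          // single sharing proc, or -1
    EntityHandle sharedh = 0;                  // single remote handle, or 0
    int sharedps[MAX_SHARING_PROCS];           // multi-shared procs, -1 padded
    EntityHandle sharedhs[MAX_SHARING_PROCS];  // multi-shared handles, 0 padded
};

void store_sharing( SharingTags& t, const int* ps, const EntityHandle* hs, int n )
{
    if( n > 2 )
    {
        // Multi-shared: full lists, padded out to the fixed tag size
        std::copy( ps, ps + n, t.sharedps );
        std::copy( hs, hs + n, t.sharedhs );
        std::fill( t.sharedps + n, t.sharedps + MAX_SHARING_PROCS, -1 );
        std::fill( t.sharedhs + n, t.sharedhs + MAX_SHARING_PROCS, 0 );
    }
    else
    {
        // Shared with one other proc: caller ensures ps[0] is the other
        // processor, as in the num_exist <= 2 branch above
        t.sharedp = ps[0];
        t.sharedh = hs[0];
    }
}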

Friends And Related Function Documentation

◆ ParallelMergeMesh

friend class ParallelMergeMesh
friend

Definition at line 57 of file ParallelComm.hpp.

Member Data Documentation

◆ ackbuff

int moab::ParallelComm::ackbuff
private

Definition at line 1475 of file ParallelComm.hpp.

Referenced by recv_entities(), and send_entities().

◆ buffProcs

◆ errorHandler

Error* moab::ParallelComm::errorHandler
private

Error handler.

Definition at line 1438 of file ParallelComm.hpp.

Referenced by initialize(), and packed_tag_size().

◆ globalPartCount

int moab::ParallelComm::globalPartCount
private

Cache of global part count.

Definition at line 1467 of file ParallelComm.hpp.

Referenced by collective_sync_partition(), create_part(), destroy_part(), and get_global_part_count().

◆ ifaceSetsTag

Tag moab::ParallelComm::ifaceSetsTag
private

Definition at line 1465 of file ParallelComm.hpp.

◆ INITIAL_BUFF_SIZE

◆ interfaceSets

◆ localOwnedBuffs

◆ mbImpl

Interface* moab::ParallelComm::mbImpl
private

◆ myDebug

◆ myFile

std::ofstream moab::ParallelComm::myFile
private

Definition at line 1471 of file ParallelComm.hpp.

◆ partitioningSet

EntityHandle moab::ParallelComm::partitioningSet
private

entity set containing all parts

Definition at line 1469 of file ParallelComm.hpp.

Referenced by get_partitioning(), and set_partitioning().

◆ partitionSets

Range moab::ParallelComm::partitionSets
private

the partition and interface sets for this communication instance

Definition at line 1459 of file ParallelComm.hpp.

Referenced by get_part_entities(), partition_sets(), resolve_shared_ents(), and moab::ParallelMergeMesh::TagSharedElements().

◆ partitionTag

Tag moab::ParallelComm::partitionTag
private

Definition at line 1465 of file ParallelComm.hpp.

Referenced by partition_tag().

◆ pcommID

int moab::ParallelComm::pcommID
private

Definition at line 1473 of file ParallelComm.hpp.

Referenced by get_id(), initialize(), and ParallelComm().

◆ PROC_OWNER

unsigned char moab::ParallelComm::PROC_OWNER
static

Definition at line 88 of file ParallelComm.hpp.

◆ PROC_SHARED

unsigned char moab::ParallelComm::PROC_SHARED
static

Definition at line 88 of file ParallelComm.hpp.

◆ procConfig

◆ pstatusTag

Tag moab::ParallelComm::pstatusTag
private

Definition at line 1465 of file ParallelComm.hpp.

Referenced by pstatus_tag().

◆ recvRemotehReqs

std::vector< MPI_Request > moab::ParallelComm::recvRemotehReqs
private

Definition at line 1453 of file ParallelComm.hpp.

Referenced by exchange_owned_meshs(), post_irecv(), recv_entities(), and send_entities().

◆ recvReqs

std::vector< MPI_Request > moab::ParallelComm::recvReqs
private

receive request objects

Definition at line 1453 of file ParallelComm.hpp.

Referenced by exchange_owned_meshs(), post_irecv(), recv_entities(), recv_messages(), send_entities(), and set_recv_request().

◆ remoteOwnedBuffs

◆ sendReqs

std::vector< MPI_Request > moab::ParallelComm::sendReqs
private

◆ sequenceManager

SequenceManager* moab::ParallelComm::sequenceManager
private

Sequence manager, to get more efficient access to entities.

Definition at line 1435 of file ParallelComm.hpp.

Referenced by get_tag_send_list(), initialize(), pack_entities(), and packed_tag_size().

◆ sharedEnts

◆ sharedhsTag

Tag moab::ParallelComm::sharedhsTag
private

Definition at line 1465 of file ParallelComm.hpp.

Referenced by sharedhs_tag().

◆ sharedhTag

Tag moab::ParallelComm::sharedhTag
private

Definition at line 1465 of file ParallelComm.hpp.

Referenced by sharedh_tag().

◆ sharedpsTag

Tag moab::ParallelComm::sharedpsTag
private

Definition at line 1465 of file ParallelComm.hpp.

Referenced by sharedps_tag().

◆ sharedpTag

Tag moab::ParallelComm::sharedpTag
private

tags used to save sharing procs and handles

Definition at line 1465 of file ParallelComm.hpp.

Referenced by sharedp_tag().

◆ sharedSetData


The documentation for this class was generated from the following files:

ParallelComm.hpp
ParallelComm.cpp