libMesh
Public Member Functions | List of all members
libMesh::MeshCommunication Class Reference

This is the MeshCommunication class. More...

#include <mesh_communication.h>

Public Member Functions

 MeshCommunication ()=default
 Constructor. More...
 
 ~MeshCommunication ()=default
 Destructor. More...
 
void clear ()
 Clears all data structures and resets to a pristine state. More...
 
void broadcast (MeshBase &) const
 This method takes a mesh (which is assumed to reside on processor 0) and broadcasts it to all the other processors. More...
 
void redistribute (DistributedMesh &mesh, bool newly_coarsened_only=false) const
 This method takes a parallel distributed mesh and redistributes the elements. More...
 
void gather_neighboring_elements (DistributedMesh &) const
 
void send_coarse_ghosts (MeshBase &) const
 Examine a just-coarsened mesh, and for any newly-coarsened elements, send the associated ghosted elements to the processor which needs them. More...
 
void gather (const processor_id_type root_id, MeshBase &) const
 This method takes an input DistributedMesh which may be distributed among all the processors. More...
 
void allgather (MeshBase &mesh) const
 This method takes an input DistributedMesh which may be distributed among all the processors. More...
 
void delete_remote_elements (DistributedMesh &, const std::set< Elem *> &) const
 This method takes an input DistributedMesh which may be distributed among all the processors. More...
 
void assign_global_indices (MeshBase &) const
 This method assigns globally unique, partition-agnostic indices to the nodes and elements in the mesh. More...
 
void check_for_duplicate_global_indices (MeshBase &) const
 Throw an error if we have any index clashes in the numbering used by assign_global_indices. More...
 
template<typename ForwardIterator >
void find_local_indices (const libMesh::BoundingBox &, const ForwardIterator &, const ForwardIterator &, std::unordered_map< dof_id_type, dof_id_type > &) const
 This method determines a locally unique, contiguous index for each object in the input range. More...
 
template<typename ForwardIterator >
void find_global_indices (const Parallel::Communicator &communicator, const libMesh::BoundingBox &, const ForwardIterator &, const ForwardIterator &, std::vector< dof_id_type > &) const
 This method determines a globally unique, partition-agnostic index for each object in the input range. More...
 
void make_elems_parallel_consistent (MeshBase &)
 Copy ids of ghost elements from their local processors. More...
 
void make_p_levels_parallel_consistent (MeshBase &)
 Copy p levels of ghost elements from their local processors. More...
 
void make_node_ids_parallel_consistent (MeshBase &)
 Assuming all ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all other ids parallel consistent. More...
 
void make_node_unique_ids_parallel_consistent (MeshBase &)
 Assuming all unique_ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all ghost unique_ids parallel consistent. More...
 
void make_node_bcids_parallel_consistent (MeshBase &)
 Assuming all processor ids are parallel consistent, this function makes all ghost boundary ids on nodes parallel consistent. More...
 
void make_node_proc_ids_parallel_consistent (MeshBase &)
 Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes all other processor ids parallel consistent as well. More...
 
void make_new_node_proc_ids_parallel_consistent (MeshBase &)
 Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes processor ids on new nodes on other processors parallel consistent as well. More...
 
void make_nodes_parallel_consistent (MeshBase &)
 Copy processor_ids and ids on ghost nodes from their local processors. More...
 
void make_new_nodes_parallel_consistent (MeshBase &)
 Copy processor_ids and ids on new nodes from their local processors. More...
 

Detailed Description

This is the MeshCommunication class.

It handles all the details of communicating mesh information from one processor to another. All parallelization of the Mesh data structures is done via this class.

Author
Benjamin S. Kirk
Date
2003

Definition at line 50 of file mesh_communication.h.

Constructor & Destructor Documentation

◆ MeshCommunication()

libMesh::MeshCommunication::MeshCommunication ( )
default

Constructor.

◆ ~MeshCommunication()

libMesh::MeshCommunication::~MeshCommunication ( )
default

Destructor.

Member Function Documentation

◆ allgather()

void libMesh::MeshCommunication::allgather ( MeshBase & mesh ) const
inline

This method takes an input DistributedMesh which may be distributed among all the processors.

Each processor then sends its local nodes and elements to the other processors. The end result is that a previously distributed DistributedMesh will be serialized on each processor. Since this method is collective it must be called by all processors.

Definition at line 135 of file mesh_communication.h.

References gather(), libMesh::DofObject::invalid_processor_id, and mesh.

Referenced by libMesh::Nemesis_IO::read().

◆ assign_global_indices()

void libMesh::MeshCommunication::assign_global_indices ( MeshBase & mesh ) const

This method assigns globally unique, partition-agnostic indices to the nodes and elements in the mesh.

The approach is to compute the Hilbert space-filling curve key and use its value to assign an index in [0,N_global). Since the Hilbert key is unique for each spatial location, two objects occupying the same location will be assigned the same global id. Thus, this method can also be useful for identifying duplicate nodes which may occur during parallel refinement.

Definition at line 204 of file mesh_communication_global_indices.C.

References libMesh::Parallel::Sort< KeyType, IdxType >::bin(), libMesh::ParallelObject::comm(), communicator, libMesh::MeshTools::create_nodal_bounding_box(), distance(), libMesh::MeshTools::Generation::Private::idx(), libMesh::libmesh_assert(), libMesh::MeshTools::libmesh_assert_equal_connectivity(), libMesh::MeshTools::libmesh_assert_equal_points(), libMesh::make_range(), mesh, libMesh::MeshBase::n_elem(), libMesh::MeshBase::n_nodes(), libMesh::Threads::parallel_for(), and libMesh::Parallel::Sort< KeyType, IdxType >::sort().

Referenced by libMesh::MeshTools::Private::globally_renumber_nodes_and_elements().

205 {
206  LOG_SCOPE ("assign_global_indices()", "MeshCommunication");
207 
208  // This method determines partition-agnostic global indices
209  // for nodes and elements.
210 
211  // Algorithm:
212  // (1) compute the Hilbert key for each local node/element
213  // (2) perform a parallel sort of the Hilbert key
214  // (3) get the min/max value on each processor
215  // (4) determine the position in the global ranking for
216  // each local object
217 
219 
220 #ifdef DEBUG
221  // This is going to be a mess if geometry is out of sync
222  MeshTools::libmesh_assert_equal_points(mesh);
223  MeshTools::libmesh_assert_equal_connectivity(mesh);
224 #endif
225 
226  // Global bounding box. We choose the nodal bounding box for
227  // backwards compatibility; the element bounding box may be looser
228  // on curved elements.
229  const BoundingBox bbox =
230  MeshTools::create_nodal_bounding_box (mesh);
231 
232  const Point bboxinv = invert_bbox(bbox);
233 
234  //-------------------------------------------------------------
235  // (1) compute Hilbert keys
236  std::vector<Parallel::DofObjectKey>
237  node_keys, elem_keys;
238 
239  {
240  // Nodes first
241  {
242  ConstNodeRange nr (mesh.local_nodes_begin(),
243  mesh.local_nodes_end());
244  node_keys.resize (nr.size());
245  Threads::parallel_for (nr, ComputeHilbertKeys (bbox, bboxinv, node_keys));
246 
247  // // It's O(N^2) to check that these keys don't duplicate before the
248  // // sort...
249  //
250  // MeshBase::const_node_iterator nodei = mesh.local_nodes_begin();
251  // for (std::size_t i = 0; i != node_keys.size(); ++i, ++nodei)
252  // {
253  // MeshBase::const_node_iterator nodej = mesh.local_nodes_begin();
254  // for (std::size_t j = 0; j != i; ++j, ++nodej)
255  // {
256  // if (node_keys[i] == node_keys[j])
257  // {
258  // CFixBitVec icoords[3], jcoords[3];
259  // get_hilbert_coords(**nodej, bbox, bboxinv, jcoords);
260  // libMesh::err << "node " << (*nodej)->id() << ", " << static_cast<Point &>(**nodej) << " has HilbertIndices " << node_keys[j] << std::endl;
261  // get_hilbert_coords(**nodei, bbox, bboxinv, icoords);
262  // libMesh::err << "node " << (*nodei)->id() << ", " << static_cast<Point &>(**nodei) << " has HilbertIndices " << node_keys[i] << std::endl;
263  // libmesh_error_msg("Error: nodes with duplicate Hilbert keys!");
264  // }
265  // }
266  // }
267  }
268 
269  // Elements next
270  {
271  ConstElemRange er (mesh.local_elements_begin(),
272  mesh.local_elements_end());
273  elem_keys.resize (er.size());
274  Threads::parallel_for (er, ComputeHilbertKeys (bbox, bboxinv, elem_keys));
275 
276  // // For elements, the keys can be (and in the case of TRI, are
277  // // expected to be) duplicates, but only if the elements are at
278  // // different levels
279  // MeshBase::const_element_iterator elemi = mesh.local_elements_begin();
280  // for (std::size_t i = 0; i != elem_keys.size(); ++i, ++elemi)
281  // {
282  // MeshBase::const_element_iterator elemj = mesh.local_elements_begin();
283  // for (std::size_t j = 0; j != i; ++j, ++elemj)
284  // {
285  // if ((elem_keys[i] == elem_keys[j]) &&
286  // ((*elemi)->level() == (*elemj)->level()))
287  // {
288  // libMesh::err << "level " << (*elemj)->level()
289  // << " elem\n" << (**elemj)
290  // << " vertex average " << (*elemj)->vertex_average()
291  // << " has HilbertIndices " << elem_keys[j]
292  // << " or " << get_dofobject_key((**elemj), bbox, bboxinv)
293  // << std::endl;
294  // libMesh::err << "level " << (*elemi)->level()
295  // << " elem\n" << (**elemi)
296  // << " vertex average " << (*elemi)->vertex_average()
297  // << " has HilbertIndices " << elem_keys[i]
298  // << " or " << get_dofobject_key((**elemi), bbox, bboxinv)
299  // << std::endl;
300  // libmesh_error_msg("Error: level " << (*elemi)->level() << " elements with duplicate Hilbert keys!");
301  // }
302  // }
303  // }
304  }
305  } // done computing Hilbert keys
306 
307 
308 
309  //-------------------------------------------------------------
310  // (2) parallel sort the Hilbert keys
311  Parallel::Sort<Parallel::DofObjectKey> node_sorter (communicator,
312  node_keys);
313  node_sorter.sort(); /* done with node_keys */ //node_keys.clear();
314 
315  const std::vector<Parallel::DofObjectKey> & my_node_bin =
316  node_sorter.bin();
317 
318  Parallel::Sort<Parallel::DofObjectKey> elem_sorter (communicator,
319  elem_keys);
320  elem_sorter.sort(); /* done with elem_keys */ //elem_keys.clear();
321 
322  const std::vector<Parallel::DofObjectKey> & my_elem_bin =
323  elem_sorter.bin();
324 
325 
326 
327  //-------------------------------------------------------------
328  // (3) get the max value on each processor
329  std::vector<Parallel::DofObjectKey>
330  node_upper_bounds(communicator.size()),
331  elem_upper_bounds(communicator.size());
332 
333  { // limit scope of temporaries
334  std::vector<Parallel::DofObjectKey> recvbuf(2*communicator.size());
335  std::vector<unsigned short int> /* do not use a vector of bools here since it is not always so! */
336  empty_nodes (communicator.size()),
337  empty_elem (communicator.size());
338  std::vector<Parallel::DofObjectKey> my_max(2);
339 
340  communicator.allgather (static_cast<unsigned short int>(my_node_bin.empty()), empty_nodes);
341  communicator.allgather (static_cast<unsigned short int>(my_elem_bin.empty()), empty_elem);
342 
343  if (!my_node_bin.empty()) my_max[0] = my_node_bin.back();
344  if (!my_elem_bin.empty()) my_max[1] = my_elem_bin.back();
345 
346  communicator.allgather (my_max, /* identical_buffer_sizes = */ true);
347 
348  // Be careful here. The *_upper_bounds will be used to find the processor
349  // a given object belongs to. So, if a processor contains no objects (possible!)
350  // then copy the bound from the lower processor id.
351  for (auto p : make_range(communicator.size()))
352  {
353  node_upper_bounds[p] = my_max[2*p+0];
354  elem_upper_bounds[p] = my_max[2*p+1];
355 
356  if (p > 0) // default hilbert index value is the OK upper bound for processor 0.
357  {
358  if (empty_nodes[p]) node_upper_bounds[p] = node_upper_bounds[p-1];
359  if (empty_elem[p]) elem_upper_bounds[p] = elem_upper_bounds[p-1];
360  }
361  }
362  }
363 
364 
365 
366  //-------------------------------------------------------------
367  // (4) determine the position in the global ranking for
368  // each local object
369  {
370  //----------------------------------------------
371  // Nodes first -- all nodes, not just local ones
372  {
373  // Request sets to send to each processor
374  std::map<processor_id_type, std::vector<Parallel::DofObjectKey>>
375  requested_ids;
376  // Results to gather from each processor - kept in a map so we
377  // do only one loop over nodes after all receives are done.
378  std::map<dof_id_type, std::vector<dof_id_type>>
379  filled_request;
380 
381  // build up list of requests
382  for (const auto & node : mesh.node_ptr_range())
383  {
384  libmesh_assert(node);
385  const Parallel::DofObjectKey hi =
386  get_dofobject_key (*node, bbox, bboxinv);
387  const processor_id_type pid =
388  cast_int<processor_id_type>
389  (std::distance (node_upper_bounds.begin(),
390  std::lower_bound(node_upper_bounds.begin(),
391  node_upper_bounds.end(),
392  hi)));
393 
394  libmesh_assert_less (pid, communicator.size());
395 
396  requested_ids[pid].push_back(hi);
397  }
398 
399  // The number of objects in my_node_bin on each processor
400  std::vector<dof_id_type> node_bin_sizes(communicator.size());
401  communicator.allgather (static_cast<dof_id_type>(my_node_bin.size()), node_bin_sizes);
402 
403  // The offset of my first global index
404  dof_id_type my_offset = 0;
405  for (auto pid : make_range(communicator.rank()))
406  my_offset += node_bin_sizes[pid];
407 
408  auto gather_functor =
409  [
410 #ifndef NDEBUG
411  & node_upper_bounds,
412  & communicator,
413 #endif
414  & my_node_bin,
415  my_offset
416  ]
417  (processor_id_type,
418  const std::vector<Parallel::DofObjectKey> & keys,
419  std::vector<dof_id_type> & global_ids)
420  {
421  // Fill the requests
422  const std::size_t keys_size = keys.size();
423  global_ids.reserve(keys_size);
424  for (std::size_t idx=0; idx != keys_size; idx++)
425  {
426  const Parallel::DofObjectKey & hi = keys[idx];
427  libmesh_assert_less_equal (hi, node_upper_bounds[communicator.rank()]);
428 
429  // find the requested index in my node bin
430  std::vector<Parallel::DofObjectKey>::const_iterator pos =
431  std::lower_bound (my_node_bin.begin(), my_node_bin.end(), hi);
432  libmesh_assert (pos != my_node_bin.end());
433  libmesh_assert_equal_to (*pos, hi);
434 
435  // Finally, assign the global index based off the position of the index
436  // in my array, properly offset.
437  global_ids.push_back(cast_int<dof_id_type>(std::distance(my_node_bin.begin(), pos) + my_offset));
438  }
439  };
440 
441  auto action_functor =
442  [&filled_request]
443  (processor_id_type pid,
444  const std::vector<Parallel::DofObjectKey> &,
445  const std::vector<dof_id_type> & global_ids)
446  {
447  filled_request[pid] = global_ids;
448  };
449 
450  // Trade requests with other processors
451  const dof_id_type * ex = nullptr;
452  Parallel::pull_parallel_vector_data
453  (communicator, requested_ids, gather_functor, action_functor, ex);
454 
455  // We now have all the filled requests, so we can loop through our
456  // nodes once and assign the global index to each one.
457  {
458  std::map<dof_id_type, std::vector<dof_id_type>::const_iterator>
459  next_obj_on_proc;
460  for (auto & p : filled_request)
461  next_obj_on_proc[p.first] = p.second.begin();
462 
463  for (auto & node : mesh.node_ptr_range())
464  {
465  libmesh_assert(node);
466  const Parallel::DofObjectKey hi =
467  get_dofobject_key (*node, bbox, bboxinv);
468  const processor_id_type pid =
469  cast_int<processor_id_type>
470  (std::distance (node_upper_bounds.begin(),
471  std::lower_bound(node_upper_bounds.begin(),
472  node_upper_bounds.end(),
473  hi)));
474 
475  libmesh_assert_less (pid, communicator.size());
476  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
477 
478  const dof_id_type global_index = *next_obj_on_proc[pid];
479  libmesh_assert_less (global_index, mesh.n_nodes());
480  node->set_id() = global_index;
481 
482  ++next_obj_on_proc[pid];
483  }
484  }
485  }
486 
487  //---------------------------------------------------
488  // elements next -- all elements, not just local ones
489  {
490  // Request sets to send to each processor
491  std::map<processor_id_type, std::vector<Parallel::DofObjectKey>>
492  requested_ids;
493  // Results to gather from each processor - kept in a map so we
494  // do only one loop over elements after all receives are done.
495  std::map<dof_id_type, std::vector<dof_id_type>>
496  filled_request;
497 
498  for (const auto & elem : mesh.element_ptr_range())
499  {
500  libmesh_assert(elem);
501  const Parallel::DofObjectKey hi =
502  get_dofobject_key (*elem, bbox, bboxinv);
503  const processor_id_type pid =
504  cast_int<processor_id_type>
505  (std::distance (elem_upper_bounds.begin(),
506  std::lower_bound(elem_upper_bounds.begin(),
507  elem_upper_bounds.end(),
508  hi)));
509 
510  libmesh_assert_less (pid, communicator.size());
511 
512  requested_ids[pid].push_back(hi);
513  }
514 
515  // The number of objects in my_elem_bin on each processor
516  std::vector<dof_id_type> elem_bin_sizes(communicator.size());
517  communicator.allgather (static_cast<dof_id_type>(my_elem_bin.size()), elem_bin_sizes);
518 
519  // The offset of my first global index
520  dof_id_type my_offset = 0;
521  for (auto pid : make_range(communicator.rank()))
522  my_offset += elem_bin_sizes[pid];
523 
524  auto gather_functor =
525  [
526 #ifndef NDEBUG
527  & elem_upper_bounds,
528  & communicator,
529 #endif
530  & my_elem_bin,
531  my_offset
532  ]
533  (processor_id_type,
534  const std::vector<Parallel::DofObjectKey> & keys,
535  std::vector<dof_id_type> & global_ids)
536  {
537  // Fill the requests
538  const std::size_t keys_size = keys.size();
539  global_ids.reserve(keys_size);
540  for (std::size_t idx=0; idx != keys_size; idx++)
541  {
542  const Parallel::DofObjectKey & hi = keys[idx];
543  libmesh_assert_less_equal (hi, elem_upper_bounds[communicator.rank()]);
544 
545  // find the requested index in my elem bin
546  std::vector<Parallel::DofObjectKey>::const_iterator pos =
547  std::lower_bound (my_elem_bin.begin(), my_elem_bin.end(), hi);
548  libmesh_assert (pos != my_elem_bin.end());
549  libmesh_assert_equal_to (*pos, hi);
550 
551  // Finally, assign the global index based off the position of the index
552  // in my array, properly offset.
553  global_ids.push_back (cast_int<dof_id_type>(std::distance(my_elem_bin.begin(), pos) + my_offset));
554  }
555  };
556 
557  auto action_functor =
558  [&filled_request]
559  (processor_id_type pid,
560  const std::vector<Parallel::DofObjectKey> &,
561  const std::vector<dof_id_type> & global_ids)
562  {
563  filled_request[pid] = global_ids;
564  };
565 
566  // Trade requests with other processors
567  const dof_id_type * ex = nullptr;
568  Parallel::pull_parallel_vector_data
569  (communicator, requested_ids, gather_functor, action_functor, ex);
570 
571  // We now have all the filled requests, so we can loop through our
572  // elements once and assign the global index to each one.
573  {
574  std::vector<std::vector<dof_id_type>::const_iterator>
575  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
576  for (auto pid : make_range(communicator.size()))
577  next_obj_on_proc.push_back(filled_request[pid].begin());
578 
579  for (auto & elem : mesh.element_ptr_range())
580  {
581  libmesh_assert(elem);
582  const Parallel::DofObjectKey hi =
583  get_dofobject_key (*elem, bbox, bboxinv);
584  const processor_id_type pid =
585  cast_int<processor_id_type>
586  (std::distance (elem_upper_bounds.begin(),
587  std::lower_bound(elem_upper_bounds.begin(),
588  elem_upper_bounds.end(),
589  hi)));
590 
591  libmesh_assert_less (pid, communicator.size());
592  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
593 
594  const dof_id_type global_index = *next_obj_on_proc[pid];
595  libmesh_assert_less (global_index, mesh.n_elem());
596  elem->set_id() = global_index;
597 
598  ++next_obj_on_proc[pid];
599  }
600  }
601  }
602  }
603 }

◆ broadcast()

void libMesh::MeshCommunication::broadcast ( MeshBase & mesh ) const

This method takes a mesh (which is assumed to reside on processor 0) and broadcasts it to all the other processors.

It also broadcasts any boundary information the mesh has associated with it.

Definition at line 1015 of file mesh_communication.C.

Referenced by main(), libMesh::NameBasedIO::read(), libMesh::CheckpointIO::read(), ExodusTest< elem_type >::test_read_gold(), ExodusTest< elem_type >::test_write(), MeshInputTest::testAbaqusRead(), MeshInputTest::testCopyElementSolutionImpl(), MeshInputTest::testCopyElementVectorImpl(), MeshInputTest::testCopyNodalSolutionImpl(), MeshInputTest::testDynaFileMappings(), MeshInputTest::testDynaNoSplines(), MeshInputTest::testDynaReadElem(), MeshInputTest::testDynaReadPatch(), MeshInputTest::testExodusFileMappings(), MeshInputTest::testExodusIGASidesets(), MeshInputTest::testExodusWriteElementDataFromDiscontinuousNodalData(), MeshInputTest::testLowOrderEdgeBlocks(), and MeshInputTest::testTetgenIO().

1016 {
1017  // no MPI == one processor, no need for this method...
1018  return;
1019 }

◆ check_for_duplicate_global_indices()

void libMesh::MeshCommunication::check_for_duplicate_global_indices ( MeshBase & mesh ) const

Throw an error if we have any index clashes in the numbering used by assign_global_indices.

Definition at line 612 of file mesh_communication_global_indices.C.

References libMesh::MeshTools::create_nodal_bounding_box(), libMesh::err, mesh, and libMesh::Threads::parallel_for().

613 {
614  LOG_SCOPE ("check_for_duplicate_global_indices()", "MeshCommunication");
615 
616  // Global bounding box. We choose the nodal bounding box for
617  // backwards compatibility; the element bounding box may be looser
618  // on curved elements.
619  const BoundingBox bbox =
620  MeshTools::create_nodal_bounding_box (mesh);
621 
622  const Point bboxinv = invert_bbox(bbox);
623 
624  std::vector<Parallel::DofObjectKey>
625  node_keys, elem_keys;
626 
627  {
628  // Nodes first
629  {
630  ConstNodeRange nr (mesh.local_nodes_begin(),
631  mesh.local_nodes_end());
632  node_keys.resize (nr.size());
633  Threads::parallel_for (nr, ComputeHilbertKeys (bbox, bboxinv, node_keys));
634 
635  // It's O(N^2) to check that these keys don't duplicate before the
636  // sort...
637  MeshBase::const_node_iterator nodei = mesh.local_nodes_begin();
638  for (std::size_t i = 0; i != node_keys.size(); ++i, ++nodei)
639  {
640  MeshBase::const_node_iterator nodej = mesh.local_nodes_begin();
641  for (std::size_t j = 0; j != i; ++j, ++nodej)
642  {
643  if (node_keys[i] == node_keys[j])
644  {
645  CFixBitVec icoords[3], jcoords[3];
646  get_hilbert_coords(**nodej, bbox, bboxinv, jcoords);
647  libMesh::err <<
648  "node " << (*nodej)->id() << ", " <<
649  *(const Point *)(*nodej) << " has HilbertIndices " <<
650  node_keys[j] << std::endl;
651  get_hilbert_coords(**nodei, bbox, bboxinv, icoords);
652  libMesh::err <<
653  "node " << (*nodei)->id() << ", " <<
654  *(const Point *)(*nodei) << " has HilbertIndices " <<
655  node_keys[i] << std::endl;
656  libmesh_error_msg("Error: nodes with duplicate Hilbert keys!");
657  }
658  }
659  }
660  }
661 
662  // Elements next
663  {
664  ConstElemRange er (mesh.local_elements_begin(),
665  mesh.local_elements_end());
666  elem_keys.resize (er.size());
667  Threads::parallel_for (er, ComputeHilbertKeys (bbox, bboxinv, elem_keys));
668 
669  // For elements, the keys can be (and in the case of TRI, are
670  // expected to be) duplicates, but only if the elements are at
671  // different levels
672  MeshBase::const_element_iterator elemi = mesh.local_elements_begin();
673  for (std::size_t i = 0; i != elem_keys.size(); ++i, ++elemi)
674  {
675  MeshBase::const_element_iterator elemj = mesh.local_elements_begin();
676  for (std::size_t j = 0; j != i; ++j, ++elemj)
677  {
678  if ((elem_keys[i] == elem_keys[j]) &&
679  ((*elemi)->level() == (*elemj)->level()))
680  {
681  libMesh::err <<
682  "level " << (*elemj)->level() << " elem\n" <<
683  (**elemj) << " vertex average " <<
684  (*elemj)->vertex_average() << " has HilbertIndices " <<
685  elem_keys[j] << " or " <<
686  get_dofobject_key((**elemj), bbox, bboxinv) <<
687  std::endl;
688  libMesh::err <<
689  "level " << (*elemi)->level() << " elem\n" <<
690  (**elemi) << " vertex average " <<
691  (*elemi)->vertex_average() << " has HilbertIndices " <<
692  elem_keys[i] << " or " <<
693  get_dofobject_key((**elemi), bbox, bboxinv) <<
694  std::endl;
695  libmesh_error_msg("Error: level " << (*elemi)->level() << " elements with duplicate Hilbert keys!");
696  }
697  }
698  }
699  }
700  } // done checking Hilbert keys
701 }

◆ clear()

void libMesh::MeshCommunication::clear ( )

Clears all data structures and resets to a pristine state.

Definition at line 330 of file mesh_communication.C.

331 {
332  // _neighboring_processors.clear();
333 }

◆ delete_remote_elements()

void libMesh::MeshCommunication::delete_remote_elements ( DistributedMesh & mesh,
const std::set< Elem *> &  extra_ghost_elem_ids 
) const

This method takes an input DistributedMesh which may be distributed among all the processors.

Each processor deletes all elements which are neither local elements nor "ghost" elements which touch local elements, and deletes all nodes which are not contained in local or ghost elements. The end result is that a previously serial DistributedMesh will be distributed between processors. Since this method is collective it must be called by all processors.

The std::set is a list of extra elements that you don't want to delete. These will be left on the current processor along with local elements and ghosted neighbors.

Definition at line 1979 of file mesh_communication.C.

References libMesh::Elem::active_family_tree(), libMesh::as_range(), libMesh::MeshBase::clear_point_locator(), libMesh::ParallelObject::comm(), libMesh::connect_children(), libMesh::connect_families(), libMesh::MeshBase::delete_elem(), libMesh::MeshBase::delete_node(), libMesh::MeshBase::get_constraint_rows(), libMesh::MeshBase::ghosting_functors_begin(), libMesh::MeshBase::ghosting_functors_end(), libMesh::DofObject::id(), libMesh::DofObject::invalid_processor_id, libMesh::MeshBase::is_serial(), libMesh::libmesh_assert(), libMesh::MeshTools::libmesh_assert_valid_constraint_rows(), libMesh::MeshTools::libmesh_assert_valid_refinement_tree(), libMesh::Elem::make_links_to_me_remote(), libMesh::MeshBase::max_elem_id(), libMesh::MeshBase::max_node_id(), mesh, libMesh::MeshTools::n_levels(), libMesh::MeshBase::node_ptr(), libMesh::ParallelObject::processor_id(), libMesh::DofObject::processor_id(), libMesh::query_ghosting_functors(), libMesh::reconnect_nodes(), libMesh::Elem::subactive(), and TIMPI::Communicator::verify().

1981 {
1982  // The mesh should know it's about to be parallelized
1983  libmesh_assert (!mesh.is_serial());
1984 
1985  LOG_SCOPE("delete_remote_elements()", "MeshCommunication");
1986 
1987 #ifdef DEBUG
1988  // We expect maximum ids to be in sync so we can use them to size
1989  // vectors
1992  const dof_id_type par_max_node_id = mesh.parallel_max_node_id();
1993  const dof_id_type par_max_elem_id = mesh.parallel_max_elem_id();
1994  libmesh_assert_equal_to (par_max_node_id, mesh.max_node_id());
1995  libmesh_assert_equal_to (par_max_elem_id, mesh.max_elem_id());
1996 #endif
1997 
1998  std::set<const Elem *, CompareElemIdsByLevel> elements_to_keep;
1999 
2000  // Don't delete elements that we were explicitly told not to
2001  for (const auto & elem : extra_ghost_elem_ids)
2002  {
2003  std::vector<const Elem *> active_family;
2004 #ifdef LIBMESH_ENABLE_AMR
2005  if (!elem->subactive())
2006  elem->active_family_tree(active_family);
2007  else
2008 #endif
2009  active_family.push_back(elem);
2010 
2011  for (const auto & f : active_family)
2012  elements_to_keep.insert(f);
2013  }
2014 
2015  // See which elements we still need to keep ghosted, given that
2016  // we're keeping local and unpartitioned elements.
2017  query_ghosting_functors
2018  (mesh, mesh.processor_id(),
2019  mesh.active_pid_elements_begin(mesh.processor_id()),
2020  mesh.active_pid_elements_end(mesh.processor_id()),
2021  elements_to_keep);
2022  query_ghosting_functors
2023  (mesh, DofObject::invalid_processor_id,
2024  mesh.active_pid_elements_begin(DofObject::invalid_processor_id),
2025  mesh.active_pid_elements_end(DofObject::invalid_processor_id),
2026  elements_to_keep);
2027 
2028  // The inactive elements we need to send should have their
2029  // immediate children present.
2030  connect_children(mesh, mesh.pid_elements_begin(mesh.processor_id()),
2031  mesh.pid_elements_end(mesh.processor_id()),
2032  elements_to_keep);
2033  connect_children(mesh,
2034  mesh.pid_elements_begin(DofObject::invalid_processor_id),
2035  mesh.pid_elements_end(DofObject::invalid_processor_id),
2036  elements_to_keep);
2037 
2038  // The elements we need should have their ancestors, their
2039  // interior_parent links, and their subactive children present too.
2040  // If the mesh has any constraint rows, then elements with
2041  // constrained nodes need elements with constraining nodes to remain
2042  // present.
2043  connect_families(elements_to_keep, &mesh);
2044 
2045  // Don't delete nodes that our semilocal elements need
2046  connected_node_set_type connected_nodes;
2047  reconnect_nodes(elements_to_keep, connected_nodes);
2048 
2049  // Delete all the elements we have no reason to save,
2050  // starting with the most refined so that the mesh
2051  // is valid at all intermediate steps
2052  unsigned int n_levels = MeshTools::n_levels(mesh);
2053 
2054  for (int l = n_levels - 1; l >= 0; --l)
2055  for (auto & elem : as_range(mesh.level_elements_begin(l),
2056  mesh.level_elements_end(l)))
2057  {
2058  libmesh_assert (elem);
2059  // Make sure we don't leave any invalid pointers
2060  const bool keep_me = elements_to_keep.count(elem);
2061 
2062  if (!keep_me)
2063  elem->make_links_to_me_remote();
2064 
2065  // delete_elem doesn't currently invalidate element
2066  // iterators... that had better not change
2067  if (!keep_me)
2068  mesh.delete_elem(elem);
2069  }
2070 
2071  // Delete all the nodes we have no reason to save
2072  for (auto & node : mesh.node_ptr_range())
2073  {
2074  libmesh_assert(node);
2075  if (!connected_nodes.count(node))
2076  mesh.delete_node(node);
2077  }
2078 
2079  // If we had a point locator, it's invalid now that some of the
2080  // elements it pointed to have been deleted.
2081  mesh.clear_point_locator();
2082 
2083  // Many of our constraint rows may have been for non-local parts of
2084  // the mesh, which we don't need, and which we didn't specifically
2085  // save dependencies for. The mesh deleted rows for remote nodes
2086  // when we deleted those, but let's delete rows for ghosted nodes.
2087  //
2088  // We can't do erasure inside a range for
2089  for (auto it = mesh.get_constraint_rows().begin(),
2090  end = mesh.get_constraint_rows().end();
2091  it != end;)
2092  {
2093  auto & row = *it;
2094  const Node * node = row.first;
2095  libmesh_assert(node == mesh.node_ptr(node->id()));
2096 
2097  if (node->processor_id() != mesh.processor_id())
2098  mesh.get_constraint_rows().erase(it++);
2099  else
2100  ++it;
2101  }
2102 
2103  // We now have all remote elements and nodes deleted; our ghosting
2104  // functors should be ready to delete any now-redundant cached data
2105  // they use too.
2106  for (auto & gf : as_range(mesh.ghosting_functors_begin(), mesh.ghosting_functors_end()))
2107  gf->delete_remote_elements();
2108 
2109 #ifdef DEBUG
2110  MeshTools::libmesh_assert_valid_refinement_tree(mesh);
2111  MeshTools::libmesh_assert_valid_constraint_rows(mesh);
2112 #endif
2113 }
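The deletion loop above works from the finest refinement level down so that parent/child links never dangle at intermediate steps. That pattern can be sketched standalone; the `Elem` struct and `delete_unkept` helper below are simplified stand-ins, not libMesh types:

```cpp
#include <algorithm>
#include <cassert>
#include <set>
#include <vector>

// Minimal stand-in for an element: an id and a refinement level.
struct Elem { int id; int level; };

// Delete every element not in 'keep', finest level first, mirroring
// the "for (int l = n_levels - 1; l >= 0; --l)" loop in the listing.
std::vector<Elem> delete_unkept(std::vector<Elem> elems,
                                const std::set<int> & keep)
{
  int n_levels = 0;
  for (const Elem & e : elems)
    n_levels = std::max(n_levels, e.level + 1);

  for (int l = n_levels - 1; l >= 0; --l)
    for (auto it = elems.begin(); it != elems.end();)
      if (it->level == l && !keep.count(it->id))
        it = elems.erase(it);   // analogous to mesh.delete_elem(elem)
      else
        ++it;
  return elems;
}
```

Deleting level-by-level means a child is always removed before its parent, which is exactly why the real code asserts that `delete_elem` does not invalidate element iterators.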

◆ find_global_indices()

template<typename ForwardIterator >
void libMesh::MeshCommunication::find_global_indices ( const Parallel::Communicator &  communicator,
const libMesh::BoundingBox &  bbox,
const ForwardIterator &  begin,
const ForwardIterator &  end,
std::vector< dof_id_type > &  index_map 
) const

This method determines a globally unique, partition-agnostic index for each object in the input range.

Definition at line 748 of file mesh_communication_global_indices.C.

References libMesh::Parallel::Sort< KeyType, IdxType >::bin(), communicator, distance(), libMesh::MeshTools::Generation::Private::idx(), libMesh::DofObject::invalid_processor_id, libMesh::libmesh_assert(), libMesh::libmesh_ignore(), libMesh::make_range(), libMesh::Real, and libMesh::Parallel::Sort< KeyType, IdxType >::sort().

Referenced by libMesh::ParmetisPartitioner::initialize(), libMesh::MetisPartitioner::partition_range(), and libMesh::Partitioner::partition_unpartitioned_elements().

753 {
754  LOG_SCOPE ("find_global_indices()", "MeshCommunication");
755 
756  // This method determines partition-agnostic global indices
757  // for nodes and elements.
758 
759  // Algorithm:
760  // (1) compute the Hilbert key for each local node/element
761  // (2) perform a parallel sort of the Hilbert key
762  // (3) get the min/max value on each processor
763  // (4) determine the position in the global ranking for
764  // each local object
765  index_map.clear();
766  std::size_t n_objects = std::distance (begin, end);
767  index_map.reserve(n_objects);
768 
769  const Point bboxinv = invert_bbox(bbox);
770 
771  //-------------------------------------------------------------
772  // (1) compute Hilbert keys
773  // These aren't trivial to compute, and we will need them again.
774  // But the binsort will sort the input vector, trashing the order
775  // that we'd like to rely on. So, two vectors...
776  std::vector<Parallel::DofObjectKey>
777  sorted_hilbert_keys,
778  hilbert_keys;
779  sorted_hilbert_keys.reserve(n_objects);
780  hilbert_keys.reserve(n_objects);
781  {
782  LOG_SCOPE("compute_hilbert_indices()", "MeshCommunication");
783  const processor_id_type my_pid = communicator.rank();
784  for (ForwardIterator it=begin; it!=end; ++it)
785  {
786  const Parallel::DofObjectKey hi(get_dofobject_key (**it, bbox, bboxinv));
787  hilbert_keys.push_back(hi);
788 
789  const processor_id_type pid = (*it)->processor_id();
790 
791  if (pid == my_pid)
792  sorted_hilbert_keys.push_back(hi);
793 
794  // someone needs to take care of unpartitioned objects!
795  if ((my_pid == 0) &&
796  (pid == DofObject::invalid_processor_id))
797  sorted_hilbert_keys.push_back(hi);
798  }
799  }
800 
801  //-------------------------------------------------------------
802  // (2) parallel sort the Hilbert keys
803  START_LOG ("parallel_sort()", "MeshCommunication");
804  Parallel::Sort<Parallel::DofObjectKey> sorter (communicator,
805  sorted_hilbert_keys);
806  sorter.sort();
807  STOP_LOG ("parallel_sort()", "MeshCommunication");
808  const std::vector<Parallel::DofObjectKey> & my_bin = sorter.bin();
809 
810  // The number of objects in my_bin on each processor
811  std::vector<unsigned int> bin_sizes(communicator.size());
812  communicator.allgather (static_cast<unsigned int>(my_bin.size()), bin_sizes);
813 
814  // The offset of my first global index
815  unsigned int my_offset = 0;
816  for (auto pid : make_range(communicator.rank()))
817  my_offset += bin_sizes[pid];
818 
819  //-------------------------------------------------------------
820  // (3) get the max value on each processor
821  std::vector<Parallel::DofObjectKey>
822  upper_bounds(1);
823 
824  if (!my_bin.empty())
825  upper_bounds[0] = my_bin.back();
826 
827  communicator.allgather (upper_bounds, /* identical_buffer_sizes = */ true);
828 
829  // Be careful here. The *_upper_bounds will be used to find the processor
830  // a given object belongs to. So, if a processor contains no objects (possible!)
831  // then copy the bound from the lower processor id.
832  for (auto p : IntRange<processor_id_type>(1, communicator.size()))
833  if (!bin_sizes[p]) upper_bounds[p] = upper_bounds[p-1];
834 
835 
836  //-------------------------------------------------------------
837  // (4) determine the position in the global ranking for
838  // each local object
839  {
840  //----------------------------------------------
841  // all objects, not just local ones
842 
843  // Request sets to send to each processor
844  std::map<processor_id_type, std::vector<Parallel::DofObjectKey>>
845  requested_ids;
846  // Results to gather from each processor
847  std::map<processor_id_type, std::vector<dof_id_type>>
848  filled_request;
849 
850  // build up list of requests
851  std::vector<Parallel::DofObjectKey>::const_iterator hi =
852  hilbert_keys.begin();
853 
854  for (ForwardIterator it = begin; it != end; ++it)
855  {
856  libmesh_assert (hi != hilbert_keys.end());
857 
858  std::vector<Parallel::DofObjectKey>::iterator lb =
859  std::lower_bound(upper_bounds.begin(), upper_bounds.end(),
860  *hi);
861 
862  const processor_id_type pid =
863  cast_int<processor_id_type>
864  (std::distance (upper_bounds.begin(), lb));
865 
866  libmesh_assert_less (pid, communicator.size());
867 
868  requested_ids[pid].push_back(*hi);
869 
870  ++hi;
871  // go ahead and put pid in index_map, that way we
872  // don't have to repeat the std::lower_bound()
873  index_map.push_back(pid);
874  }
875 
876  auto gather_functor =
877  [
878 #ifndef NDEBUG
879  & upper_bounds,
880  & communicator,
881 #endif
882  & bbox,
883  & my_bin,
884  my_offset
885  ]
886  (processor_id_type, const std::vector<Parallel::DofObjectKey> & keys,
887  std::vector<dof_id_type> & global_ids)
888  {
889  // Ignore unused lambda capture warnings in devel mode
890  libmesh_ignore(bbox);
891 
892  // Fill the requests
893  const std::size_t keys_size = keys.size();
894  global_ids.clear();
895  global_ids.reserve(keys_size);
896  for (std::size_t idx=0; idx != keys_size; idx++)
897  {
898  const Parallel::DofObjectKey & hilbert_indices = keys[idx];
899  libmesh_assert_less_equal (hilbert_indices, upper_bounds[communicator.rank()]);
900 
901  // find the requested index in my node bin
902  std::vector<Parallel::DofObjectKey>::const_iterator pos =
903  std::lower_bound (my_bin.begin(), my_bin.end(), hilbert_indices);
904  libmesh_assert (pos != my_bin.end());
905 #ifdef DEBUG
906  // If we could not find the requested Hilbert index in
907  // my_bin, something went terribly wrong, possibly the
908  // Mesh was displaced differently on different processors,
909  // and therefore the Hilbert indices don't agree.
910  if (*pos != hilbert_indices)
911  {
912  // The input will be hilbert_indices. We convert it
913  // to BitVecType using the operator= provided by the
914  // BitVecType class. BitVecType is a CBigBitVec!
915  Hilbert::BitVecType input;
916 #ifdef LIBMESH_ENABLE_UNIQUE_ID
917  input = hilbert_indices.first;
918 #else
919  input = hilbert_indices;
920 #endif
921 
922  // Get output in a vector of CBigBitVec
923  std::vector<CBigBitVec> output(3);
924 
925  // Call the indexToCoords function
926  Hilbert::indexToCoords(output.data(), 8*sizeof(Hilbert::inttype), 3, input);
927 
928  // The entries in the output racks are integers in the
929  // range [0, Hilbert::inttype::max] which can be
930  // converted to floating point values in [0,1] and
931  // finally to actual values using the bounding box.
932  const Real max_int_as_real =
933  static_cast<Real>(std::numeric_limits<Hilbert::inttype>::max());
934 
935  // Get the points in [0,1]^3. The zeroth rack of each entry in
936  // 'output' maps to the normalized x, y, and z locations,
937  // respectively.
938  Point p_hat(static_cast<Real>(output[0].racks()[0]) / max_int_as_real,
939  static_cast<Real>(output[1].racks()[0]) / max_int_as_real,
940  static_cast<Real>(output[2].racks()[0]) / max_int_as_real);
941 
942  // Convert the points from [0,1]^3 to their actual (x,y,z) locations
943  Real
944  xmin = bbox.first(0),
945  xmax = bbox.second(0),
946  ymin = bbox.first(1),
947  ymax = bbox.second(1),
948  zmin = bbox.first(2),
949  zmax = bbox.second(2);
950 
951  // Convert the points from [0,1]^3 to their actual (x,y,z) locations
952  Point p(xmin + (xmax-xmin)*p_hat(0),
953  ymin + (ymax-ymin)*p_hat(1),
954  zmin + (zmax-zmin)*p_hat(2));
955 
956  libmesh_error_msg("Could not find hilbert indices: "
957  << hilbert_indices
958  << " corresponding to point " << p);
959  }
960 #endif
961 
962  // Finally, assign the global index based off the position of the index
963  // in my array, properly offset.
964  global_ids.push_back (cast_int<dof_id_type>(std::distance(my_bin.begin(), pos) + my_offset));
965  }
966  };
967 
968  auto action_functor =
969  [&filled_request]
970  (processor_id_type pid,
971  const std::vector<Parallel::DofObjectKey> &,
972  const std::vector<dof_id_type> & global_ids)
973  {
974  filled_request[pid] = global_ids;
975  };
976 
977  const dof_id_type * ex = nullptr;
978  Parallel::pull_parallel_vector_data
979  (communicator, requested_ids, gather_functor, action_functor, ex);
980 
981  // We now have all the filled requests, so we can loop through our
982  // nodes once and assign the global index to each one.
983  {
984  std::vector<std::vector<dof_id_type>::const_iterator>
985  next_obj_on_proc; next_obj_on_proc.reserve(communicator.size());
986  for (auto pid : make_range(communicator.size()))
987  next_obj_on_proc.push_back(filled_request[pid].begin());
988 
989  unsigned int cnt=0;
990  for (ForwardIterator it = begin; it != end; ++it, cnt++)
991  {
992  const processor_id_type pid = cast_int<processor_id_type>
993  (index_map[cnt]);
994 
995  libmesh_assert_less (pid, communicator.size());
996  libmesh_assert (next_obj_on_proc[pid] != filled_request[pid].end());
997 
998  const dof_id_type global_index = *next_obj_on_proc[pid];
999  index_map[cnt] = global_index;
1000 
1001  ++next_obj_on_proc[pid];
1002  }
1003  }
1004  }
1005 
1006  libmesh_assert_equal_to(index_map.size(), n_objects);
1007 }
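In serial, the whole pipeline of steps (2)-(4) collapses to "the rank of each key in the sorted key sequence". A minimal sketch of that idea, with plain integer keys standing in for Hilbert `DofObjectKey`s and `global_indices` as a hypothetical helper name:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Map each key to its rank in the sorted key sequence -- the serial
// analogue of the parallel binsort + offset computation above.
std::vector<std::size_t> global_indices(const std::vector<int> & keys)
{
  // (2) the "parallel sort" step: in serial, just sort a copy
  std::vector<int> sorted(keys);
  std::sort(sorted.begin(), sorted.end());

  // (4) each object's global index is its position in sorted order,
  // found by std::lower_bound as in the gather functor above
  std::vector<std::size_t> index_map;
  index_map.reserve(keys.size());
  for (int k : keys)
    index_map.push_back(static_cast<std::size_t>(
      std::lower_bound(sorted.begin(), sorted.end(), k) - sorted.begin()));
  return index_map;
}
```

The real implementation distributes `sorted` across processors, so it needs the `upper_bounds` array to decide which processor's bin to query before doing the same `lower_bound` lookup remotely.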

◆ find_local_indices()

template<typename ForwardIterator >
void libMesh::MeshCommunication::find_local_indices ( const libMesh::BoundingBox &  bbox,
const ForwardIterator &  begin,
const ForwardIterator &  end,
std::unordered_map< dof_id_type, dof_id_type > &  index_map 
) const

This method determines a locally unique, contiguous index for each object in the input range.

Definition at line 710 of file mesh_communication_global_indices.C.

Referenced by libMesh::Partitioner::_find_global_index_by_pid_map().

714 {
715  LOG_SCOPE ("find_local_indices()", "MeshCommunication");
716 
717  // This method determines id-agnostic local indices
718  // for nodes and elements by sorting Hilbert keys.
719 
720  index_map.clear();
721 
722  const Point bboxinv = invert_bbox(bbox);
723 
724  //-------------------------------------------------------------
725  // (1) compute Hilbert keys
726  // These aren't trivial to compute, and we will need them again.
727  // But the binsort will sort the input vector, trashing the order
728  // that we'd like to rely on. So, two vectors...
729  std::map<Parallel::DofObjectKey, dof_id_type> hilbert_keys;
730  {
731  LOG_SCOPE("local_hilbert_indices", "MeshCommunication");
732  for (ForwardIterator it=begin; it!=end; ++it)
733  {
734  const Parallel::DofObjectKey hi(get_dofobject_key ((**it), bbox, bboxinv));
735  hilbert_keys.emplace(hi, (*it)->id());
736  }
737  }
738 
739  {
740  dof_id_type cnt = 0;
741  for (auto key_val : hilbert_keys)
742  index_map[key_val.second] = cnt++;
743  }
744 }
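The listing's core trick can be shown in isolation: an ordered `std::map` keyed on the (here integer) Hilbert key sorts the objects automatically, and one pass assigns contiguous indices in key order. A small sketch, not the libMesh implementation:

```cpp
#include <cassert>
#include <map>
#include <unordered_map>
#include <utility>
#include <vector>

// Assign contiguous local indices to objects ordered by key,
// mirroring the hilbert_keys map + counter loop in the listing.
std::unordered_map<int, int> local_indices(
    const std::vector<std::pair<int, int>> & key_id_pairs)
{
  // The ordered map sorts entries by key on insertion
  std::map<int, int> by_key(key_id_pairs.begin(), key_id_pairs.end());

  // One pass assigns 0, 1, 2, ... in key order, keyed by object id
  std::unordered_map<int, int> index_map;
  int cnt = 0;
  for (const auto & kv : by_key)
    index_map[kv.second] = cnt++;
  return index_map;
}
```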

◆ gather()

void libMesh::MeshCommunication::gather ( const processor_id_type  root_id,
MeshBase &  mesh 
) const

This method takes an input DistributedMesh which may be distributed among all the processors.

Each processor then sends its local nodes and elements to processor root_id. The end result is that a previously distributed DistributedMesh will be serialized on processor root_id. Since this method is collective it must be called by all processors. For the special case of root_id equal to DofObject::invalid_processor_id this function performs an allgather.

Definition at line 1153 of file mesh_communication.C.

Referenced by allgather().

1154 {
1155  // no MPI == one processor, no need for this method...
1156  return;
1157 }

◆ gather_neighboring_elements()

void libMesh::MeshCommunication::gather_neighboring_elements ( DistributedMesh &  mesh) const

Definition at line 513 of file mesh_communication.C.

Referenced by libMesh::Nemesis_IO::read().

514 {
515  // no MPI == one processor, no need for this method...
516  return;
517 }

◆ make_elems_parallel_consistent()

void libMesh::MeshCommunication::make_elems_parallel_consistent ( MeshBase &  mesh)

Copy ids of ghost elements from their local processors.

Definition at line 1634 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), mesh, libMesh::MeshBase::query_elem_ptr(), libMesh::MeshBase::renumber_elem(), libMesh::Parallel::sync_dofobject_data_by_id(), and libMesh::Parallel::sync_element_data_by_parent_id().

Referenced by libMesh::MeshRefinement::_refine_elements().

1635 {
1636  // This function must be run on all processors at once
1637  libmesh_parallel_only(mesh.comm());
1638 
1639  LOG_SCOPE ("make_elems_parallel_consistent()", "MeshCommunication");
1640 
1641  SyncIds syncids(mesh, &MeshBase::renumber_elem);
1642  Parallel::sync_element_data_by_parent_id
1643  (mesh, mesh.active_elements_begin(),
1644  mesh.active_elements_end(), syncids);
1645 
1646 #ifdef LIBMESH_ENABLE_UNIQUE_ID
1647  SyncUniqueIds<Elem> syncuniqueids(mesh, &MeshBase::query_elem_ptr);
1648  Parallel::sync_dofobject_data_by_id
1649  (mesh.comm(), mesh.active_elements_begin(),
1650  mesh.active_elements_end(), syncuniqueids);
1651 #endif
1652 }
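Underneath `sync_dofobject_data_by_id`, the essential pattern is that each ghost copy looks up the authoritative value by a stable key and overwrites its own. A serial sketch with plain maps; `sync_by_id` and the int-valued maps are hypothetical stand-ins for the communicated id data:

```cpp
#include <cassert>
#include <map>

// Overwrite ghost-side values with the owner's authoritative values,
// matched by a stable key -- the essence of a sync-by-id pass.
void sync_by_id(std::map<int, int> & ghost_values,
                const std::map<int, int> & owner_values)
{
  for (auto & kv : ghost_values)
  {
    auto it = owner_values.find(kv.first);
    if (it != owner_values.end())
      kv.second = it->second;   // pull the owner's authoritative copy
  }
}
```

In the real code the "stable key" is the element's parent id plus child number (or the dofobject id), and the lookup crosses processor boundaries via MPI rather than a local map.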

◆ make_new_node_proc_ids_parallel_consistent()

void libMesh::MeshCommunication::make_new_node_proc_ids_parallel_consistent ( MeshBase &  mesh)

Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes processor ids on new nodes on other processors parallel consistent as well.

Definition at line 1814 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::DofObject::invalid_processor_id, libMesh::MeshTools::libmesh_assert_parallel_consistent_new_node_procids(), libMesh::MeshTools::libmesh_assert_parallel_consistent_procids< Node >(), mesh, libMesh::Elem::node_ref(), libMesh::Elem::node_ref_range(), libMesh::DofObject::processor_id(), libMesh::Parallel::sync_node_data_by_element_id(), and libMesh::Parallel::sync_node_data_by_element_id_once().

Referenced by make_new_nodes_parallel_consistent().

1815 {
1816  LOG_SCOPE ("make_new_node_proc_ids_parallel_consistent()", "MeshCommunication");
1817 
1818  // This function must be run on all processors at once
1819  libmesh_parallel_only(mesh.comm());
1820 
1821  // When this function is called, each section of a parallelized mesh
1822  // should be in the following state:
1823  //
1824  // Local nodes should have unique authoritative ids,
1825  // and new nodes should be unpartitioned.
1826  //
1827  // New ghost nodes touching local elements should be unpartitioned.
1828 
1829  // We may not have consistent processor ids for new nodes (because a
1830  // node may be old and partitioned on one processor but new and
1831  // unpartitioned on another) when we start
1832 #ifdef DEBUG
1834  // MeshTools::libmesh_assert_parallel_consistent_new_node_procids(mesh);
1835 #endif
1836 
1837  // We have two kinds of new nodes. *NEW* nodes are unpartitioned on
1838  // all processors: we need to use a id-independent (i.e. dumb)
1839  // heuristic to partition them. But "new" nodes are newly created
1840  // on some processors (when ghost elements are refined) yet
1841  // correspond to existing nodes on other processors: we need to use
1842  // the existing processor id for them.
1843  //
1844  // A node which is "new" on one processor will be associated with at
1845  // least one ghost element, and we can just query that ghost
1846  // element's owner to find out the correct processor id.
1847 
1848  auto node_unpartitioned =
1849  [](const Elem * elem, unsigned int local_node_num)
1850  { return elem->node_ref(local_node_num).processor_id() ==
1851  DofObject::invalid_processor_id; };
1852 
1853  SyncProcIds sync(mesh);
1854 
1855  Parallel::sync_node_data_by_element_id_once
1856  (mesh, mesh.not_local_elements_begin(),
1857  mesh.not_local_elements_end(), Parallel::SyncEverything(),
1858  node_unpartitioned, sync);
1859 
1860  // Nodes should now be unpartitioned iff they are truly new; those
1861  // are the *only* nodes we will touch.
1862 #ifdef DEBUG
1863  MeshTools::libmesh_assert_parallel_consistent_new_node_procids(mesh);
1864 #endif
1865 
1866  NodeWasNew node_was_new(mesh);
1867 
1868  // Set the lowest processor id we can on truly new nodes
1869  for (auto & elem : mesh.element_ptr_range())
1870  for (auto & node : elem->node_ref_range())
1871  if (node_was_new.was_new.count(&node))
1872  {
1873  processor_id_type & pid = node.processor_id();
1874  pid = std::min(pid, elem->processor_id());
1875  }
1876 
1877  // Then finally see if other processors have a lower option
1878  Parallel::sync_node_data_by_element_id
1879  (mesh, mesh.elements_begin(), mesh.elements_end(),
1880  ElemNodesMaybeNew(), node_was_new, sync);
1881 
1882  // We should have consistent processor ids when we're done.
1883 #ifdef DEBUG
1884  MeshTools::libmesh_assert_parallel_consistent_procids<Node>(mesh);
1885 
1886 #endif
1887 }
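The "lowest touching element wins" heuristic for truly new nodes can be sketched without any mesh machinery: each element proposes its own owner for every new node it touches, and the minimum proposal is kept. `SimpleElem` and `assign_new_node_pids` below are illustrative stand-ins, not libMesh types:

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <set>
#include <vector>

// Minimal stand-in: an element owned by 'pid' touching some node ids.
struct SimpleElem { unsigned pid; std::vector<int> nodes; };

// For each node in 'was_new', take the smallest processor id among the
// elements touching it, mirroring the loop in the listing above.
std::map<int, unsigned> assign_new_node_pids(
    const std::vector<SimpleElem> & elems,
    const std::set<int> & was_new,
    unsigned invalid_pid)
{
  std::map<int, unsigned> node_pid;
  for (const SimpleElem & e : elems)
    for (int n : e.nodes)
      if (was_new.count(n))
      {
        // Start from "unpartitioned", then keep the lowest proposal,
        // like "pid = std::min(pid, elem->processor_id())" above
        auto it = node_pid.emplace(n, invalid_pid).first;
        it->second = std::min(it->second, e.pid);
      }
  return node_pid;
}
```

Because every processor applies the same minimum rule, a final sync pass over all elements is enough to make the result globally consistent, which is what the trailing `sync_node_data_by_element_id` call does.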

◆ make_new_nodes_parallel_consistent()

void libMesh::MeshCommunication::make_new_nodes_parallel_consistent ( MeshBase &  mesh)

Copy processor_ids and ids on new nodes from their local processors.

Definition at line 1937 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshTools::correct_node_proc_ids(), make_new_node_proc_ids_parallel_consistent(), make_node_bcids_parallel_consistent(), make_node_ids_parallel_consistent(), make_node_unique_ids_parallel_consistent(), and mesh.

Referenced by libMesh::MeshRefinement::_refine_elements().

1938 {
1939  // This function must be run on all processors at once
1940  libmesh_parallel_only(mesh.comm());
1941 
1942  // When this function is called, each section of a parallelized mesh
1943  // should be in the following state:
1944  //
1945  // All nodes should have the exact same physical location on every
1946  // processor where they exist.
1947  //
1948  // Local nodes should have unique authoritative ids,
1949  // and new nodes should be unpartitioned.
1950  //
1951  // New ghost nodes touching local elements should be unpartitioned.
1952  //
1953  // New ghost nodes should have ids which are either already correct
1954  // or which are in the "unpartitioned" id space.
1955  //
1956  // Non-new nodes should have correct ids and processor ids already.
1957 
1958  // First, let's sync up new nodes' processor ids.
1959 
1960  make_new_node_proc_ids_parallel_consistent(mesh);
1961 
1962  // Second, sync up dofobject ids.
1963  make_node_ids_parallel_consistent(mesh);
1964 
1965  // Third, sync up dofobject unique_ids if applicable.
1966  make_node_unique_ids_parallel_consistent(mesh);
1967 
1968  // Fourth, sync up any nodal boundary conditions
1969  make_node_bcids_parallel_consistent(mesh);
1970 
1971  // Finally, correct the processor ids to make DofMap happy
1972  MeshTools::correct_node_proc_ids(mesh);
1973 }

◆ make_node_bcids_parallel_consistent()

void libMesh::MeshCommunication::make_node_bcids_parallel_consistent ( MeshBase &  mesh)

Assuming all processor ids are parallel consistent, this function makes all ghost boundary ids on nodes parallel consistent.

Definition at line 1612 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::libmesh_ignore(), mesh, and libMesh::Parallel::sync_dofobject_data_by_id().

Referenced by make_new_nodes_parallel_consistent().

1613 {
1614  // Avoid unused variable warnings if unique ids aren't enabled.
1615  libmesh_ignore(mesh);
1616 
1617  // This function must be run on all processors at once
1618  libmesh_parallel_only(mesh.comm());
1619 
1620  LOG_SCOPE ("make_node_bcids_parallel_consistent()", "MeshCommunication");
1621 
1622  SyncBCIds<Node> syncbcids(mesh);
1623  Parallel::sync_dofobject_data_by_id (mesh.comm(),
1624  mesh.nodes_begin(),
1625  mesh.nodes_end(),
1626  syncbcids);
1627 }
MeshBase & mesh
const Parallel::Communicator & comm() const
void libmesh_ignore(const Args &...)
void sync_dofobject_data_by_id(const Communicator &comm, const Iterator &range_begin, const Iterator &range_end, SyncFunctor &sync)
Request data about a range of ghost dofobjects uniquely identified by their id.

◆ make_node_ids_parallel_consistent()

void libMesh::MeshCommunication::make_node_ids_parallel_consistent ( MeshBase &  mesh)

Assuming all ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all other ids parallel consistent.

Definition at line 1562 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshTools::libmesh_assert_topology_consistent_procids< Node >(), mesh, and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by make_new_nodes_parallel_consistent(), and make_nodes_parallel_consistent().

{
  // This function must be run on all processors at once
  libmesh_parallel_only(mesh.comm());

  // We need to agree on which processor owns every node, but we can't
  // easily assert that here because we don't currently agree on which
  // id every node has, and some of our temporary ids on unrelated
  // nodes will "overlap".
//#ifdef DEBUG
//  MeshTools::libmesh_assert_parallel_consistent_procids<Node> (mesh);
//#endif // DEBUG

  LOG_SCOPE ("make_node_ids_parallel_consistent()", "MeshCommunication");

  SyncNodeIds syncids(mesh);
  Parallel::sync_node_data_by_element_id
    (mesh, mesh.elements_begin(), mesh.elements_end(),
     // elem_check and node_check functor arguments elided in this listing
     syncids);

  // At this point, with both ids and processor ids synced, we can
  // finally check for topological consistency of node processor ids.
#ifdef DEBUG
  MeshTools::libmesh_assert_topology_consistent_procids<Node> (mesh);
#endif
}

◆ make_node_proc_ids_parallel_consistent()

void libMesh::MeshCommunication::make_node_proc_ids_parallel_consistent (MeshBase & mesh)

Assuming all processor ids on nodes touching local elements are parallel consistent, this function makes all other processor ids parallel consistent as well.

Definition at line 1785 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), mesh, and libMesh::Parallel::sync_node_data_by_element_id().

Referenced by make_nodes_parallel_consistent().

{
  LOG_SCOPE ("make_node_proc_ids_parallel_consistent()", "MeshCommunication");

  // This function must be run on all processors at once
  libmesh_parallel_only(mesh.comm());

  // When this function is called, each section of a parallelized mesh
  // should be in the following state:
  //
  // All nodes should have the exact same physical location on every
  // processor where they exist.
  //
  // Local nodes should have unique authoritative ids,
  // and processor ids consistent with all processors which own
  // an element touching them.
  //
  // Ghost nodes touching local elements should have processor ids
  // consistent with all processors which own an element touching
  // them.
  SyncProcIds sync(mesh);
  Parallel::sync_node_data_by_element_id
    (mesh, mesh.elements_begin(), mesh.elements_end(),
     // elem_check and node_check functor arguments elided in this listing
     sync);
}

◆ make_node_unique_ids_parallel_consistent()

void libMesh::MeshCommunication::make_node_unique_ids_parallel_consistent (MeshBase & mesh)

Assuming all unique_ids on local nodes are globally unique, and assuming all processor ids are parallel consistent, this function makes all ghost unique_ids parallel consistent.

Definition at line 1591 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::libmesh_ignore(), mesh, libMesh::MeshBase::query_node_ptr(), and libMesh::Parallel::sync_dofobject_data_by_id().

Referenced by libMesh::BoundaryInfo::add_elements(), make_new_nodes_parallel_consistent(), make_nodes_parallel_consistent(), and libMesh::Nemesis_IO::read().

{
  // Avoid unused variable warnings if unique ids aren't enabled.
  libmesh_ignore(mesh);

  // This function must be run on all processors at once
  libmesh_parallel_only(mesh.comm());

#ifdef LIBMESH_ENABLE_UNIQUE_ID
  LOG_SCOPE ("make_node_unique_ids_parallel_consistent()", "MeshCommunication");

  SyncUniqueIds<Node> syncuniqueids(mesh, &MeshBase::query_node_ptr);
  Parallel::sync_dofobject_data_by_id(mesh.comm(),
                                      mesh.nodes_begin(),
                                      mesh.nodes_end(),
                                      syncuniqueids);
#endif
}

◆ make_nodes_parallel_consistent()

void libMesh::MeshCommunication::make_nodes_parallel_consistent (MeshBase & mesh)

Copy processor_ids and ids on ghost nodes from their local processors.

This is useful for code which wants to add nodes to a distributed mesh.

Definition at line 1892 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), libMesh::MeshTools::correct_node_proc_ids(), make_node_ids_parallel_consistent(), make_node_proc_ids_parallel_consistent(), make_node_unique_ids_parallel_consistent(), and mesh.

Referenced by libMesh::MeshRefinement::_coarsen_elements(), and libMesh::MeshTools::Modification::all_tri().

{
  // This function must be run on all processors at once
  libmesh_parallel_only(mesh.comm());

  // When this function is called, each section of a parallelized mesh
  // should be in the following state:
  //
  // Element ids and locations should be consistent on every processor
  // where they exist.
  //
  // All nodes should have the exact same physical location on every
  // processor where they exist.
  //
  // Local nodes should have unique authoritative ids,
  // and processor ids consistent with all processors which own
  // an element touching them.
  //
  // Ghost nodes touching local elements should have processor ids
  // consistent with all processors which own an element touching
  // them.
  //
  // Ghost nodes should have ids which are either already correct
  // or which are in the "unpartitioned" id space.

  // First, let's sync up processor ids.  Some of these processor ids
  // may be "wrong" from coarsening, but they're right in the sense
  // that they'll tell us who has the authoritative dofobject ids for
  // each node.
  this->make_node_proc_ids_parallel_consistent(mesh);

  // Second, sync up dofobject ids.
  this->make_node_ids_parallel_consistent(mesh);

  // Third, sync up dofobject unique_ids if applicable.
  this->make_node_unique_ids_parallel_consistent(mesh);

  // Finally, correct the processor ids to make DofMap happy
  MeshTools::correct_node_proc_ids(mesh);
}

◆ make_p_levels_parallel_consistent()

void libMesh::MeshCommunication::make_p_levels_parallel_consistent (MeshBase & mesh)

Copy p levels of ghost elements from their local processors.

Definition at line 1658 of file mesh_communication.C.

References libMesh::ParallelObject::comm(), mesh, and libMesh::Parallel::sync_dofobject_data_by_id().

Referenced by libMesh::MeshRefinement::_coarsen_elements(), and libMesh::MeshRefinement::_refine_elements().

{
  // This function must be run on all processors at once
  libmesh_parallel_only(mesh.comm());

  LOG_SCOPE ("make_p_levels_parallel_consistent()", "MeshCommunication");

  SyncPLevels syncplevels(mesh);
  Parallel::sync_dofobject_data_by_id
    (mesh.comm(), mesh.elements_begin(), mesh.elements_end(),
     syncplevels);
}

◆ redistribute()

void libMesh::MeshCommunication::redistribute (DistributedMesh & mesh, bool newly_coarsened_only = false) const

This method takes a parallel distributed mesh and redistributes the elements.

Specifically, any elements stored on a given processor are sent to the processor which "owns" them. Similarly, any elements assigned to the current processor but stored on another are received. Once this step is completed, any required ghost elements are updated. The final result is that each processor stores only the elements it actually owns, plus any ghost elements required to satisfy data dependencies. This method can be invoked after a partitioning step to effect the new partitioning.

Redistribution can also be done with newly coarsened elements' neighbors only.

Definition at line 339 of file mesh_communication.C.

Referenced by libMesh::DistributedMesh::redistribute().

{
  // no MPI == one processor, no redistribution
  return;
}

◆ send_coarse_ghosts()

void libMesh::MeshCommunication::send_coarse_ghosts (MeshBase & mesh) const

Examine a just-coarsened mesh, and for any newly-coarsened elements, send the associated ghosted elements to the processor which needs them.

Definition at line 892 of file mesh_communication.C.

Referenced by libMesh::MeshRefinement::_coarsen_elements().

{
  // no MPI == one processor, no need for this method...
  return;
}

The documentation for this class was generated from the following files: