MOOSE Newsletter (July 2019)

MOOSE Workshop

The MOOSE workshop slides have been updated: MOOSE Workshop.

Automatic Scaling

We have added the ability to automatically scale variable blocks in the preconditioning matrix in order to improve the convergence of linear solves. If the user sets automatic_scaling=true in their Executioner block, then MOOSE will run an initial Jacobian calculation using just the on-diagonal computeQpJacobian statements from Kernel objects. MOOSE then inspects the resulting matrix and scales each variable's residual and Jacobian entries in subsequent calculations by the reciprocal of the largest-magnitude diagonal entry corresponding to that variable. Consequently, the largest diagonal entry for each variable should be of order unity. The user can also elect to perform the scaling calculation at every time step by setting compute_scaling_once=false in the Executioner block (by default, scaling factors are only calculated at the initial time step). This can be useful in a transient simulation where the physics may change substantially over time.
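For example, a minimal Executioner block enabling these options might look like the following sketch (the transient settings shown are illustrative only):

[Executioner]
  type = Transient
  solve_type = NEWTON

  # Scale each variable by the reciprocal of its largest on-diagonal Jacobian entry
  automatic_scaling = true
  # Recompute the scaling factors at every time step (default: initial time step only)
  compute_scaling_once = false

  dt = 0.1
  num_steps = 10
[]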

Mortar Contact

Mortar-based mechanical contact has been added to the Contact module. A good summary of its performance for a couple of test cases is given in the Contact module summary. For one test case, optimized solver options yield 476 cumulative nonlinear iterations for mortar-based contact vs. 938 iterations for penalty contact, nearly a factor-of-two reduction. The current mortar-based contact cases make use of Lagrange multipliers and nonlinear complementarity functions.

Array variable

A new variable type, ArrayMooseVariable, has been added, along with a set of objects that make use of it, including array kernels, array boundary conditions, etc. An array variable is defined as a set of standard field variables with the same finite element family and order. Each standard variable of an array variable is referred to as a component of the array variable.

An array variable can be added as:

[Variables]
  [u]
    order = FIRST
    family = LAGRANGE
    components = 2
  []
[]
(test/tests/kernels/array_kernels/array_diffusion_test.i)

by specifying the number of components with the "components" parameter. The default value is one, meaning that the variable is either a standard variable or a vector variable, depending on the family.

An array kernel is a MOOSE kernel that operates on an array variable and assembles the residuals and Jacobians for all of its components. A sample array kernel can be found in ArrayDiffusion. Similarly to array kernels, we have array initial conditions, array boundary conditions, array DG kernels, array interface kernels, array constraints, etc. A complete input file using an array variable can be found in array_diffusion_test.i. Array variables can be coupled into standard kernels and vice versa, as shown in array_diffusion_reaction_coupling.i and array_diffusion_reaction_other_coupling.i.
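As an illustration, a kernel block operating on the two-component variable declared above might look like the following sketch based on ArrayDiffusion (the material property name dc is an assumption):

[Kernels]
  [diff]
    type = ArrayDiffusion
    variable = u
    # 'dc' is an array material property holding one diffusion
    # coefficient per component of 'u'
    diffusion_coefficient = dc
  []
[]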

The purpose of having array variables is to reduce the number of kernels, BCs, etc. when the number of components of an array variable is large (potentially hundreds or thousands), and to avoid the duplicated operations that would otherwise be performed by many standard kernels. Array variables can be useful for radiation transport, where an extra independent direction variable can result in a large number of standard variables.

Several MOOSE objects using array variables are now available. Application developers can derive new array kernels, boundary conditions, DG kernels, and initial conditions using them as templates. Array interface kernels, auxiliary kernels, etc. can be added in the future.

Two simple generic constant materials were also added that can be used in array arithmetic operations.
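For instance, a constant array material property could be declared along these lines (a sketch assuming one of these objects is named GenericConstantArray; the property name dc matches the kernel sketch above):

[Materials]
  [dc_mat]
    type = GenericConstantArray
    prop_name = dc
    # One constant value per component of the array variable
    prop_value = '1 2'
  []
[]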

Robust Race-condition checking in the TestHarness

One of the more common errors in regression-test writing is the accidental creation of output-file race conditions, where multiple tests use the same input file or multiple input files write to the same output file. While the TestHarness has long had the ability to check for common race conditions, we found that users were able to sneak in unexpected race conditions quite frequently. Our MOOSE intern (Samuel Tew) has recently completed a "robust race-condition" checker that is now checked in and is being used on all PRs to MOOSE. The new system works by isolating each test within its directory and inspecting the file system before and after each test runs, capturing all file-system activity. This makes it possible to look at the tests that may run concurrently and cross-examine them against the outputs created when those tests run separately, looking for conflicts. You will be notified by CIVET when a potential race condition is detected. You may also run this mode locally by using the -d option to the run_tests script.

TimedPrint

A new utility has been added to MOOSE that will conditionally print progress information in the form of timed dots on the terminal when specific blocks of code take an "extended" period of time. At the completion of the timed block of code, an elapsed time is also printed giving the user information about which activity is consuming non-trivial amounts of time. No information is printed if the section of code executes quickly. Developers may use this capability in blocks of code that are expected to take significant amounts of time. Note: This capability should not be used in frequently called sections of code.


#include "TimedPrint.h"

...

{
  CONSOLE_TIMED_PRINT("Running a time-consuming algorithm");
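  // ... long-running work goes here; progress dots appear while this scope is open ...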

} // Dots are printed until the closing scope is reached as time elapses

Improved Build Parallelism on GNU Make 4.0+

Makefile rules have been improved so that additional parallelism is possible when using systems with a newer GNU Make installed. Users on many flavors of Linux may notice significant reductions in build times when building applications that use several libraries or modules in MOOSE.

SQA Push

The MOOSE team is making a big push to document all tests in the moose/test folder. This activity started in June and will continue through July. As part of this effort, we have extended and improved the syntax available for documenting tests. If you are working on SQA, please review the requirement section in Documenting MOOSE.

Supported scalar kernels in Eigen system

The eigenvalue system is intended to be general enough to handle systems from different engineering areas. In some use cases, scalar kernels are employed to represent the physics. We added a new capability that enhances the eigen system to handle scalar kernels. Scalar kernels are now computed properly, and an algebraic system of eigen equations is produced, which is passed to SLEPc for computing eigenvalues. Note that users need to tag the right-hand side of the problem if the target application is a generalized eigenvalue problem.
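For a generalized eigenvalue problem, the right-hand-side contribution is tagged in the input file; a rough sketch is shown below (the scalar kernel type and variable name are placeholders, and the 'eigen' vector tag is assumed):

[ScalarKernels]
  [rhs_term]
    type = MyEigenScalarKernel     # hypothetical scalar kernel contributing to the right-hand side
    variable = lambda
    extra_vector_tags = 'eigen'    # mark this contribution as part of the eigen (B) system
  []
[]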

First order L2 Lagrange variable on MultiApp transfer

For elemental variables, MOOSE used to handle only constant monomial variables when transferring field variables between applications, which was implemented by sampling the centroids of elements. An enhancement has been added to MOOSE to transfer elemental first-order L2 Lagrange variables by sampling solution values at all element nodes and transferring them between applications. Note that elemental variables of order higher than first are not supported.
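For reference, a first-order L2 Lagrange variable that can now be transferred would be declared along these lines (the variable name is arbitrary):

[AuxVariables]
  [v]
    # Elemental (discontinuous) first-order Lagrange variable
    order = FIRST
    family = L2_LAGRANGE
  []
[]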