The only remaining non-discretized parts of the weak form are the integrals.
We split the domain integral into a sum of integrals over the elements $\Omega_e$:
$$\int_\Omega f(\vec{x})\,d\vec{x} = \sum_e \int_{\Omega_e} f(\vec{x})\,d\vec{x}$$
Through a change of variables, the element integrals are mapped to integrals over the "reference" elements $\hat{\Omega}_e$:
$$\sum_e \int_{\Omega_e} f(\vec{x})\,d\vec{x} = \sum_e \int_{\hat{\Omega}_e} f(\vec{\xi})\,\left|\mathcal{J}_e\right|\,d\vec{\xi}$$
$\mathcal{J}_e$ is the Jacobian of the map from the physical element to the reference element.
To approximate the reference element integrals numerically, we use quadrature (typically "Gaussian quadrature"):
$$\sum_e \int_{\hat{\Omega}_e} f(\vec{\xi})\,\left|\mathcal{J}_e\right|\,d\vec{\xi} \approx \sum_e \sum_q w_q\, f(\vec{x}_q)\,\left|\mathcal{J}_e(\vec{x}_q)\right|$$
$\vec{x}_q$ is the spatial location of the $q$th quadrature point and $w_q$ is its associated weight.
MOOSE handles multiplication by the Jacobian and the weight automatically; thus your
Kernel is only responsible for computing the $f(\vec{x}_q)$ part of the integrand.
Under certain common situations, the quadrature approximation is exact! For example, in one dimension, Gaussian quadrature with $n$ quadrature points can exactly integrate polynomials of order $2n - 1$.
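This exactness is easy to verify numerically. The sketch below (not MOOSE code; the fifth-order polynomial is an arbitrary choice) uses NumPy's Gauss-Legendre rule with $n = 3$ points, which is exact through order $2n - 1 = 5$:

```python
import numpy as np

# 3-point Gauss-Legendre rule on the reference interval [-1, 1]:
# exact for polynomials up to order 2n - 1 = 5.
n = 3
points, weights = np.polynomial.legendre.leggauss(n)

# Arbitrary fifth-order polynomial: f(x) = x^5 + 2x^2.
# Exact integral over [-1, 1] is 4/3 (the odd term integrates to zero).
f = lambda x: x**5 + 2 * x**2
approx = np.sum(weights * f(points))
assert abs(approx - 4.0 / 3.0) < 1e-12
```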
Note that sampling $u_h$ at the quadrature points yields:
$$u_h(\vec{x}_q) = \sum_j u_j \phi_j(\vec{x}_q), \qquad \nabla u_h(\vec{x}_q) = \sum_j u_j \nabla\phi_j(\vec{x}_q)$$
And our weak form becomes:
$$R_i(u_h) = \sum_e \sum_q w_q\,\left|\mathcal{J}_e\right| \left[\nabla\psi_i \cdot k\nabla u_h + \psi_i\,(\vec{\beta} \cdot \nabla u_h) - \psi_i f\right](\vec{x}_q) + \sum_f \sum_{q_{\text{face}}} w_{q_{\text{face}}}\,\left|\mathcal{J}_f\right| \left[-\psi_i\, k\nabla u_h \cdot \hat{n}\right](\vec{x}_{q_{\text{face}}}) = 0$$
The second sum is over boundary faces, $f$.
Kernels must provide each of the terms in square brackets (evaluated at $\vec{x}_q$ or $\vec{x}_{q_{\text{face}}}$ as necessary).
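To make the element-loop/quadrature-loop structure concrete, here is a minimal 1D sketch (not MOOSE code; the uniform mesh, pure diffusion, and forcing $f = 1$ are all assumptions made for illustration) that assembles $R_i$ element by element and quadrature point by quadrature point:

```python
import numpy as np

# Minimal 1D sketch: assemble the residual of -u'' = f with f = 1, linear
# elements, and 2-point Gauss quadrature, mirroring the double sum
# R_i = sum_e sum_q w_q |J_e| [ grad(psi_i) . grad(u_h) - psi_i f ](x_q).
nodes = np.linspace(0.0, 1.0, 5)             # 4 equal elements (assumed mesh)
qp, qw = np.polynomial.legendre.leggauss(2)  # reference element [-1, 1]

def residual(u):
    R = np.zeros_like(u)
    for e in range(len(nodes) - 1):          # loop over elements
        h = nodes[e + 1] - nodes[e]
        detJ = h / 2.0                       # |J_e| for the map [-1,1] -> element
        for xi, w in zip(qp, qw):            # loop over quadrature points
            phi = np.array([(1 - xi) / 2, (1 + xi) / 2])  # linear shape functions
            dphi = np.array([-1.0, 1.0]) / h              # physical gradients
            grad_u = dphi @ u[e:e + 2]
            R[e:e + 2] += w * detJ * (dphi * grad_u - phi * 1.0)
    return R

# On this uniform mesh, the nodal interpolant of the exact solution
# u = x(1 - x)/2 zeroes the interior residual entries.
u_exact = nodes * (1.0 - nodes) / 2.0
assert np.allclose(residual(u_exact)[1:-1], 0.0, atol=1e-12)
```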
We now have a nonlinear system of equations, $\vec{R}(\vec{u}) = \vec{0}$, to solve for the coefficients $u_j$.
Newton's method has good convergence properties, so we use it to solve this system of nonlinear equations.
Newton's method is a "root finding" method: it finds zeros of nonlinear equations.
Newton's Method in "Update Form" for finding roots of the scalar equation $f(x) = 0$:
$$f'(x_n)\,\delta x_{n+1} = -f(x_n), \qquad x_{n+1} = x_n + \delta x_{n+1}$$
We don't have just one scalar equation: we have a system of nonlinear equations.
This leads to the following form of Newton's Method:
$$\mathbf{J}(\vec{u}_n)\,\delta\vec{u}_{n+1} = -\vec{R}(\vec{u}_n), \qquad \vec{u}_{n+1} = \vec{u}_n + \delta\vec{u}_{n+1}$$
where $\mathbf{J}(\vec{u}_n)$ is the Jacobian matrix evaluated at the current iterate:
$$J_{ij}(\vec{u}_n) = \frac{\partial R_i(\vec{u}_n)}{\partial u_j}$$
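A minimal sketch of this iteration in Python (the 2×2 test system and its Jacobian are invented for illustration, not taken from MOOSE):

```python
import numpy as np

def newton(residual, jacobian, u0, tol=1e-10, max_iter=20):
    """Newton's method in update form: solve J(u_n) du = -R(u_n),
    then set u_{n+1} = u_n + du."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        R = residual(u)
        if np.linalg.norm(R) < tol:
            break
        du = np.linalg.solve(jacobian(u), -R)
        u = u + du
    return u

# Hypothetical 2x2 test system; one of its roots is (1, 2).
residual = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])
jacobian = lambda u: np.array([[2.0 * u[0], 1.0], [1.0, 2.0 * u[1]]])
u = newton(residual, jacobian, [1.0, 1.0])
assert np.allclose(u, [1.0, 2.0])
```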
Newton for a Simple Equation
Consider the convection-diffusion equation with nonlinear $k$, $\vec{\beta}$, and $f$:
$$-\nabla \cdot k\nabla u + \vec{\beta} \cdot \nabla u = f$$
The $i$th component of the residual vector is:
$$R_i(u) = \left(\nabla\psi_i,\, k\nabla u\right) + \left(\psi_i,\, \vec{\beta} \cdot \nabla u\right) - \left(\psi_i,\, f\right)$$
Using the previously-defined rules for $\frac{\partial u}{\partial u_j}$ and $\frac{\partial (\nabla u)}{\partial u_j}$, the $(i,j)$ entry of the Jacobian is then:
$$J_{ij}(u) = \left(\nabla\psi_i,\, \frac{\partial k}{\partial u_j}\nabla u\right) + \left(\nabla\psi_i,\, k\nabla\phi_j\right) + \left(\psi_i,\, \frac{\partial \vec{\beta}}{\partial u_j} \cdot \nabla u\right) + \left(\psi_i,\, \vec{\beta} \cdot \nabla\phi_j\right) - \left(\psi_i,\, \frac{\partial f}{\partial u_j}\right)$$
Note that even for this "simple" equation, the Jacobian entries are nontrivial: they depend on the partial derivatives of , , and , which may be difficult or time-consuming to compute analytically.
In a multiphysics setting with many coupled equations and complicated material properties, the Jacobian might be extremely difficult to determine.
On the previous slide, the term $\frac{\partial f}{\partial u_j}$ was used, where $f$ was a nonlinear forcing function.
The chain rule allows us to write this term as
$$\frac{\partial f}{\partial u_j} = \frac{\partial f}{\partial u}\frac{\partial u}{\partial u_j} = \frac{\partial f}{\partial u}\,\phi_j$$
If a functional form of $f$ is known, e.g. $f(u) = \sin(u)$, this formula implies that its Jacobian contribution is given by
$$-\left(\psi_i,\, \cos(u)\,\phi_j\right)$$
Jacobian Free Newton Krylov
The update equation
$$\mathbf{J}(\vec{u}_n)\,\delta\vec{u}_{n+1} = -\vec{R}(\vec{u}_n)$$
is a linear system solved during each Newton step.
For simplicity, we can write this linear system as $\mathbf{A}\vec{x} = \vec{b}$, where:
- $\mathbf{A} = \mathbf{J}(\vec{u}_n)$
- $\vec{x} = \delta\vec{u}_{n+1}$
- $\vec{b} = -\vec{R}(\vec{u}_n)$
We employ an iterative Krylov method (e.g. GMRES) to produce a sequence of iterates $\vec{x}_k \rightarrow \vec{x}$, $k = 1, 2, \ldots$
$\mathbf{A}$ and $\vec{b}$ remain fixed during the iterative process.
The "linear residual" at step $k$ is defined as
$$\vec{\rho}_k \equiv \mathbf{A}\vec{x}_k - \vec{b}$$
MOOSE prints the norm of this vector, $\|\vec{\rho}_k\|$, at each iteration if you set
print_linear_residuals = true in the Outputs block of your input file.
The "nonlinear residual" printed by MOOSE is $\|\vec{R}(\vec{u}_n)\|$.
By iterate $k$, the Krylov method has constructed the subspace
$$\mathcal{K}_k = \operatorname{span}\left\{\vec{b},\, \mathbf{A}\vec{b},\, \mathbf{A}^2\vec{b},\, \ldots,\, \mathbf{A}^{k-1}\vec{b}\right\}$$
Different Krylov methods produce the $\vec{x}_k$ iterates in different ways:
- Conjugate Gradients: $\vec{\rho}_k$ orthogonal to $\mathcal{K}_k$.
- GMRES/MINRES: $\vec{\rho}_k$ has minimum norm for $\vec{x}_k$ in $\mathcal{K}_k$.
- Biconjugate Gradients: $\vec{\rho}_k$ is orthogonal to $\mathcal{K}_k(\mathbf{A}^T)$
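The GMRES minimum-norm property can be illustrated by forming the Krylov basis explicitly and minimizing the linear residual over it with least squares. This is a conceptual sketch only: real GMRES uses the Arnoldi process and never forms the powers $\mathbf{A}^j\vec{b}$ directly, and the matrix below is an arbitrary well-conditioned example:

```python
import numpy as np

# Arbitrary well-conditioned test system (invented for illustration).
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + 6.0 * np.eye(n)
b = rng.standard_normal(n)

def min_residual_iterate(k):
    """The minimum-residual iterate x_k over K_k = span{b, Ab, ..., A^(k-1) b}."""
    K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])
    y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)   # min || (A K) y - b ||
    return K @ y

res = [np.linalg.norm(A @ min_residual_iterate(k) - b) for k in range(1, n + 1)]
# Residual norms never increase as the subspace grows, and K_n spans R^n,
# so the final residual is (numerically) zero.
assert all(res[i + 1] <= res[i] + 1e-9 for i in range(n - 1))
assert res[-1] < 1e-6
```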
$\mathbf{J}$ is never explicitly needed to construct the subspace; only the action of $\mathbf{J}$ on a vector is required.
This action can be approximated by:
$$\mathbf{J}\vec{v} \approx \frac{\vec{R}(\vec{u} + \epsilon\vec{v}) - \vec{R}(\vec{u})}{\epsilon}$$
This form has many advantages:
- No need to do analytic derivatives to form $\mathbf{J}$
- No time needed to compute $\mathbf{J}$ (just residual computations)
- No space needed to store $\mathbf{J}$
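A sketch of this matrix-free action, checked against an analytic Jacobian (the small test residual is invented for the comparison):

```python
import numpy as np

def jacobian_action(residual, u, v, eps=1e-7):
    """Matrix-free approximation of J(u) @ v via a finite difference
    of the residual: Jv ~= (R(u + eps*v) - R(u)) / eps."""
    return (residual(u + eps * v) - residual(u)) / eps

# Hypothetical test residual: R(u) = [u0^2 + u1, u0 * u1], with analytic
# Jacobian J = [[2*u0, 1], [u1, u0]].
residual = lambda u: np.array([u[0]**2 + u[1], u[0] * u[1]])
u = np.array([2.0, 3.0])
v = np.array([1.0, -1.0])

Jv_exact = np.array([[2 * u[0], 1.0], [u[1], u[0]]]) @ v
Jv_free = jacobian_action(residual, u, v)
assert np.allclose(Jv_free, Jv_exact, atol=1e-5)
```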
The Finite Element Method is a way of numerically approximating the solution of PDEs.
Just like polynomial fitting, FEM finds coefficients for basis functions.
The "solution" is the combination of the coefficients and the basis functions, and the solution can be sampled anywhere in the domain.
We compute integrals numerically using quadrature.
Newton's Method provides a mechanism for solving a system of nonlinear equations.
The Jacobian Free Newton Krylov (JFNK) method allows us to avoid explicitly forming the Jacobian matrix while still computing its "action".