
Discontinuous Galerkin (DG) Formulations

SciencePedia
Key Takeaways
  • DG methods intentionally allow solutions to be discontinuous across element boundaries, using numerical fluxes to enforce physical conservation laws and couple the elements.
  • The Symmetric Interior Penalty Galerkin (SIPG) method stabilizes the formulation by adding penalty terms that penalize jumps in the solution, ensuring a robust and accurate scheme.
  • The inherent flexibility of DG allows for the use of non-matching grids, variable polynomial degrees (p-adaptivity), and straightforward handling of complex geometries from CAD.
  • DG excels at modeling phenomena with sharp features or high-order physics, such as shock waves in fluids, beam bending in structures, and advanced material models.

Introduction

For decades, numerical simulation has been guided by a quest for seamlessness, building solutions from pieces that must join together perfectly. The standard Finite Element Method, for instance, relies on this enforced continuity, a powerful but often rigid constraint. The Discontinuous Galerkin (DG) method offers a revolutionary alternative by asking a simple question: what if we embrace discontinuity? This article delves into the DG formulation, a framework that trades perfect continuity for unparalleled flexibility, enabling us to model some of the most challenging problems in science and engineering with greater ease and accuracy.

This exploration is divided into two main parts. First, in "Principles and Mechanisms," we will uncover the core philosophy of DG, examining how it uses mathematical tools called numerical fluxes to allow disconnected elements to communicate and enforce physical laws. We will dissect a key DG variant, the Symmetric Interior Penalty Galerkin (SIPG) method, to understand how consistency, symmetry, and penalty terms work together to create a stable and powerful system. Following this, in "Applications and Interdisciplinary Connections," we will witness this theory in action. We will see how DG's freedom from continuity provides a decisive advantage in fields like fluid dynamics, solid mechanics, and electromagnetism, and how it revolutionizes practical engineering tasks by simplifying the use of complex geometries and adaptive meshes.

Principles and Mechanisms

To truly appreciate the Discontinuous Galerkin (DG) method, we must first unlearn a deeply ingrained instinct: the pursuit of perfect continuity. For decades, methods for solving physical equations on computers, like the standard Continuous Galerkin (CG) Finite Element Method, have been built on a foundation of seamlessness. They imagine a solution constructed from many small, polynomial pieces, but insist that these pieces join together flawlessly, without any gaps or jumps. This is like building an arch from perfectly cut stones, where each block must fit its neighbors to within a hair's breadth. It’s elegant, but it can be rigid and demanding.

The DG method begins with a wonderfully liberating question: What if we didn't have to be so perfect? What if we built our solution from pieces—elements—that are allowed to disagree at their common boundaries? Imagine building with rough-hewn bricks and using mortar to fill the gaps and bind them together. This freedom is the heart of the DG philosophy. It allows for incredible flexibility: we can use complex, non-matching grids, easily vary the complexity of our solution from one region to another (a feature known as p-adaptivity), and design algorithms that are exceptionally well-suited for modern parallel supercomputers.

But this freedom comes with a profound challenge. If the elements are disconnected, how do they communicate? If the temperature is $20^\circ\text{C}$ on the right edge of one element and $22^\circ\text{C}$ on the left edge of its neighbor, what is the "real" temperature at the interface? More importantly, how does a physical law, like the conservation of heat, hold across this gap? The answer lies in a beautiful piece of mathematical machinery that acts as the "mortar" for our discontinuous bricks: the numerical flux.

A Parliament of Elements

The journey to the DG formulation begins, as many do in computational mechanics, with the governing physical law (a partial differential equation, or PDE) and a clever application of integration by parts. In a traditional CG method, we multiply the PDE by a test function and integrate over the entire domain in one go. During integration by parts, the boundary terms that arise at the interfaces between elements magically cancel each other out precisely because the functions are continuous.

In DG, we take a different route. We perform the integration by parts on each element individually. When we do this, every element is an island, and each has a boundary integral left over from the process. When we try to assemble the global picture by summing up all the element-level equations, we find that the interface terms do not cancel. An element $K^{-}$ sees a boundary term involving its trace values, and its neighbor $K^{+}$ sees a different boundary term with its own traces. This is not a problem; it is the entire point! These leftover interface terms are the channels through which the islands can communicate.

The core of the DG method is to replace these ambiguous, two-valued interface terms with a single, well-defined prescription: the numerical flux. The numerical flux, denoted $\widehat{F}$, is a recipe that takes the two conflicting values from each side of an interface, $u^{-}$ and $u^{+}$, and dictates the single physical flux that passes between them. This flux must satisfy two fundamental properties:

  1. Consistency: If the solution happens to be continuous at an interface (i.e., $u^{-} = u^{+}$), the numerical flux must simplify to the true physical flux. This ensures that if our method gets the exact solution, it recognizes it as such.
  2. Conservativity: The flux leaving element $K^{-}$ must be equal to the flux entering element $K^{+}$. This enforces physical conservation laws across the artificial discontinuities, ensuring that no mass, momentum, or energy is lost in the gaps.
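Both properties are easy to see in code. Below is a minimal sketch, with illustrative names not drawn from any particular library, of the classic upwind flux for linear advection $f(u) = a\,u$:

```python
# A minimal sketch of a numerical flux for linear advection f(u) = a*u,
# using the classic upwind choice. Names are illustrative.
def physical_flux(u, a=1.0):
    return a * u

def upwind_flux(u_minus, u_plus, a=1.0):
    """Pick the upstream trace based on the sign of the wave speed a."""
    return physical_flux(u_minus if a >= 0 else u_plus, a)

# Consistency: with matching traces the numerical flux equals f(u).
assert upwind_flux(3.0, 3.0) == physical_flux(3.0)

# Conservativity: the flux leaving K^- equals the flux entering K^+,
# because both elements evaluate the SAME single-valued flux at the face.
left_sees = upwind_flux(2.0, 5.0)
right_sees = upwind_flux(2.0, 5.0)
assert left_sees == right_sees == 2.0
```

Reversing the wind (`a < 0`) makes the flux pick the other trace, which is exactly the physically motivated stabilization described later for hyperbolic problems.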

The Art of the Flux: An Anatomy of SIPG

There are many ways to design a numerical flux, each with its own character, leading to a whole family of DG methods. One of the most elegant and widely used for problems involving diffusion or elasticity (elliptic problems) is the ​​Symmetric Interior Penalty Galerkin (SIPG)​​ method. Let's dissect its formulation to see the principles at play.

For a diffusion problem like $-(k u')' = f$, the SIPG recipe for coupling elements involves adding three specific types of terms to the weak formulation at each interior interface. To describe them, we need two simple tools. For any quantity $w$ at an interface, the average is $\{w\} = \frac{1}{2}(w^{-} + w^{+})$ and the jump is $[w] = w^{-} - w^{+}$.

The SIPG bilinear form, which represents the energy of the system, is built from the standard element interior terms plus the following interface terms:

$$a_h(u_h, v_h) = \sum_{K} \int_K k u_h' v_h' \, dx - \sum_{F} \int_F \Big( \{k u_h'\}[v_h] + \{k v_h'\}[u_h] \Big) \, dS + \sum_{F} \int_F \frac{\sigma k}{h_F} [u_h][v_h] \, dS$$
  1. Consistency Term ($-\{k u_h'\}[v_h]$): This term ensures the method is consistent with the original PDE. It uses the average of the physical flux, $\{k u_h'\}$, and couples it with the jump in the test function, $[v_h]$.

  2. Symmetry Term ($-\{k v_h'\}[u_h]$): This term is the mirror image of the first. It is added to make the final system of equations symmetric. Symmetry is not just an aesthetic choice; it is deeply connected to the energy-minimizing nature of the underlying physics and is often crucial for proving that the numerical method converges optimally to the correct answer.

  3. Penalty Term ($+\frac{\sigma k}{h_F}[u_h][v_h]$): This is the enforcer. It states that any jump in the solution, $[u_h]$, comes at a cost. The term adds a positive quantity to the system's "energy" that is proportional to the square of the jump. By penalizing this disagreement, the method forces the discontinuous solution to stay "close" to being continuous. This term is absolutely essential for the stability of the method. Without it, the mathematical system is "floppy" and can produce wildly oscillating, meaningless results. The penalty parameter, $\sigma$, must be chosen to be sufficiently large to provide this stability, with its value carefully determined based on the physical properties ($k$), the polynomial degree ($p$), and the element size ($h_F$).

A simple calculation can make this tangible. For a bar modeled with two elements and a given discontinuous solution $u_h$, the total energy might be calculated as $a_h(u_h, u_h) = 4 - 12 + 36\sigma$. The '$4$' comes from the standard stiffness within the elements. The '$-12$' arises from the consistency and symmetry terms, reflecting the work done at the discontinuous interface. And the '$+36\sigma$' is the penalty energy, a stabilizing force that grows as we demand stricter agreement between the elements.
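One concrete configuration that reproduces numbers of this kind (an assumption chosen for illustration, since the text does not specify the solution): a bar on $[0,2]$ with $k=1$ and $h_F=1$, taking $u_h = 2x$ on the first element and $u_h = -4$ on the second, so the interface traces are $u^- = 2$ and $u^+ = -4$.

```python
# Two-element bar on [0, 2]: K1 = [0, 1], K2 = [1, 2], k = 1, h_F = 1.
# Hypothetical discontinuous solution: u_h = 2x on K1, u_h = -4 on K2.
k, h_F = 1.0, 1.0
du_minus, du_plus = 2.0, 0.0          # slopes on K1, K2
u_minus, u_plus = 2.0, -4.0           # traces at the interface x = 1

# Element-interior (broken) stiffness: sum_K integral of k (u_h')^2 dx
volume = k * du_minus**2 * 1.0 + k * du_plus**2 * 1.0

# Interface quantities: average {k u'} and jump [u] = u^- - u^+
avg_flux = 0.5 * (k * du_minus + k * du_plus)
jump = u_minus - u_plus

# Consistency + symmetry terms collapse to -2 {k u_h'} [u_h] when v_h = u_h
face = -2.0 * avg_flux * jump

def sipg_energy(sigma):
    """a_h(u_h, u_h) = volume + face + penalty for a given sigma."""
    penalty = sigma * k / h_F * jump**2
    return volume + face + penalty

print(volume, face)        # 4.0 -12.0
print(sipg_energy(1.0))    # 4 - 12 + 36 = 28.0
```

With $\sigma = 0$ the energy here is negative, a concrete glimpse of the "floppiness" the penalty term exists to cure.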

The Deeper Foundation

This new way of thinking requires a new mathematical language. Functions in a DG space are, by construction, not in the standard Sobolev space $H^1(\Omega)$, which is the natural home for solutions to second-order PDEs. A discontinuous function has a derivative that involves delta functions at the jumps, which are not square-integrable, meaning the $H^1(\Omega)$ norm is infinite.

So, how do we measure the error or energy of a discontinuous function? We invent a new yardstick. The broken $H^1$ norm measures the regularity of the function by simply summing the standard $H^1$ norms over each element individually.

$$\|v\|_{1,h}^2 := \sum_{K \in \mathcal{T}_h} \|v\|_{H^1(K)}^2$$

This captures the behavior within each element but ignores the jumps. To get the full picture, we augment it to form the ​​DG energy norm​​, which includes a weighted measure of all the jumps across the interfaces. It is this combined norm that the penalty term makes controllable, and it is with respect to this norm that the stability and convergence of DG methods are rigorously proven.
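One common way to write this augmented norm, sketched here with the same penalty weighting as SIPG (conventions for the jump weighting vary across the literature):

```latex
\|v\|_{\mathrm{DG}}^2 \;:=\; \sum_{K \in \mathcal{T}_h} \|v'\|_{L^2(K)}^2
\;+\; \sum_{F} \frac{\sigma}{h_F} \int_F [v]^2 \, dS
```

The second sum is exactly the quantity the penalty term injects into the energy, which is why coercivity and error estimates for SIPG are naturally stated in this norm.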

This framework also clarifies the role of the polynomial basis functions we use inside each element. Properties like the ​​partition of unity​​ (the sum of all basis functions is one) only need to hold locally on each element, which ensures that constant functions can be represented perfectly within that element. There is no need for a global partition of unity or a global mapping of nodes to degrees of freedom. The global coherence of the solution emerges dynamically through the weak coupling enforced by the numerical fluxes, not from the pre-emptive enforcement of continuity in the basis itself. This is a profound and powerful shift from the philosophy of continuous finite elements.

A Rich and Varied Family

The Symmetric Interior Penalty method is just one prominent member of the DG family. The principles of element-wise operations and numerical flux coupling are so fundamental that they have given rise to a menagerie of different schemes, each with its own advantages.

  • The ​​Local Discontinuous Galerkin (LDG)​​ method cleverly recasts the second-order problem as a system of first-order equations, a common strategy in physics. This changes how information is exchanged, leading to a mathematically equivalent but computationally different structure. While SIPG couples an element only to its immediate face-neighbors, LDG creates a wider "stencil" of communication, coupling an element to its neighbors' neighbors as well.

  • The ​​Bassi-Rebay 2 (BR2)​​ method provides another alternative to the penalty term, using mathematical constructs called "lifting operators" to achieve stability.

  • For problems governed by conservation laws (hyperbolic PDEs), like fluid dynamics or wave propagation, ​​upwind fluxes​​ are a natural choice. They use the direction of information flow (the "wind") to decide which of the two values at an interface should dominate, providing a physically motivated stabilization.

What unites this diverse family is the core philosophy: embrace discontinuity for its flexibility and power, and then systematically build back the necessary physical connections through the elegant and versatile language of numerical fluxes. It is a testament to the creativity of applied mathematics, showing that sometimes, the most powerful way to build a continuous whole is by starting with broken pieces.

Applications and Interdisciplinary Connections

Now that we have grappled with the principles and mechanisms of Discontinuous Galerkin (DG) methods, we might be tempted to view them as a clever, perhaps overly complicated, mathematical construction. But the real beauty of a physical or mathematical idea is not in its abstract elegance, but in what it allows us to do. What doors does it open? What new worlds can we explore? The true power of the DG formulation lies in its extraordinary versatility. By bravely embracing discontinuity—by liberating each small element of our problem from the strict rule of its neighbors—we gain a remarkable freedom to tackle some of the most challenging and fascinating problems across science and engineering.

Mastering the Wild Frontiers of Physics

Many of the most interesting phenomena in nature are not smooth and gentle. They are abrupt, sharp, and violent. Think of the thunderous crack of a sonic boom, the catastrophic fracture of a material, or the intricate dance of electromagnetic waves. These are the "wild frontiers" where simpler numerical methods often struggle, and where DG truly shines.

In fluid dynamics, the accurate capture of shock waves and contact discontinuities is a paramount challenge. While classical high-resolution schemes like MUSCL have been workhorses for decades, they are fundamentally built on a philosophy of averaging. They store an average value in each cell and then reconstruct a more detailed picture from these averages. A DG method takes a more ambitious approach. It doesn't just store an average; it directly evolves a rich, polynomial description of the solution inside each element. This means that instead of just one piece of information per cell, a P1 DG method, for instance, directly evolves two degrees of freedom that define a complete linear profile. The numerical flux at the interface then acts not just to update the average, but to inform the evolution of all the polynomial modes within the cell, giving it a much more detailed and local view of the physics. This richer local data structure is what gives DG its uncanny ability to resolve complex flow features with stunning precision.
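The idea can be made concrete with a minimal sketch (illustrative, not from a production solver): a P1 modal DG representation stores two coefficients per cell and reconstructs a full linear profile anywhere inside it, including the face traces that feed the numerical flux.

```python
# Minimal sketch: a P1 modal DG representation on a 1D mesh. Each cell
# stores TWO coefficients (mean and slope mode on a Legendre basis), so
# the solution is a complete linear profile per cell -- unlike a
# finite-volume method, which stores only the cell average.
def eval_p1_dg(coeffs, x, x_left, dx):
    """Evaluate u_h at physical point x in the cell [x_left, x_left + dx].

    coeffs = (c0, c1): modal coefficients on the Legendre basis
    P0(xi) = 1, P1(xi) = xi, with reference coordinate xi in [-1, 1]."""
    xi = 2.0 * (x - x_left) / dx - 1.0
    c0, c1 = coeffs
    return c0 + c1 * xi

# A cell on [0, 0.5] holding mean 2.0 and slope mode 0.5:
# u_h(xi) = 2 + 0.5*xi, discontinuous with whatever its neighbors hold.
print(eval_p1_dg((2.0, 0.5), 0.0, 0.0, 0.5))   # left face trace: 1.5
print(eval_p1_dg((2.0, 0.5), 0.5, 0.0, 0.5))   # right face trace: 2.5
print(eval_p1_dg((2.0, 0.5), 0.25, 0.0, 0.5))  # value at cell center: 2.0
```

The face traces computed this way are exactly the $u^{-}$ and $u^{+}$ values handed to the numerical flux; a time integrator then evolves both modal coefficients, not just the mean.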

This same flexibility pays enormous dividends when we move from fluids to solids. Consider the bending of a beam, a classic problem in structural engineering. The underlying physics is described by a fourth-order differential equation, $E I w^{(4)}(x) = q(x)$. This poses a serious problem for standard finite element methods that use simple, $C^0$-continuous functions (where the function itself is continuous, but its derivative is not). The bending energy depends on the second derivative, $w''$, which is not well-behaved for these simple elements, leading to instability and spurious oscillations. The traditional solution is to construct complex "$C^1$-conforming" elements where both the deflection and its slope are forced to be continuous. While they work, they are notoriously difficult to implement, especially in two or three dimensions.

The DG method offers a breathtakingly simple alternative. It happily uses simple, discontinuous polynomials and enforces the necessary physics weakly through interface terms. By adding carefully designed penalty terms that control the jumps in both the beam's deflection and its rotation across elements, the method robustly suppresses oscillations and provides accurate solutions. This approach beautifully sidesteps the need for complicated elements, trading strong continuity for the elegance of a consistent weak formulation. This principle extends to the cutting edge of materials science. In advanced models like strain gradient plasticity, the material's behavior depends not just on the strain, but on the gradient of the plastic strain, $\nabla \varepsilon^{p}$. This introduces higher-order, diffusion-like terms into the equations. DG methods, with their natural framework for handling derivatives and jumps at interfaces via numerical fluxes and penalty terms, provide a powerful and systematic way to discretize these complex, modern material laws.
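For the beam, the penalty idea sketched above takes a form like the following (schematic; exact scalings and coefficients vary by formulation), penalizing jumps in both the deflection $w_h$ and the rotation $w_h'$:

```latex
\sum_{F} \int_F \left( \frac{\sigma_1 \, EI}{h_F^{3}} \, [w_h][v_h]
\;+\; \frac{\sigma_2 \, EI}{h_F} \, [w_h'][v_h'] \right) dS
```

The stronger $h_F^{-3}$ weighting on the deflection jump reflects the higher order of the operator: a fourth-order problem demands a correspondingly stiffer penalty to remain stable.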

The reach of DG extends even further, into the realm of electromagnetism. The simulation of electromagnetic waves, governed by Maxwell's equations, has traditionally relied on specialized "edge elements" (like Nédélec elements) that are part of a deep mathematical structure known as the de Rham complex. These elements are designed to be "H(curl)-conforming," meaning they properly handle the continuity of the tangential component of the electric or magnetic fields. The DG method, particularly in its symmetric interior penalty (SIPG) form, can be seen as a generalization of this idea. It works with fully discontinuous fields but weakly enforces the continuity of the tangential components using penalty terms. In fact, there is a profound connection: as the penalty parameter in the DG formulation is taken to infinity, the DG solution actually converges to the solution of the conforming Nédélec method. This reveals DG not as a competitor, but as a more flexible parent theory from which classical methods can be recovered.

The Art of the Possible: Flexibility and Adaptivity

The freedom of discontinuity is not just for tackling difficult physics; it revolutionizes how we can build our simulations in the first place. Real-world engineering problems rarely come in neat, square boxes. They involve complex, curved geometries designed in Computer-Aided Design (CAD) software. Often, we want to mesh different parts of a domain with different strategies—a structured grid here, an unstructured one there. For traditional methods that demand a conforming "vertex-to-vertex" mesh, stitching these mismatched pieces together is a major headache.

DG methods, combined with techniques like mortar methods, thrive in this environment. Because they are already designed to handle jumps, they don't mind if the mesh on one side of an interface doesn't match the mesh on the other. The key is to establish a single, common geometric definition of the interface and then use a shared integration rule to compute the fluxes between the non-matching elements. This ensures conservation of quantities like mass and momentum, allowing for the seamless coupling of disparate mesh types and a direct path from complex CAD geometry to high-fidelity simulation.

This flexibility also enables powerful adaptivity schemes. In many problems, the interesting action is concentrated in small regions—around the tip of an airplane wing, at the front of a shockwave, or near a crack in a material. It is wasteful to use a high-resolution mesh everywhere. DG offers a beautiful solution through so-called "$p$-adaptivity." We can use high-degree polynomials to achieve very high accuracy precisely where it's needed, while using computationally cheaper, low-degree polynomials elsewhere. A key theoretical result is that as long as one uses a properly dissipative numerical flux, mixing polynomial degrees in adjacent elements does not inherently destroy the stability of the scheme. The main effect is that the most restrictive time-step constraint will come from the regions with the highest polynomial degree, a small price to pay for such targeted power.
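A toy 1D bookkeeping example (hypothetical degrees, chosen for illustration) shows why $p$-adaptivity is cheap to administer in DG: each element's degrees of freedom are purely local, $p + 1$ per element in 1D, with no shared nodes to reconcile between neighbors.

```python
# Hypothetical p-adaptive layout in 1D: each element carries its own
# polynomial degree, and the local DOF count is simply p + 1. No global
# node-sharing bookkeeping is needed because the basis is discontinuous.
degrees = [1, 1, 4, 4, 4, 1, 1]   # high degree only near the sharp feature
dofs_per_element = [p + 1 for p in degrees]
total_dofs = sum(dofs_per_element)
print(total_dofs)   # 23

# A uniform mesh at the highest degree would cost noticeably more:
uniform_cost = len(degrees) * (max(degrees) + 1)
print(uniform_cost)  # 35
```

In a continuous method the same mix of degrees would force awkward constraint equations at every degree transition; here, the numerical flux absorbs the mismatch.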

The physical world is also rarely static. Wings flap, hearts beat, and bridges sway. Simulating these phenomena requires a mesh that can move and deform with the object. DG methods are exceptionally well-suited for these Arbitrary Lagrangian-Eulerian (ALE) formulations. By transforming the equations of motion onto a fixed reference element, the motion of the physical mesh is elegantly accounted for by modifying the flux to include the grid velocity ($f^{\ast}(u) = f(u) - w u$) and by ensuring that the geometry and mesh motion are consistent through a "Geometric Conservation Law." This allows DG to accurately and conservatively simulate complex problems on time-dependent domains, from fluid-structure interaction to biological growth.
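As a minimal illustration of the modified flux (linear advection, illustrative function names):

```python
# ALE-modified flux sketch: on a moving mesh, the convective flux at a
# face is evaluated relative to the grid velocity w: f*(u) = f(u) - w*u.
def flux(u, a=1.0):
    return a * u                      # linear advection f(u) = a*u

def ale_flux(u, w, a=1.0):
    return flux(u, a) - w * u         # subtract the grid-motion part

# A face moving exactly with the wave (w = a) sees zero relative flux:
print(ale_flux(3.0, 1.0))   # 0.0
# A stationary face (w = 0) recovers the ordinary physical flux:
print(ale_flux(3.0, 0.0))   # 3.0
```

The Geometric Conservation Law then supplies the matching discrete bookkeeping, guaranteeing that a uniform state stays exactly uniform no matter how the mesh moves.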

The Engineer's Dilemma: Taming the Beast

With all this power and freedom, one might ask: what's the catch? Like any powerful tool, DG comes with its own set of challenges. But what is truly remarkable is how the DG framework itself provides the tools to overcome them.

One of the most notorious challenges in computational solid mechanics is "volumetric locking." This occurs when simulating nearly incompressible materials, like rubber or certain biological tissues, where the Poisson's ratio approaches $0.5$. For standard displacement-based finite elements, the enormous stiffness associated with volume change can overwhelm the shear response, leading to an artificially rigid behavior and completely wrong results. One might hope that DG's weak continuity would solve this, but the problem is more fundamental. The solution is to move to a "mixed" formulation, introducing pressure as an independent variable to handle the incompressibility constraint. The discontinuous nature of DG spaces is a huge advantage here, as it allows for the use of simple, discontinuous pressure fields that can be statically condensed or locally solved, providing a robust and locking-free solution.

Perhaps the most significant practical challenge is computational cost. The very freedom that gives DG its power—the discontinuous basis functions—means that a DG simulation typically has many more unknowns than a continuous Galerkin simulation on the same mesh. This leads to larger linear systems that are more expensive to solve. This is the "price of freedom." However, the story doesn't end there. Ingenious developments within the DG family have largely tamed this beast.

One of the most important innovations is the Hybridizable Discontinuous Galerkin (HDG) method. The core idea of HDG is to realize that most of the unknowns—those deep inside each element—don't need to be part of the global conversation. They can be "statically condensed," or solved for purely in terms of unknowns that live only on the faces of the elements. This leaves a much smaller global system to be solved, involving only the trace of the solution on the mesh skeleton. This is a tremendous advantage. For a typical second-order problem, the HDG system is not only smaller but also significantly better-conditioned than its SIPG counterpart, meaning that iterative solvers converge much more quickly. For instance, comparing SIPG and HDG for a Poisson problem with quadratic elements ($p=2$), the HDG method results in a global system with about $33\%$ fewer unknowns and a matrix that is sparser and whose condition number scales much more favorably with mesh size ($O(h^{-1})$ versus $O(h^{-2})$). This translates directly into faster and more efficient simulations, making DG a highly competitive tool for large-scale applications.
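The condensation step itself is ordinary linear algebra. A small sketch (a random symmetric positive-definite matrix standing in for an actual discretization; block names are illustrative) shows how eliminating interior unknowns leaves only a face-sized global solve:

```python
import numpy as np

# Static condensation, the mechanism behind HDG: eliminate the
# element-interior unknowns u_i in favor of the face (trace) unknowns
# u_f. The block system [A_ii A_if; A_fi A_ff][u_i; u_f] = [b_i; b_f]
# reduces to the Schur complement S u_f = g, with
#   S = A_ff - A_fi A_ii^{-1} A_if,   g = b_f - A_fi A_ii^{-1} b_i.
rng = np.random.default_rng(0)
n_i, n_f = 6, 2                       # many interior DOFs, few face DOFs
A = rng.standard_normal((n_i + n_f, n_i + n_f))
A = A @ A.T + (n_i + n_f) * np.eye(n_i + n_f)   # make it SPD
b = rng.standard_normal(n_i + n_f)

A_ii, A_if = A[:n_i, :n_i], A[:n_i, n_i:]
A_fi, A_ff = A[n_i:, :n_i], A[n_i:, n_i:]
b_i, b_f = b[:n_i], b[n_i:]

# Condense: only the (tiny) face system is solved globally.
S = A_ff - A_fi @ np.linalg.solve(A_ii, A_if)
g = b_f - A_fi @ np.linalg.solve(A_ii, b_i)
u_f = np.linalg.solve(S, g)

# Interior unknowns are recovered element-locally afterwards.
u_i = np.linalg.solve(A_ii, b_i - A_if @ u_f)

# Sanity check against solving the full system at once.
u_full = np.linalg.solve(A, b)
assert np.allclose(np.concatenate([u_i, u_f]), u_full)
```

In a real HDG solver the interior elimination happens element by element and embarrassingly in parallel, which is precisely why the global problem shrinks to the mesh skeleton.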

In the end, we see that the Discontinuous Galerkin method is not a single, monolithic entity. It is a rich, vibrant, and expanding framework of ideas. Its central principle of localizing physics to individual elements and communicating through fluxes provides a unified foundation for solving an astonishing range of problems—from the practicalities of handling complex industrial geometries to the frontiers of modeling new materials, all while continuously evolving to become more powerful and efficient. It is a beautiful testament to the power of a good idea.