|Title||Parallel Adaptive Sparse Grids|
|Key words||Sparse Grids, Hyperbolic Crosspoints, Tensor Product, Approximation, Finite Differences, Adaptive Grid Refinement, Dynamic Load-Balancing, Distributed Memory, Message Passing, Space-Filling Curves, Key Based Addressing, Hash Storage|
Sparse Grids: We consider the solution of initial-boundary value problems of partial differential equations on sparse grid discretizations, both parabolic and hyperbolic problems. Sparse grids can be constructed from tensor products of one-dimensional spaces, where specific parts of the space are cut off. Based on the hierarchical basis functions (or other pre-wavelet bases), products of basis functions whose support is smaller than a tolerance tol are cut off. This is depicted on the right hand side.
The sparse grid construction is related to `hyperbolic crosspoints' and other approximation schemes. We are now interested in the discretization of PDEs. Besides the Galerkin approach (FEM) and the `combination technique' based on multivariate extrapolation, we propose a finite difference type of discretization. It is based on dimensional splitting and the (nodal to) hierarchical basis transform H. Each differential operator Dj along a direction j is discretized with finite differences and combined with a hierarchical basis transform and back-transform along all directions except the direction of interest j. The global differential operator reads:
A typical stiffness matrix looks like this:
It is not sparse, but the matrix-vector multiply can nevertheless be implemented in linear time O(n).
Adaptive Sparse Grids: In the presence of singularities of the solution, the convergence of standard discretizations is not as rapid as for smooth, regular solutions. One way to overcome this problem is adaptivity: the grid is refined during the computation where indicated by an error indicator:
Load Balancing: The parallelization of adaptively refined grids in one dimension is simple. Consider a refined grid: the domain is partitioned into intervals of equal workload, so that each processor holds the same number of nodes:
The partition can be stored as (p-1) separators (numbers) for p processors. Let us now look at the higher dimensional case:
We can map the domain to the unit interval by a space-filling curve. This mapping induces an order on the nodes and elements, and we can proceed as in the one-dimensional case, using (p-1) separators,
which results in the following domain decomposition.
Note that each processor is assigned the same workload (number of nodes), but the subdomain boundaries are sub-optimal. Solving both the partition problem and minimizing the boundaries, which are a measure of the communication cost during the parallel computation, is an NP-hard problem.
The main advantages of this partition method are:
Key based implementation: We need to store a hierarchically nested sequence of adapted grids, which includes the nodes (= degrees of freedom), their geometric relationships on the grid, and their relations to nodes on different grid levels. In contrast to standard data structures based on pointers and trees, we choose a key based addressing scheme and hash storage. A hash table T contains all nodes (on a processor).
Each node can be accessed via its coordinates, which are encoded uniquely in its position on the space-filling curve. This unique key is mapped to the hash table via a surjective hash function. Collisions are resolved by chaining in the C++ STL implementation in use.
This results in expected constant time access O(1) and a substantial saving in computer memory. An even greater improvement comes from the fact that we do not have to administer pointers, which cause trouble on distributed memory parallel computers, where one would otherwise have to create and maintain copies of many entities.
Parallelization: The components of our adaptive multigrid code have to be parallelized. A single data decomposition is used on the distributed memory computer. However, each component requires a different treatment.
For the iterative solution of the equation system we need matrix multiplies Ax. This can be done in several steps. The parallelization is based on parallel versions of H, Dj and H^-1. The one-dimensional finite difference stencil Dj and the transformation from nodal to hierarchical basis H can both be implemented by a single communication step to fill the necessary ghost nodes, followed by a single computation step:
The load balancing based on space-filling curves can be implemented as a one stage parallel sort (radix or bucket sort) based on the previous partition. This results in a well balanced parallel sorting itself and a low number and volume of data transfers.
Results: Here are some three dimensional results:
regular sparse grids, solution of a convection-diffusion equation, mapped to 8 processors
computing times for the example above, on Parnass2
and on a Cray T3E-600
adaptively refined sparse grids, solution of a convection-diffusion equation, adaptive resolution of boundary layers
computing times for the example above, on Parnass2
|In cooperation with||SFB 256 "Nonlinear Partial Differential Equations"|