Title: The Boundary Element Method for Shape Optimization in 3D
Advisor: doc. RNDr. Jiří Bouchala, Ph.D.
Affiliation: Department of Applied Mathematics, Faculty of Electrical Engineering and Computer Science, VŠB - Technical University of Ostrava
Defense date: April 27, 2017

Ph.D. thesis summary
Review (Univ.-Prof. Dr. Olaf Steinbach)
Review (doc. RNDr. Tomáš Vejchodský, Ph.D.)
Review (doc. Ing. Dalibor Lukáš, Ph.D.)
List of publications
The rapid development of the boundary element method (BEM) during the last decades has allowed it to be considered in the shape optimization context, where a given state problem has to be solved many times. We present a BEM-based shape optimization concept, which can also be used for the solution of inverse problems, including the presented Bernoulli free-surface problem. To separate the computational and optimization meshes we use a hierarchy of control meshes constructed by means of subdivision surfaces known from computer graphics. In the thesis we also address the important topic of efficient implementation of BEM on modern hardware architectures and accelerators. The theory is supported by a series of numerical experiments validating the proposed approach.
Problems arising in the design and manufacture of components in many industrial areas can often be stated as minimization problems over a class of admissible shapes. Unlike optimal control problems, where the state satisfies a partial differential equation on a fixed domain, the state variable in shape optimization problems is controlled by the shape of the computational domain. Thus, the set of admissible controls does not naturally carry the structure of a vector space. The pioneering attempts to mathematically describe such problems are due to Hadamard, who optimizes the shape of a clamped plate and describes the admissible domains by normal perturbations of a smooth reference boundary. This work gave birth to shape calculus, a research field further pursued in the now classical monographs [34,114,125]. In this thesis we use the tools provided by shape calculus to solve free boundary problems of the Bernoulli type, in combination with the boundary element method (BEM) for the numerical analysis of the state and adjoint problems and with subdivision surfaces representing the admissible domains.
There are several possible approaches to solving Bernoulli-type problems. First of all, the existence and regularity of solutions have been studied in, e.g., [2,5,47,49,50]. The trial methods of [18,59,60,130,131] relax the boundary value problem, overdetermined by both Dirichlet and Neumann boundary conditions, by iteratively solving the problem with one condition only and subsequently moving the free boundary so that the second condition is satisfied exactly. The approach adopted in the following text is based on the reformulation of the problem as a shape optimization problem with one of the boundary conditions enforced by minimizing a least-squares-type tracking functional. This strategy has been followed together with the fictitious domain method for solving the state and adjoint boundary value problems in [45,62,63,64]; wavelet-based BEM has been used in [39,41,42,43,57].
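To fix ideas, a common least-squares reformulation of the overdetermined Bernoulli problem reads as follows (generic notation for illustration, not necessarily that of the thesis): with a fixed boundary Γ₀, a free boundary Γf, and a given constant Q,

```latex
\min_{\Omega}\; J(\Omega) := \frac{1}{2} \int_{\Gamma_{\mathrm{f}}}
  \Bigl( \frac{\partial u}{\partial n} - Q \Bigr)^{2} \mathrm{d}s
\quad \text{subject to} \quad
\begin{cases}
  -\Delta u = 0 & \text{in } \Omega, \\
  \phantom{-\Delta} u = 1 & \text{on } \Gamma_{0}, \\
  \phantom{-\Delta} u = 0 & \text{on } \Gamma_{\mathrm{f}},
\end{cases}
```

so that the Dirichlet conditions are imposed in the state problem, while the Neumann condition on the free boundary is enforced only through the minimization of the tracking functional.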
To find a minimizer of the studied cost functional one can rely on gradient-free methods. Although such methods aim at the global minimizer, they usually require a large number of cost evaluations, each entailing a possibly costly numerical analysis. Gradient-based methods, on the other hand, search only for a local minimizer but provide an improved speed of convergence. The gradient information can be provided either by the first-discretize-then-optimize approach, where the state problem is first discretized and the sensitivity analysis is performed on the discrete level, or by the first-optimize-then-discretize approach relying on the speed method or the perturbation of identity, as described in the classical monographs [34,65,125] and followed for the solution of the Bernoulli problem in this thesis and in [61,77]. For completeness, let us also mention automatic differentiation techniques , which consider the whole computer program solving the state problem as a series of elementary arithmetic operations and apply the chain rule to differentiate the program with respect to the design variables.
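The chain-rule mechanism behind forward-mode automatic differentiation can be sketched with dual numbers: each value carries its derivative, and every elementary operation propagates both. The `Dual` type and names below are purely illustrative, not part of any AD tool cited above.

```cpp
#include <cassert>
#include <cmath>

// A value together with its derivative w.r.t. one chosen design variable.
struct Dual {
  double v;  // value
  double d;  // derivative
};

// Overloaded operations apply the chain rule per elementary step.
Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }
Dual sin(Dual a) { return {std::sin(a.v), std::cos(a.v) * a.d}; }

// Differentiating the program computing f(x) = x*x + sin(x) yields
// f'(x) = 2x + cos(x) automatically, operation by operation.
Dual f(Dual x) { return x * x + sin(x); }
```

Seeding the derivative of the input with 1 (`Dual x{x0, 1.0}`) makes `f(x).d` the exact derivative at `x0`; a whole solver composed of such operations is differentiated the same way.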
The boundary element method employed in the thesis provides an efficient tool for solving the underlying boundary value problems whenever they can be reformulated as boundary integral equations. While this is mainly the case for problems with (piecewise) constant material coefficients, the method also allows for a natural treatment of exterior problems. The boundary element approach seems natural for shape optimization, since the shape of a volume is fully determined by its boundary. In the context of volume finite element techniques, boundary perturbations have to be transferred to the volume mesh. While this can be easy for a number of academic examples, where the unknown part of the boundary can be represented, e.g., as a graph, for general surfaces it is much more difficult. One possible approach is to solve an auxiliary linear elasticity problem with boundary conditions given by the current perturbation; however, strong deformations of the boundary still require remeshing of the volume after every couple of iterations. Another option is to embed diffeomorphic shape perturbations in the variational formulation of the state problem as in [67,110] and keep the mesh constant during the whole optimization process.
The novelty of this thesis lies in the combination of BEM for the solution of the state and adjoint boundary value problems with the subdivision surfaces used for the discretization of boundary perturbations, as already presented in . The idea of subdivision as a tool for constructing smooth curves as a limit of a sequence of control polygons is well known in computer graphics and dates back to the corner-cutting algorithm of Chaikin . In this work we concentrate on subdivision surfaces based on triangular meshes defining quartic splines in the regular case. This subdivision technique was originally generalized to triangular meshes of arbitrary topology by Loop . In the proposed shape optimization algorithm the coarse control polygon defines the set of design parameters, while the mesh obtained after n subdivision steps serves for the boundary element analysis. Since the subdivision basis functions define smooth perturbations, no mesh smoothing steps are necessary. The nature of the subdivision process also allows us to add design parameters once the optimum in the current design space is found and thus serves as a globalization strategy. Further works on the topic of Loop’s subdivision include, e.g., [123,126,141,142]. The combination of finite element approximation and subdivision has been discussed in , and the isogeometric approach with the subdivision functions used for approximating both the geometry and the state variable has been suggested in [22,23]. The regularity of subdivision basis functions and the necessary approximation properties are further discussed in . The isogeometric approach has also been used for shape optimization in [9,11].
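The refinement rules of Loop's scheme can be sketched per coordinate as follows (illustrative helper names, not from the thesis; Loop's original vertex weight β(n) is used, which gives β = 1/16 in the regular valence-6 case):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

const double kPi = 3.141592653589793;

// Loop's original vertex weight for a vertex of valence n.
double beta(int n) {
  double c = 3.0 / 8.0 + 0.25 * std::cos(2.0 * kPi / n);
  return (5.0 / 8.0 - c * c) / n;
}

// Smoothed position of an old vertex v, given its one-ring neighbours.
double vertexPoint(double v, const std::vector<double>& ring) {
  int n = static_cast<int>(ring.size());
  double b = beta(n);
  double sum = 0.0;
  for (double w : ring) sum += w;
  return (1.0 - n * b) * v + b * sum;
}

// New point inserted on an interior edge (v0,v1); v2, v3 are the
// vertices opposite the edge in its two adjacent triangles.
double edgePoint(double v0, double v1, double v2, double v3) {
  return 3.0 / 8.0 * (v0 + v1) + 1.0 / 8.0 * (v2 + v3);
}
```

Applying these two stencils to all vertices and edges of a triangular control mesh yields the next mesh in the hierarchy; in the regular case the limit surface is the quartic box spline mentioned above.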
The second part of the thesis is devoted to the efficient implementation of BEM not only on modern PCs but also in the environment of High Performance Computing (HPC) centres. The drawback of classical BEM is its quadratic complexity in terms of both computational time and memory requirements, restricting its applicability to moderate problem sizes. To overcome this issue, several fast BEM techniques can be employed to lower the complexity to almost linear. These include the fast multipole method [52,107,120], based on the approximation of the kernel by a truncated series, and the adaptive cross approximation (ACA) [12,119], building low-rank blocks from a purely algebraic point of view. Although these sparsification methods are indispensable for large-scale engineering problems, it remains crucial to efficiently assemble the so-called non-admissible full blocks. Moreover, in the case of ACA, the low-rank approximation requires the evaluation of several rows and columns of every admissible block, which relies on the standard full assembly. In the thesis we thus concentrate on the acceleration of the standard BEM assembly in shared memory.
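The algebraic idea behind ACA can be sketched as follows. This is an illustrative full-pivoting variant operating on a stored matrix; production codes such as those in [12,119] pivot partially and evaluate kernel entries on demand, which is exactly why the rows and columns mentioned above must be assembled efficiently.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Low-rank factors: A is approximated by the sum of rank-one crosses
// U[k] * V[k]^T.
struct LowRank {
  std::vector<std::vector<double>> U, V;
};

// Adaptive cross approximation with full pivoting (illustrative sketch).
LowRank aca(std::vector<std::vector<double>> R, double eps, int maxRank) {
  const int m = static_cast<int>(R.size());
  const int n = static_cast<int>(R[0].size());
  LowRank lr;
  for (int k = 0; k < maxRank; ++k) {
    // Pivot: residual entry of maximal modulus.
    int pi = 0, pj = 0;
    for (int i = 0; i < m; ++i)
      for (int j = 0; j < n; ++j)
        if (std::abs(R[i][j]) > std::abs(R[pi][pj])) { pi = i; pj = j; }
    if (std::abs(R[pi][pj]) < eps) break;  // residual small enough
    // Cross: pivot column and pivot row (scaled by the pivot entry).
    std::vector<double> u(m), v(n);
    for (int i = 0; i < m; ++i) u[i] = R[i][pj];
    for (int j = 0; j < n; ++j) v[j] = R[pi][j] / R[pi][pj];
    // Subtract the rank-one update from the residual.
    for (int i = 0; i < m; ++i)
      for (int j = 0; j < n; ++j) R[i][j] -= u[i] * v[j];
    lr.U.push_back(u);
    lr.V.push_back(v);
  }
  return lr;
}
```

For an admissible (smooth-kernel) block the residual decays rapidly, so only a few crosses, i.e., a few rows and columns of the block, are ever needed.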
The distribution of the workload among available CPU cores by, e.g., OpenMP pragmas has become more or less standard in similar scientific codes. Although such parallelization techniques lead to a significant speedup, code overlooking further acceleration via the Single Instruction Multiple Data (SIMD) concept can hardly exploit the full potential of modern CPUs. Intel's first effort to employ vector instructions on multiple operands dates back to 1997 with the introduction of the MMX instruction set, reusing the existing floating-point registers for integer operations only. The SSE-SSE4 instruction sets introduced in 1999-2006 added eight 128-bit registers designed to process in parallel four single-precision or two double-precision floating-point operands. The AVX-AVX2 sets supported by Intel processors since 2011-2013 further increase the size of the vector registers to 256 bits and incrementally add more instructions, including the three-operand Fused Multiply-Add (FMA) combining multiplication and addition in a single step. The need for code vectorization is even more apparent in connection with the Many Integrated Core (MIC) architecture represented by the Intel Xeon Phi (co)processors. Currently, the Xeon Phi coprocessors support the Initial Many Core Instructions (IMCI) set doubling the size of the AVX2 registers. Although programming for an accelerator may seem a daunting task, the MIC architecture allows reuse of the majority of already existing CPU code, in contrast with GP-GPU acceleration on graphics processing units. Moreover, the current AVX-512 instruction set providing 512-bit registers is supported (at least partially) both by the Skylake CPU architecture and by the Knights Landing (KNL) MIC architecture. Contrary to the Knights Corner (KNC) MIC coprocessors, the KNL version is also available as a standalone host processor, possibly showing the future of high performance computing.
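A minimal sketch of the kind of loop that profits from SIMD and FMA, vectorized with an OpenMP 4.0 pragma (the function is an illustrative example, not code from BEM4I): the loop body is a single fused multiply-add per element, which the compiler can map to FMA instructions over full vector lanes (SSE/AVX/AVX-512).

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// y[i] += a * x[i] for all i; the omp simd pragma asks the compiler to
// execute consecutive iterations in the lanes of one vector register,
// and std::fma expresses the fused multiply-add as one operation.
void fma_axpy(double a, const std::vector<double>& x,
              std::vector<double>& y) {
  const std::size_t n = x.size();
#pragma omp simd
  for (std::size_t i = 0; i < n; ++i)
    y[i] = std::fma(a, x[i], y[i]);
}
```

With 256-bit AVX registers, four such double-precision updates proceed per instruction; with 512-bit IMCI or AVX-512 registers, eight. The same source compiles for both the host CPU and the MIC coprocessor.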
The ideas of vectorization in connection with BEM have already been discussed in several publications. Automatic vectorization based on the capabilities of modern compilers is discussed in , the direct use of vector intrinsic functions for the BEM matrix assembly (however, without a detailed look at the integration of singular kernels) is presented in , and the vectorization of the evaluation of the representation formula is shown in . In the thesis we present two different strategies for code vectorization, namely the use of OpenMP 4.0 pragmas [101,109] and the Vc library wrapping the vector intrinsic functions [84,98]. Moreover, the computationally most intensive parts allow offload to MIC coprocessors, further speeding up the assembly . In contrast to the previously mentioned references we also discuss the treatment of singularities in the underlying surface integrals both by semi-analytic and by fully numerical quadrature schemes . The presented approaches build the core of the BEM4I library developed at the IT4Innovations Czech National Supercomputing Center.
The structure of the thesis is as follows. Section 2 first reviews basic properties of the function spaces appearing throughout the thesis and presents the shape optimization problem under consideration, together with a solvability result under reasonable regularity assumptions and the derivation of the shape derivative. The final part of the section summarizes the boundary integral approach to solving boundary value problems. Section 3 is devoted to the representation of smooth surfaces generated by Loop’s subdivision of quartic box splines. These surfaces are used to represent the unknown shape and to discretize the admissible boundary perturbations. In Section 4 we first review the discretization of boundary integral equations, i.e., the boundary element method. We present two approaches to the assembly of the system matrices, namely the semi-analytic and the fully numerical quadrature, and describe their efficient realization on modern hardware, including acceleration by the Intel Xeon Phi coprocessors. With all the necessary tools available, we present the discrete version of the optimization problem, validated by solving two academic examples and a real-life experiment originating in the ABB Corporate Research Center Switzerland.
The aim of the thesis was to construct a multiresolution shape optimization algorithm based on two key ingredients. For the description of moving boundaries we used subdivision surfaces, originally introduced for the purposes of computer graphics. Their important feature is the ability to modify a single surface on a hierarchy of control meshes and thus control the locality of deformations. The increasingly finer meshes serving for optimization allow us to describe local details of the sought optimal surface and serve as a tool for globalization of the gradient-based algorithm. The subdivision surfaces also prevent non-physical oscillations in the geometry, mesh intersection, or element inversion. Such pathological distortions are avoided by applying perturbation fields proportional to the current optimization mesh size – large shape changes are expected at the initial stage of the optimization process, while more localized changes follow later on. As a result, no mesh smoothing or regeneration procedures are necessary throughout the optimization.
The second ingredient is the application of the boundary element method to the underlying state and adjoint boundary value problems. The method seems natural for shape optimization problems, since the shape of a domain is fully described by its boundary. Furthermore, it does not require a discretization of the volume, and the mapping from the boundary perturbation to the deformation of the volume mesh is avoided. For non-trivial problems, such a mapping can be performed by solving an auxiliary problem of linear elasticity with boundary conditions defined by the traction on the boundary, bringing additional complexity to the optimization problem. In the thesis we presented basic strategies for an efficient implementation of the boundary element method. The presented code builds the core of the BEM4I library, which in addition to the presented material implements advanced methods for the acceleration of the boundary element method, including the adaptive cross approximation for matrix sparsification, with similar scaling results with respect to the number of threads and the size of the vector registers.
Under reasonable regularity assumptions we presented results regarding the existence of an optimal domain in the continuous setting. Following [65,114], the next step would be to prove the existence of a minimizer in the discrete setting and the convergence of the discrete optimal solutions to the continuous one. In the context of boundary element discretization this includes the analysis of the numerical error arising from the approximation of the computational boundary by a polygonal mesh. Such topics have been discussed in [103,104], where the author uses interpolating triangular meshes and provides conditions under which the standard boundary element convergence results hold. Differently from this approach, the control meshes of subdivision surfaces do not interpolate the limit surface, and thus the analysis would have to be adapted. Although interpolating subdivision schemes exist, they usually produce surfaces of lower regularity. On the other hand, instead of using a fine control mesh of the Loop subdivision scheme for the boundary element analysis, one could transfer the nodes of such a mesh to the limit surface by using the limit evaluation masks, thus obtaining an interpolation of the limit geometry. A rather simplified approach is presented in, e.g., [37,41], where the authors only consider the discretization of the shape perturbations and assume that the boundary element analysis is performed exactly. This simplification, together with a further restriction to star-shaped domains, allows one to prove uniqueness of the solution by means of the Ritz method, and a priori convergence rates are given.
The current multiresolution algorithm requires a coarse optimization mesh as the initial guess of the to-be-optimized boundary. This is restrictive in cases where detailed features of the free surface are already present in the design (e.g., optimization of an engine, aerodynamics of a car, etc.). The required coarse mesh can be constructed by fitting a control mesh with subdivision connectivity to the original fine input [35,90]. The details lost by the simplification can be stored in vectors added to the mesh after every subdivision step, and the algorithm proposed here could be extended to such cases .
Let us also mention that the subdivision-based strategy may resemble the multigrid approach from . In both cases, the free surface is represented on a hierarchy of meshes – in addition to the proposed subdivision approach, multigrid also accelerates the optimization process by performing the numerical analysis at different levels. The subdivision (prolongation) and coarsening (restriction) operators from Section 3, however, may prove useful in the multigrid context as well.
In the thesis we only used subdivision surfaces for the representation of geometries. It is, however, possible to follow the steps of isogeometric analysis and use the smooth ansatz functions defined by the underlying B-splines or subdivision-based functions for the boundary element analysis. Such techniques have been successfully employed, e.g., in the analysis of solids and shells by the finite [9,11,22,23,27] and boundary element methods .