Computational inverse problems: from multiscale modelling to uncertainty quantification

Organizers:
  • Luz Angélica Caudillo-Mata (University of British Columbia)
  • Julianne Chung (Virginia Tech)
Schedule:
    • Luz Angelica Caudillo Mata
      University of British Columbia
    Multiscale and Upscaling Methods to Simulate Geophysical Electromagnetic Responses
    Accurate and efficient simulation of electromagnetic (EM) responses in large-scale settings with highly heterogeneous media and features varying at multiple spatial scales is crucial to the exploration and imaging of geological formations in a wide range of geophysical applications. A major practical challenge in performing this type of simulation is its excessive computational cost, which comes from solving the enormous system of equations that results from discretizing such large-scale settings, which often require very detailed meshes. Multiscale and upscaling finite-volume (FV) and finite-element (FE) techniques have been studied extensively for modeling flow in heterogeneous porous media, where they have been used successfully to reduce the size of the linear system drastically while producing solutions of accuracy comparable to those obtained with FE or FV discretization schemes on a fine mesh. Recognizing the success these techniques have had in fluid-flow applications, we extended them to EM modeling. Specifically, we developed an upscaling framework and a multiscale FV method with oversampling for the quasi-static Maxwell's equations in the frequency domain. In this talk, we discuss the two proposed methods and compare the upscaling framework with the multiscale FV method, with and without oversampling, on a synthetic setting of a large-loop EM survey over a mineral deposit.
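    The upscaling idea can be seen in miniature in one dimension, where the exact coarse conductivity for current flowing across layers is the harmonic average of the fine-cell values. The sketch below is illustrative only and is not the authors' framework; the function name and the averaging rule are assumptions of this toy setting.

```python
import numpy as np

# Toy 1-D upscaling: collapse blocks of fine cells into coarse cells by
# harmonic averaging, the exact rule for layered (series) current flow.
def upscale_harmonic(sigma_fine, factor):
    """Merge each run of `factor` fine-cell conductivities into one coarse value."""
    blocks = np.asarray(sigma_fine, dtype=float).reshape(-1, factor)
    return factor / np.sum(1.0 / blocks, axis=1)

sigma = np.array([1.0, 100.0, 1.0, 100.0])  # strongly heterogeneous layers
coarse = upscale_harmonic(sigma, 2)         # each coarse value is 2/1.01 ~ 1.98,
                                            # far below the arithmetic mean of 50.5
```

    The gap between the harmonic and arithmetic averages is exactly why naive coarsening of heterogeneous conductivity fails and more careful upscaling or multiscale bases are needed.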
    • Jie Sun
      Clarkson University
    Kernel-based Reconstruction of Spatially Embedded Complex Networks
    Given that numerous real-life complex networks are spatially embedded, we ask: can such spatial constraints be exploited (rather than merely suffered) to make better inferences? Based on the preference for short-range spatial connections, we develop a kernel-based Lasso framework to infer complex spatial networks. We show by numerical experiments that the proposed method improves significantly upon existing network-inference techniques. Importantly, this enhancement is achieved even when the exact spatial distribution of the embedded edges is unknown, making the method particularly relevant as a computational tool to infer large spatial networks efficiently and reliably in practice.
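    A minimal sketch of the kind of spatially weighted sparse regression described above (not the authors' code): a Lasso whose l1 penalty on each candidate edge grows with its spatial length, solved by iterative soft thresholding. The kernel k(d) = exp(d/scale) and all parameter values are assumptions of this toy example.

```python
import numpy as np

# Toy kernel-weighted Lasso via ISTA (proximal gradient): long-range candidate
# edges are penalized more heavily, encoding a preference for short connections.
def kernel_lasso(X, y, dist, lam=0.1, scale=1.0, n_iter=500):
    """min_w 0.5*||Xw - y||^2 + lam * sum_j exp(d_j/scale) * |w_j|."""
    weights = lam * np.exp(np.asarray(dist) / scale)   # spatial penalty per edge
    step = 1.0 / np.linalg.norm(X, 2) ** 2             # 1/Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = w - step * (X.T @ (X @ w - y))             # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - step * weights, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
true_w = np.array([1.5, 0.0, -2.0, 0.0, 0.0])
y = X @ true_w + 0.01 * rng.standard_normal(100)
dist = np.array([0.1, 5.0, 0.2, 5.0, 5.0])  # true edges are short, spurious ones long
w_hat = kernel_lasso(X, y, dist)            # short true edges recovered, long ones zeroed
```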
    • Wenke Wilhelms
      University of British Columbia
    The mimetic multiscale method for electromagnetics at a borehole with casing
    A cased borehole is a good example of a geophysical problem that involves different length scales: highly conductive metal casing a few millimeters thick, alongside length scales of interest measured in meters and kilometers. This leads to a large model in which including fine-scale information at an adequate level of detail risks making the simulation computationally too expensive or even intractable. The mimetic multiscale method is developed to overcome the impracticably large linear systems that result from such models and their very detailed meshes. Multiscale methods provide a framework to model features on finer scales than the simulation mesh. Mimetic methods mimic the behavior of the continuous operators in the discrete setting; for instance, magnetic fields on the coarse mesh are discretely divergence-free. Thus, spurious modes are prevented and the solutions are physical. Maxwell's equations are used to interpolate between the fine mesh, on which the electrical conductivity is discretized, and a nested coarse mesh on which the fields are simulated. This yields a natural homogenization that accounts for fine-scale conductivity changes and reduces the number of degrees of freedom enormously. We demonstrate the effectiveness of the mimetic multiscale method at a vertical cased borehole by simulating electric fields on the coarse mesh and comparing the results with the reference solution on the fine mesh.
    • Christina Frederick
      Georgia Tech
    Multiscale methods for seafloor identification in sonar imagery
    Modern-day sonar systems are capable of obtaining acoustic measurements of the ocean floor with an unprecedented level of precision, yet only about 0.05% of the oceans are mapped to a resolution of a few meters. There is a critical need for efficient methods for remote acoustic classification of the seafloor, especially for time-sensitive tasks such as finding plane wreckage. The main computational obstacle is dealing with the complex scattering effects of structures on the ocean floor. I will discuss a recently developed reduced-order modeling strategy that incorporates simulations of Helmholtz equations on a wide range of spatial scales, allowing for detailed recovery of seafloor parameters, including the material type and roughness.
    • Susana Gomez Gomez
      Universidad Nacional Autonoma de Mexico
    Characterization of Carbonate Oil Reservoirs with Euclidean and Fractal Geometries Using Well-Test Data
    In order to predict the production of a well, it is necessary to solve a parameter-identification inverse problem. The parameters describe the characteristics of the porous media around the well. In the case of carbonate media, it is important to consider triple-porosity and double-permeability models, which have not been sufficiently studied in the past. Currently, most of these reservoirs are also studied by means of Euclidean models, which implicitly assume a uniform distribution of fractures and that all fractures are interconnected. However, there is evidence that these assumptions are not representative of some of these systems. Fractal theory considers a nonuniform distribution of fractures and the presence of fractures at different scales; thus, it can help explain the behavior of many fractured reservoirs. In this project we work with two triple-porosity and double-permeability models: the first with Euclidean geometry and the second with fractal geometry. We will present results of a sensitivity analysis to understand the effect of the parameters of both models on the pressure. Using the sensitivity information, we identify the parameters with the Tunneling Global Optimization Method. We will present results on synthetic and on real Mexican carbonate reservoirs.
    • Eric Chung
      Chinese University of Hong Kong
    Cluster-based Generalized Multiscale Finite Element Method for elliptic PDEs with random coefficients
    We propose a generalized multiscale finite element method (GMsFEM) based on a clustering algorithm to study elliptic PDEs with random coefficients in the multi-query setting. Our method consists of offline and online stages. In the offline stage, we construct a small number of reduced basis functions within each coarse grid block, which can then be used to approximate the multiscale finite element basis functions. In the online stage, we can obtain the multiscale finite element basis very efficiently on a coarse grid by using the pre-computed multiscale basis. The new GMsFEM can be applied to multiscale SPDEs starting from a relatively coarse grid, without requiring the coarsest grid to resolve the smallest scale of the solution. The new method offers considerable savings in solving multiscale SPDEs. Numerical results are presented to demonstrate the accuracy and efficiency of the proposed method for several multiscale stochastic problems without scale separation. The research of Eric Chung is supported by the Hong Kong RGC General Research Fund (Project 14317516).
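    The offline stage described above can be caricatured as follows: cluster the sampled random coefficients, then compress the solution snapshots in each cluster into a small basis with an SVD. This is an illustrative sketch on made-up data, not the GMsFEM construction itself (which builds basis functions per coarse grid block via local spectral problems).

```python
import numpy as np

# Toy offline stage: k-means on coefficient samples, then a POD (SVD) basis
# per cluster.  Data, sizes, and the "solution" map are invented.
def kmeans(samples, k, n_iter=20):
    centers = samples[:: max(len(samples) // k, 1)][:k]  # deterministic, spread-out init
    labels = np.zeros(len(samples), dtype=int)
    for _ in range(n_iter):
        labels = np.argmin(((samples[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([samples[labels == j].mean(0) for j in range(k)])
    return labels

rng = np.random.default_rng(1)
coeffs = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(3.0, 0.1, (20, 2))])
snapshots = np.array([[np.sin(a), np.cos(b), a * b] for a, b in coeffs])  # toy "solutions"
labels = kmeans(coeffs, k=2)
bases = []
for j in range(2):
    U, s, _ = np.linalg.svd(snapshots[labels == j].T, full_matrices=False)
    bases.append(U[:, s > 1e-8 * s.max()])  # keep the energetic modes of this cluster
```

    At the online stage, a new coefficient sample would be assigned to its nearest cluster and its solution approximated in that cluster's small basis.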
    • Yury Elena García Puerta
      Centro de Investigación en Matemáticas, A.C.
    Bayesian Analysis of a Multi-Pathogen Model
    Influenza and Respiratory Syncytial Virus (RSV) are the leading etiologic agents of seasonal Acute Respiratory Infections (ARI). Medical doctors around the world typically gather weekly consultation reports of ARI; however, the laboratory tests necessary to identify the virus are not always conducted in clinical settings. A relevant problem is to infer the interaction of Influenza and RSV from aggregated ARI data. In this work we consider a particle MCMC method to perform Bayesian inference on the kinetic parameters of a multi-pathogen epidemic model. Each iteration of the scheme requires an estimate of the marginal likelihood from the output of a sequential Monte Carlo scheme (known as a particle filter). We use a linear noise approximation (LNA) to estimate this marginal likelihood and a hierarchical model to separate each disease within the aggregated data. We have used ARI data, together with Influenza and RSV records from a sentinel program at a central hospital in San Luis Potosi, Mexico, to test our method.
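    The marginal-likelihood estimate that drives each particle MCMC iteration can be illustrated on a toy scalar linear-Gaussian model, where the exact value is available from a Kalman filter for comparison. The sketch uses a bootstrap particle filter rather than the LNA of the talk, and all model parameters are invented.

```python
import numpy as np

# Bootstrap particle filter for x_t = a*x_{t-1} + N(0,q), y_t = x_t + N(0,r),
# returning the log marginal likelihood estimate used inside particle MCMC.
def pf_loglik(y, a=0.9, q=0.5, r=0.5, n_part=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_part)          # particles drawn from the N(0,1) prior
    ll = 0.0
    for obs in y:
        x = a * x + rng.normal(0.0, np.sqrt(q), n_part)        # propagate
        logw = -0.5 * np.log(2 * np.pi * r) - (obs - x) ** 2 / (2 * r)
        ll += np.log(np.mean(np.exp(logw)))                    # likelihood increment
        w = np.exp(logw - logw.max())
        x = rng.choice(x, n_part, p=w / w.sum())               # multinomial resampling
    return ll

# Exact log marginal likelihood from the Kalman filter, for comparison.
def kalman_loglik(y, a=0.9, q=0.5, r=0.5):
    m, P, ll = 0.0, 1.0, 0.0                  # same N(0,1) prior as above
    for obs in y:
        m, P = a * m, a * a * P + q           # predict
        S = P + r
        ll += -0.5 * (np.log(2 * np.pi * S) + (obs - m) ** 2 / S)
        K = P / S
        m, P = m + K * (obs - m), (1 - K) * P  # update
    return ll

rng = np.random.default_rng(42)
x_true, y = 0.0, []
for _ in range(30):                            # simulate 30 observations
    x_true = 0.9 * x_true + rng.normal(0.0, np.sqrt(0.5))
    y.append(x_true + rng.normal(0.0, np.sqrt(0.5)))
```

    With a few thousand particles, the particle-filter estimate tracks the exact Kalman value closely; in particle MCMC this noisy estimate replaces the intractable likelihood of the epidemic model.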
    • Julianne Chung
      Virginia Tech
    Computationally efficient methods for Bayesian inversion and uncertainty quantification
    Computing solutions to large-scale Bayesian inverse problems can be a challenging task. Recent work on generalized hybrid iterative approaches has enabled the efficient computation of Tikhonov-regularized solutions (e.g., computing maximum a posteriori estimates), but the computational costs of obtaining uncertainty estimates for these solutions remain prohibitive. In this talk, we describe efficient methods based on the generalized Golub-Kahan bidiagonalization for estimating variances of the posterior distribution and for efficient sampling from the posterior distribution. These methods are ideal for problems where explicit computation of the square root and inverse of the prior covariance matrix is not feasible. Such scenarios arise, for example, in problems where covariance kernels are defined on irregular grids, e.g., those from the Matérn class, and the resulting covariance matrices are only available via matrix-vector multiplication. Numerical examples from seismic imaging applications demonstrate the effectiveness of the described approaches. This is joint work with Arvind Saibaba, North Carolina State University.
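    The building block named above, Golub-Kahan bidiagonalization driven purely by matrix-vector products, can be sketched as follows. This is the plain (not the generalized) process, shown only to illustrate why such methods suit operators available solely through mat-vecs; the small example matrix is invented.

```python
import numpy as np

# k steps of Golub-Kahan bidiagonalization using only matvec (A @ x) and
# rmatvec (A.T @ y), so A itself is never formed or factorized.
def golub_kahan(matvec, rmatvec, b, k):
    """Return U (m x k+1), V (n x k), bidiagonal B (k+1 x k) with A @ V == U @ B."""
    beta = np.linalg.norm(b)
    U, V, alphas, betas = [b / beta], [], [], []
    for _ in range(k):
        v = rmatvec(U[-1]) - (betas[-1] * V[-1] if V else 0.0)
        alpha = np.linalg.norm(v); v /= alpha
        u = matvec(v) - alpha * U[-1]
        beta_j = np.linalg.norm(u); u /= beta_j
        V.append(v); U.append(u); alphas.append(alpha); betas.append(beta_j)
    B = np.zeros((k + 1, k))
    for j in range(k):
        B[j, j], B[j + 1, j] = alphas[j], betas[j]
    return np.column_stack(U), np.column_stack(V), B

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
b = rng.standard_normal(8)
U, V, B = golub_kahan(lambda x: A @ x, lambda y: A.T @ y, b, k=3)
```

    The small bidiagonal B carries the information needed for regularized solves and, in the generalized hybrid setting, for variance estimation and posterior sampling.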
    • Olivier Zahm
      MIT
    Certified dimension reduction in nonlinear Bayesian inverse problems
    A large-scale Bayesian inverse problem has a low effective dimension when the data are informative only on a low-dimensional subspace of the input parameter space. Detecting this subspace is essential for reducing the complexity of the inverse problem. For example, this subspace can be used to improve the performance of Markov chain Monte Carlo algorithms and to facilitate the construction of surrogates for expensive likelihood functions. Several different methods have recently been proposed to construct such a subspace---in particular, gradient-based approaches like the likelihood-informed subspace method and the active subspace method. In the context of nonlinear inverse problems, however, both methods are essentially heuristics whose approximation properties, relative to an optimal approximation, are poorly understood. We introduce a best approximation problem for the posterior distribution, which defines the best possible parameter space decomposition. Solving this problem in general non-Gaussian/non-linear cases, however, appears intractable. Instead, we develop a new bound on the Kullback-Leibler divergence between the posterior distribution and the approximation induced by a parameter space decomposition. We then identify the subspace that minimizes this bound, and compute it using gradients of the likelihood function. This approach allows the approximation error to be rigorously controlled. We also address the question of efficient computation of the parameter space decomposition when only a limited number of gradient evaluations are allowed. A numerical comparison with existing methods favorably illustrates the performance of our new approach. This is joint work with Youssef Marzouk, Tiangang Cui, Kody Law, and Alessio Spantini.
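    One of the gradient-based heuristics mentioned above, the active subspace method, can be sketched in a few lines: estimate E[g g^T] from gradient samples of the log-likelihood and keep its dominant eigenvectors. This is the baseline construction, not the certified bound of the talk; the test function is invented.

```python
import numpy as np

# Detect the data-informed subspace from gradient samples: eigendecompose the
# empirical average of outer products of gradients and keep dominant directions.
def informed_subspace(grad_samples, tol=1e-10):
    H = grad_samples.T @ grad_samples / len(grad_samples)  # estimate of E[g g^T]
    eigval, eigvec = np.linalg.eigh(H)                     # ascending eigenvalues
    keep = eigval > tol * eigval.max()
    return eigvec[:, keep][:, ::-1], eigval[keep][::-1]    # descending order

# Toy likelihood depends on x only through w^T x, so every gradient is parallel to w.
w = np.array([3.0, 4.0, 0.0]) / 5.0
rng = np.random.default_rng(0)
xs = rng.standard_normal((200, 3))
grads = (xs @ w)[:, None] * w          # gradient of 0.5*(w^T x)^2 is (w^T x) * w
basis, spectrum = informed_subspace(grads)   # basis spans the informed direction w
```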
    • Alen Alexanderian
      North Carolina State University
    Scalable methods for D-optimal experimental design in large-scale Bayesian linear inverse problems
    We address the problem of optimal experimental design (OED) for large-scale Bayesian linear inverse problems governed by PDEs. Specifically, the goal is to find optimal placement of sensors where observational data are collected. We focus on sensor placements that maximize the expected information gain. That is, we rely on a Bayesian D-optimal criterion, given by the expected Kullback-Leibler divergence from prior to posterior. In the infinite-dimensional Hilbert space setting, assuming a Gaussian prior and an additive Gaussian noise model, the analytic expression for the OED objective function is given by the log-determinant of a perturbation of the identity by a prior-preconditioned data misfit Hessian operator. We introduce efficient methods to make the computation of OED objective function and its gradient computationally tractable. Numerical results illustrating our framework will be provided in the context of D-optimal sensor placement for optimal recovery of initial state in a time-dependent advection-diffusion problem.
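    For a finite-dimensional linear-Gaussian caricature of this setup, the D-optimal objective is 0.5 log det(I + H), with H the prior-preconditioned data-misfit Hessian restricted to the chosen sensor rows. The sketch below, including the forward map and the greedy single-sensor choice, is an assumed toy example, not the paper's scalable infinite-dimensional method.

```python
import numpy as np

# Expected information gain (expected KL from prior to posterior) for a
# linear forward map F, Gaussian prior, and additive Gaussian noise.
def expected_info_gain(F, sensors, noise_std, prior_sqrt):
    Fs = F[sensors] / noise_std                    # observed rows, noise-whitened
    Ht = prior_sqrt.T @ Fs.T @ Fs @ prior_sqrt     # prior-preconditioned misfit Hessian
    _, logdet = np.linalg.slogdet(np.eye(Ht.shape[0]) + Ht)
    return 0.5 * logdet

rng = np.random.default_rng(0)
F = rng.standard_normal((6, 4))    # 6 candidate sensors observing 4 parameters
prior_sqrt = np.eye(4)             # identity prior covariance for simplicity
gains = {k: expected_info_gain(F, [k], 0.1, prior_sqrt) for k in range(6)}
best = max(gains, key=gains.get)   # greedy choice of the single most informative sensor
```

    Adding a sensor adds a positive semidefinite term to the Hessian, so the objective is monotone in the sensor set; the challenge addressed in the talk is evaluating such log-determinants at scale.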
    • Misha Kilmer
      Tufts University
    Krylov Recycling for Sequences of Shifted Systems Arising in Image Reconstruction
    Discrete image reconstruction or restoration problems are usually posed as (a sequence of) optimization problems. Due either to the nonlinearity of the forward model or to the type of regularization used, a single cost-function evaluation will often require the solution of a large-scale linear system with multiple complex identity or non-identity shifts, possibly also with multiple right-hand sides. Thus, in many image reconstruction and restoration problems, the single biggest computational bottleneck to image recovery is the solution of a sequence of large-scale linear systems with complex identity or non-identity shifts. In this talk, we present new, computationally efficient Krylov recycling methods designed to significantly reduce the costs of solving such systems. We illustrate the advantage of using our methods in the context of specific image reconstruction and restoration applications. This is joint work with Eric de Sturler (Virginia Tech) and Meghan O'Connell (MathWorks).
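    The reason shifted systems can share work is that Krylov subspaces are shift-invariant: the Krylov space of A + sigma*I and b equals that of A and b. A minimal numpy sketch of this fact (a single Arnoldi factorization reused for several shifts via the projected least-squares problem; not the recycling methods of the talk, which go further and reuse subspaces across right-hand sides):

```python
import numpy as np

# Build one Arnoldi factorization of (A, b), then solve (A + sigma*I)x = b for
# several real shifts by shifting only the small Hessenberg matrix.
def arnoldi(A, b, k):
    """k steps of Arnoldi: A @ V[:, :k] == V @ H with orthonormal V."""
    n = len(b)
    V = np.zeros((n, k + 1)); H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(0)
n, k = 50, 30
G = rng.standard_normal((n, n))
A = G @ G.T / n + 2.0 * np.eye(n)              # well-conditioned SPD test matrix
b = rng.standard_normal(n)
V, H = arnoldi(A, b, k)                        # built once, reused for every shift
beta_e1 = np.zeros(k + 1); beta_e1[0] = np.linalg.norm(b)
solutions = {}
for sigma in (0.0, 0.5, 1.0):
    Hs = H.copy(); Hs[:k] += sigma * np.eye(k)         # shift the projected operator
    y, *_ = np.linalg.lstsq(Hs, beta_e1, rcond=None)   # small GMRES least-squares
    solutions[sigma] = V[:, :k] @ y                    # iterate for (A + sigma*I)x = b
```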
    • Tim Hoheisel
      McGill University
    A New Class of Matrix Support Functionals with Applications
    A new class of matrix support functionals is presented which establishes a connection between optimal value functions for quadratic optimization problems, the matrix-fractional function, the pseudo-matrix-fractional function, and the nuclear norm. It is therefore significant for different areas such as machine and statistical learning and inverse problems. The support functional is based on the graph of the product of a matrix with its transpose. Closed-form expressions for the support functional and its subdifferential are derived. In particular, the support functional is shown to be continuously differentiable on the interior of its domain, and a formula for the derivative is given when it exists.
    • Jennifer Mueller
      Colorado State University
    The direct D-bar method for dynamic EIT reconstructions of pulmonary function
    Electrical impedance tomography (EIT) is a noninvasive, non-ionizing medical imaging technique in which the conductivity of the tissue in a region of interest is computed from measurements of the current density-to-voltage mapping on the surface. The reconstruction problem is a nonlinear, ill-posed inverse problem. In this talk, a direct reconstruction method will be presented that is being used for pulmonary imaging on patients at Children's Hospital of Colorado with the ACE 1 EIT system. The direct D-bar method is a real-time 2-D reconstruction algorithm based on complex geometrical optics solutions and inverse scattering. In our study, EIT data are collected simultaneously with pulmonary function tests (PFTs), and regional and global measures of pulmonary function are computed from the images. These measures show strong correlation with the PFT output and may provide additional clinical information in the future for diagnosis, monitoring, and treatment.