Abstract
High-dimensional problems appear naturally in various scientific areas, such as PDEs describing complex processes in computational chemistry and physics, or stochastic and parameter-dependent PDEs leading to deterministic problems with a large number of variables. Other highly visible examples are regression and classification with high-dimensional data as input and/or output in the context of learning theory. High-dimensional problems cannot be solved by traditional numerical techniques because of the so-called curse of dimensionality.
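As a standard back-of-the-envelope illustration of this curse (added here only for orientation, not as a statement about the workshop contributions): resolving a function on a tensor-product grid with $N$ points per coordinate direction requires $N^d$ degrees of freedom in $d$ dimensions, so that already
\[
N = 100, \quad d = 10 \qquad \Longrightarrow \qquad N^d = 100^{10} = 10^{20}
\]
unknowns, far beyond any realistic storage or computing budget; for every fixed target resolution the cost grows exponentially in the dimension $d$.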
Such problems therefore amplify the need for novel theoretical and computational approaches, in order, first, to make them tractable and, second, to offer ever finer resolution of the relevant features. Paradoxically, increasing computational power only heightens this demand. The wealth of available data itself becomes a major obstruction. Extracting essential information from complex structures and developing rigorous models to quantify the quality of information in a high-dimensional context lead to tasks that are not tractable by existing methods.
The last decade has seen the emergence of several new computational methodologies to address the above obstacles. Their common features are the nonlinearity of the solution methods and the ability to separate solution characteristics living on different length scales. Perhaps the most prominent examples are adaptive grid solvers; tensor-product, sparse-grid, and hyperbolic wavelet approximations; and model reduction approaches. These have drastically advanced the frontiers of computability for certain problem classes in numerical analysis.
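To indicate, in textbook form, the kind of gain such constructions aim at (a standard estimate for sparse grids, not a result reported at the workshop): for functions with bounded mixed derivatives, a sparse grid replaces the $O(N^d)$ degrees of freedom of a full tensor-product grid with $N$ points per direction by
\[
O\bigl(N (\log N)^{d-1}\bigr),
\]
while retaining essentially the same approximation accuracy up to logarithmic factors, so the exponential dependence on the dimension $d$ is confined to the much milder logarithmic term.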
This workshop deepened the understanding of the underlying mathematical concepts that drive this new evolution of computation and promoted the exchange of ideas emerging in various disciplines on the treatment of multiscale and high-dimensional problems.