Abstract: Climate models are among the most widely used tools for impact assessment and policy making, and they are instrumental in obtaining physically consistent projections of future climate. While it is of crucial importance to assess the sensitivity of such models to different inputs, such as scenarios and parameterizations, a comprehensive analysis is not feasible due to computational and memory constraints.
In this talk I will present new classes of global space-time statistical models (emulators) that provide useful stochastic approximations to climate models. Emulators trained on a small set of model runs allow sensitivity analyses to be performed with surrogate runs that are orders of magnitude faster to compute than the original climate model.
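As a minimal sketch of the emulation idea, the snippet below fits a Gaussian-process surrogate to a handful of "model runs" and predicts the response at new inputs. The squared-exponential kernel, the one-dimensional input, and all function names are illustrative assumptions, not the models presented in the talk:

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_emulate(x_train, y_train, x_new, noise=1e-6):
    """Gaussian-process posterior mean at x_new, given a few training runs."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = rbf_kernel(x_new, x_train)
    return K_star @ np.linalg.solve(K, y_train)

# A handful of "climate model runs": response to a scalar forcing input.
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train)   # stand-in for expensive model output
x_new = np.linspace(0.0, 1.0, 50)
y_emulated = gp_emulate(x_train, y_train, x_new)  # surrogate predictions
```

Once trained, evaluating the surrogate costs a single linear solve against the small training set, which is what makes emulator-based sensitivity analysis so much cheaper than rerunning the full climate model.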
I will also discuss some of my methodological research on global space-time statistical models, spanning from nonstationary Gaussian processes on the sphere × time domain that can account for land/ocean regimes and vertical temperature profiles, to skew-t processes for daily data. The large dimensionality of the model output, comprising hundreds of millions of data points, poses significant computational challenges, and in this talk I will also introduce a new sequential likelihood approximation that allows scalable, parallelizable, and asymptotically consistent inference for massive datasets.
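The sequential idea can be illustrated with a Vecchia-type factorization, in which each observation's Gaussian conditional density conditions only on a few previous points rather than the full dataset. This toy one-dimensional version with an exponential covariance is a generic sketch of the sequential-approximation principle, not the specific method introduced in the talk:

```python
import numpy as np

def exp_cov(locs, range_=0.5):
    """Exponential covariance for 1-D locations."""
    d = np.abs(locs[:, None] - locs[None, :])
    return np.exp(-d / range_)

def vecchia_loglik(y, locs, m=3, range_=0.5):
    """Sequential Gaussian log-likelihood: each point conditions on
    at most its m nearest predecessors in a coordinate ordering."""
    order = np.argsort(locs)
    y, locs = y[order], locs[order]
    ll = 0.0
    for i in range(len(y)):
        nbrs = np.arange(max(0, i - m), i)      # previous m points only
        idx = np.append(nbrs, i)
        C = exp_cov(locs[idx], range_)
        if len(nbrs) == 0:
            mu, var = 0.0, C[0, 0]
        else:
            w = np.linalg.solve(C[:-1, :-1], C[:-1, -1])
            mu = w @ y[nbrs]                    # conditional mean
            var = C[-1, -1] - w @ C[:-1, -1]    # conditional variance
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return ll
```

Each conditional term involves only a small m × m solve, so the cost is linear in the number of observations and the terms can be computed in parallel; with m equal to the full history, the factorization recovers the exact joint Gaussian log-likelihood.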