Speakers
Gravitational lensing in galaxy clusters is one of the most accurate methods to probe the dark matter mass distribution inside such systems and test the \( \Lambda\mathrm{CDM} \) cosmological paradigm.
In my talk, I will present the weak lensing mass reconstruction of the galaxy cluster Abell 2744 based on Subaru [1], Magellan, and JWST data, and I will show how the results obtained, combined with accurate strong lensing modelling [2], provide a consistent picture of its total mass distribution. Because this cluster is composed of multiple substructures undergoing a merger, its complex geometry makes it an ideal laboratory to study the accuracy of the results obtained with the two methods.
[1] Medezinski, E., et al. (2016), Frontier Fields: Subaru Weak Lensing Analysis of the merging galaxy cluster A2744, ApJ, 817, 24
[2] Bergamini, P., et al. (2022), New high-precision strong lensing modeling of Abell 2744. Preparing for JWST observations, arXiv:2207.09416v1
Weak lensing is a powerful probe of the late-time Universe and the large-scale structure. Measurements constrain the amount of matter in the Universe and the amplitude of clustering, with the combination of the two known as S8. Weak galaxy lensing surveys have consistently reported a lower amplitude for the matter fluctuation spectrum, as measured by the S8 parameter, than expected in the ΛCDM cosmology favoured by Planck. I will review how weak lensing analyses are carried out, from pixels to cosmological parameters, including the challenges faced and the state-of-the-art methods to overcome them. I will discuss the current status of tensions and possible resolutions.
[1] Weak lensing for precision cosmology - Rachel Mandelbaum arxiv.org/abs/1710.03235
[2] Weak gravitational lensing - Matthias Bartelmann, Matteo Maturi arxiv.org/abs/1612.06535
[3] A non-linear solution to the S8 tension? - Alexandra Amon, George Efstathiou arxiv.org/abs/2206.11794
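The S8 combination mentioned above is conventionally defined as \( S_8 = \sigma_8 \sqrt{\Omega_m/0.3} \). A minimal sketch of the definition (the input values below are illustrative placeholders, not survey measurements):

```python
# S8 = sigma_8 * sqrt(Omega_m / 0.3): the degenerate combination of the
# clustering amplitude and the matter density best constrained by lensing.
import math

def s8(sigma8, omega_m):
    return sigma8 * math.sqrt(omega_m / 0.3)

# Illustrative placeholder values only; see [1,3] for measured constraints.
print(round(s8(0.81, 0.30), 3))
```

At fixed \( \sigma_8 \), a lower matter density lowers S8, which is why lensing surveys report the combination rather than either parameter alone.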
In this course, I will give an overview of the field of cosmological simulations of the large-scale structure of the Universe. I will summarize the main numerical algorithms employed, the techniques to generate initial conditions, and the various possible alternatives for describing dark matter. I will also discuss several issues to achieve robust and accurate results. I will finalize by describing the connection to cosmological observables and by providing an outlook for the next decade.
"Large-scale dark matter simulations", Raúl E. Angulo, Olivier Hahn - Living Reviews in Computational Astrophysics - link.springer.com/article/10.1007/s41115-021-00013-z
In the past twenty years, spectroscopic surveys have become one of the most powerful methods to map the three-dimensional matter distribution in our Universe. From these maps, we can learn about dark energy, inflation, neutrino masses and possible alternatives to general relativity. In these lectures, I will overview the process from collecting photons all the way to obtaining cosmological constraints, both for galaxies and for the special case of the Lyman-alpha forest. These lectures are based largely on the Sloan Digital Sky Survey, but most of the concepts apply to future spectroscopic surveys.
[1] "The SDSS-IV Extended Baryon Oscillation Spectroscopic Survey: Overview and Early Data" (eBOSS overview), Dawson et al., The Astronomical Journal, 151:44, 2016 - arxiv.org/abs/1508.04473
[2] "Completed SDSS-IV extended Baryon Oscillation Spectroscopic Survey: Cosmological implications from two decades of spectroscopic surveys at the Apache Point Observatory" (SDSS cosmological implications), Shadab Alam et al. Phys. Rev. D 103, 083533 (2021) - arxiv.org/abs/2007.08991
Recent observations suggest that dark matter has to be stable or, at least, very long-lived, but the presence of decay processes within the dark sector remains an open question. We study the nonlinear effects of two sub-classes of dark matter decays: one-body [1] and two-body [2] decaying dark matter, models that have been claimed to resolve the well-known \( S_8 \) tension. We develop nonlinear recipes for both models and investigate their constraints and \( S_8 \) tension implications from recent CMB and weak lensing surveys. We show that CMB data tightly constrain one-body decays via the integrated Sachs-Wolfe effect, while weak lensing plays a dominant role in probing the late-time two-body decays.
[1] J. Bucko, S. K. Giri and A. Schneider, Constraining dark matter decays with cosmic microwave background and weak lensing shear observations, accepted to A&A (2023) [astro-ph.CO/2211.14334].
[2] J. Bucko, S. K. Giri and A. Schneider, Nonlinear modelling of warm decays within the dark sector and implications on the \( S_8 \) tension, submitted to A&A (2023)
The galaxy bispectrum has become a powerful observable to probe the early universe by testing for primordial non-Gaussianity. I will present a reliable model for the squeezed bispectrum in which the short modes are deep in the non-linear regime [1]. For this model, we exploit the non-perturbative character of the large-scale consistency relation [2] and the response function approach [3]. We write the response function for the small-scale density contrast to a long-wavelength perturbation at the field level. We propose a fitting function for the response coefficients, and we validate the model against the squeezed bispectrum measured from dark matter N-body simulations.
[1] Biagetti, Matteo and Calles, Juan and Castiblanco, Lina and González, Katherine and Noreña, Jorge, A Model for the Squeezed Bispectrum in the Non-Linear Regime, [hep-ph/2212.11940].
[2] A. Kehagias and A. Riotto, Symmetries and Consistency Relations in the Large Scale Structure of the Universe, Nucl. Phys. B 873 (2013), 514-529 [hep-ph/1302.0130].
[3] Barreira, Alexandre and Schmidt, Fabian, Responses in Large-Scale Structure, JCAP 06 (2017), 053 [hep-ph/1703.09212].
This course will be devoted to the constraints that high-energy and multi-messenger astrophysics can bring to cosmology. We will start by outlining the instrumental and observational landscape in this field. We will then discuss more specifically the applications to cosmology, emphasizing the contributions of the multi-messenger approach and particularly the results obtained by combining gravitational and electromagnetic observations of the GW170817 neutron star merger. We will conclude this course by discussing several promising perspectives.
[1] "Advancing the Landscape of Multimessenger Science in the Next Decade", contribution to Snowmass 2021 (US Community Study on the Future of Particle Physics)
[2] "A gravitational-wave standard siren measurement of the Hubble constant", Abbott et al., Nature, volume 551, issue 7678, pp. 85-88 (2017) - arxiv.org/abs/1710.05835
[3] "A Hubble constant measurement from superluminal motion of the jet in GW170817", Hotokezaka et al., Nature Astronomy, volume 3, pp. 940-944 (2019) - arxiv.org/abs/1806.10596
[4] "The potential role of binary neutron star merger afterglows in multimessenger cosmology", Mastrogiovanni et al., A&A, volume 652, id.A1 (2021) - arxiv.org/abs/2012.12836
Accurately describing the relation between the dark matter overdensity and the observable galaxy field is one of the significant challenges in analyzing cosmic structures with next-generation galaxy surveys. Current galaxy bias models are either inaccurate or computationally too expensive to be used for efficient inference of small-scale information. I will present a hybrid machine learning approach, the Neural Physical Engine [1], that addresses this problem. The network architecture exploits physical information of the galaxy bias problem and is suitable for zero-shot learning within field-level inference. Furthermore, the model can efficiently generate mock halo catalogues on the scales of wide-field surveys like Euclid.
[1] Charnock, T., Lavaux, G., Wandelt, B. D., Sarma Boruah, S., Jasche, J., and Hudson, M. J. (2020). Neural physical engines for inferring the halo mass distribution function, Monthly Notices of the Royal Astronomical Society, 494(1), 50-61.
SPHEREx is a NASA Astrophysics Medium Explorer mission to produce a near-infrared all-sky spectrophotometric survey. The 2-year mission will result in a spectrum for every 6-arcsecond pixel on the sky between 0.75 and 5 microns, at a spectral resolution varying between R=35 and 130. The mission is optimized to address three science themes:
- inflation in the early Universe through searching for imprints of non-Gaussianity on the large scale structure in the universe;
- the history of galaxy formation through measuring spectra of the extragalactic background fluctuations;
- the inventory of biogenic ices in our own Galaxy by surveying ice absorption features towards stars.
SPHEREx passed its Critical Design Review in 2021 and will be launched no later than April 2025. I will give an overview of the mission with a focus on SPHEREx extra-galactic science and present recent accomplishments by the team, developing novel simulation and analysis tools to constrain inflationary parameters. I will also discuss the remaining challenges.
[1] "Cosmology with the SPHEREx All-Sky Spectral Survey", Olivier Doré, et al. - arxiv.org/abs/1412.4872
[2] "Science Impacts of the SPHEREx All-Sky Optical to Near-Infrared Spectral Survey: Report of a Community Workshop Examining Extragalactic, Galactic, Stellar and Planetary Science", Olivier Doré, et al., - arxiv.org/abs/1606.07039
[3] "Science Impacts of the SPHEREx All-Sky Optical to Near-Infrared Spectral Survey II: Report of a Community Workshop on the Scientific Synergies Between the SPHEREx Survey and Other Astronomy Observatories", Olivier Doré, et al. - arxiv.org/abs/1805.05489
By choosing a suitable set of cosmological parameters, one can classify them into two groups: the evolution parameters, \( \Theta_e \), which determine the amplitude of the linear matter power spectrum \( P_\mathrm{L}(k) \) at a given redshift, and the shape parameters, \( \Theta_s \), which only affect its shape. Ref. [1] showed that the \( \Theta_e \) follow a perfect degeneracy with redshift, so that their impact on \( P_\mathrm{L}(k, z) \) can be mapped into a single parameter: \( \sigma_{12}(z) \), the rms linear density variance in spheres of radius 12 Mpc. Ref. [2] showed that this degeneracy is also present in the non-linear power spectrum, up to small deviations at small scales. We show how this framework can also be extended to describe the velocity field, crucial in modelling the impact of redshift-space distortions on clustering statistics.
[1] A. G. Sánchez, Arguments against using \( h^{-1} \) Mpc units in observational cosmology, Phys. Rev. D 102, 123511 (2020).
[2] A. G. Sánchez et al., Evolution mapping: a new approach to describe matter clustering in the non-linear regime, MNRAS 514, 4, pp. 5673-5685 (2022).
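As an illustration of the \( \sigma_{12}(z) \) definition above, a short sketch computing the rms linear density fluctuation in spheres of radius 12 Mpc from a tabulated power spectrum; the \( P_\mathrm{L}(k) \) used here is a toy placeholder with a roughly CDM-like shape, not a Boltzmann-code output:

```python
import numpy as np

def tophat_window(x):
    # Fourier transform of a spherical top-hat filter of unit radius
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma_R(k, pk, R=12.0):
    """sigma^2(R) = 1/(2 pi^2) * integral dk k^2 P(k) W^2(kR)."""
    integrand = k**2 * pk * tophat_window(k * R) ** 2
    # trapezoidal rule over the tabulated wavenumbers
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k))
    return np.sqrt(integral / (2.0 * np.pi**2))

# Toy linear power spectrum with a CDM-like turnover (placeholder only)
k = np.logspace(-4, 1, 2000)        # wavenumbers in 1/Mpc
pk = k / (1.0 + (k / 0.02) ** 3)    # arbitrary amplitude and shape
print(sigma_R(k, pk, R=12.0))
```

Note the radius is in Mpc, not \( h^{-1} \) Mpc, which is the point of [1]: with this convention \( \sigma_{12} \) absorbs the full effect of the evolution parameters.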
The Zwicky Transient Facility, situated at the Palomar Observatory, performs a wide, fast survey of 70% of the sky, aiming at detecting transient events up to magnitude 20. Since its start in 2018, it has detected and measured the flux of 3000 type Ia supernovae (SN Ia) in the redshift range \( 0.01< z <0.1 \). This number should rise to about 5000 by the end of the survey in 2024. SN Ia are standard candles and provide precious distance information, provided that we can measure their flux accurately, regardless of their direction, time of appearance and redshift.
However, ground-based telescopes are affected by time-dependent effects: atmospheric transmission, mirror dusting, camera electronics gain, etc. We aim at milli-magnitude calibration precision, required to prevent systematics from overwhelming the statistical power of this new SN Ia sample.
We present preliminary results of a specific development of the ubercal method [1], first used on SDSS data. The method is based on the non-variable foreground stars appearing in each image. They allow us to build a large system of linear equations with \( \mathcal{O}(10^6) \) star magnitudes and \( \mathcal{O}(10^5) \) instrumental or atmospheric parameters, fit to \( \mathcal{O}(10^8) \) star observations. Powerful sparse matrix techniques and modern computer clusters make the linear system solvable in a reasonable time.
[1] Nikhil Padmanabhan et al, An Improved Photometric Calibration of the Sloan Digital Sky Survey Imaging Data, ApJ 674 1217 (2008).
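The ubercal fit described above can be illustrated with a dense, toy-sized analogue: each observed magnitude is modelled as a star magnitude plus a per-image zero point, and both parameter sets are fit jointly by linear least squares. The problem sizes, noise levels and the way the global degeneracy is pinned below are assumptions for illustration; the real analysis uses sparse solvers at vastly larger scale:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stars, n_images = 20, 5
true_mag = rng.uniform(15.0, 20.0, n_stars)   # true star magnitudes
true_zp = rng.normal(0.0, 0.05, n_images)     # true per-image zero points

# Every star observed in every image, with 10 mmag photometric noise
rows, obs = [], []
for s in range(n_stars):
    for i in range(n_images):
        row = np.zeros(n_stars + n_images)
        row[s] = 1.0              # star-magnitude column
        row[n_stars + i] = 1.0    # image zero-point column
        rows.append(row)
        obs.append(true_mag[s] + true_zp[i] + rng.normal(0.0, 0.01))
A, b = np.array(rows), np.array(obs)

# The system has a global degeneracy (shift all magnitudes up, all zero
# points down); pin the first zero point to break it (an arbitrary choice
# for this toy; real calibrations anchor to external standards).
A = np.vstack([A, np.eye(1, n_stars + n_images, n_stars)])
b = np.append(b, true_zp[0])

fit, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.max(np.abs(fit[n_stars:] - true_zp)))   # zero-point recovery error
```

The design matrix has exactly two non-zero entries per observation row, which is what makes sparse factorizations effective at the \( \mathcal{O}(10^8) \)-observation scale quoted above.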
This talk is based on [1], where we recompute the contribution of inflationary gravitational waves as additional radiation in the early Universe (\( \Delta N^{\rm eff}_{GW} \)). Through a parametric investigation, we demonstrate that the calculation is dominated by the ultraviolet frequencies of the integral. We then perform a theoretical Monte Carlo analysis and, working within the framework of the Effective Field Theory of inflation, we investigate the observable predictions of a very broad class of models. For each model, we solve a system of coupled differential equations whose solution completely specifies the evolution of the spectrum up to the end of inflation. We show that the calculation of \( \Delta N^{\rm eff}_{GW} \) is remarkably model-dependent and therefore conclude that accurate analyses are needed to infer reliable information on the inflationary Universe.
[1] W. Giarè, M. Forconi, E. Di Valentino and A. Melchiorri, Towards a reliable calculation of relic radiation from primordial gravitational waves, Mon. Not. Roy. Astron. Soc. 520, 2 (2023)
We investigate the evolution of cosmic voids in the Schrödinger-Poisson formalism, finding wave-mechanical solutions for the dynamics in a standard cosmological background with appropriate boundary conditions. We compare the results in this model to those obtained using the Zel'dovich approximation. We discuss the advantages of studying voids in general and of the Schrödinger-Poisson description over other approaches, emphasizing in particular the utility of the free-particle approximation. We also discuss a dimensionless number for this system, similar to the Reynolds number, which allows our void solutions to be scaled to systems of different physical dimensions [1].
[1] A. Gallagher and P. Coles, The Open Journal of Astrophysics (2022), 10.21105/astro.2208.13851.
The integrated shear 3-point correlation function \( \zeta_{\pm} \) is a higher-order cosmic shear statistic describing the correlation between the local position-dependent shear 2-point correlation function \( \xi_{\pm} \) and long-wavelength features in the cosmic shear field [1, 2]. To overcome the expensive computational cost of the model prediction, we apply a suite of neural networks to emulate the integrated bispectrum that enters the calculation of \( \zeta_{\pm} \). We also incorporate survey and astrophysical systematics by including in our modelling effects such as photometric redshift uncertainty, shear multiplicative bias and galaxy intrinsic alignment. We demonstrate the accuracy of our emulation, and construct an efficient parameter inference pipeline using MCMC on GPU. The data covariance estimation is inspired by the realistic Dark Energy Survey Year 3 mask, source redshift distribution and shape noise level. With this pipeline, we optimize the observational setup for the \( \zeta_{\pm} \) measurement and show that a joint analysis of \( \xi_{\pm} \) and \( \zeta_{\pm} \) brings tighter parameter constraints than \( \xi_{\pm} \) alone, at a reasonable cost in computational resources [3].
[1] A. Halder et al., The integrated three-point correlation function of cosmic shear, MNRAS 506 (2021) 2 [astro-ph.CO/2102.10177]
[2] A. Halder and A. Barreira, Response approach to the integrated shear 3-point correlation function: the impact of baryonic effects on small scales, MNRAS 515 (2022) 3 [astro-ph.CO/2201.05607].
[3] Z. Gong and A. Halder et al, (2023) in preparation
This talk is based on [1], where we present a particular higher-order statistic called the integrated 3-point correlation function (\( i \)3PCF) which correlates both the weak lensing shear and projected galaxy density contrast fields, extending the cosmic-shear-only \( i \)3PCF developed in [2,3] to the galaxy density field. This framework is a higher-order analog of the galaxy \( \times \) shear 2-point correlation functions (also called 3 \( \times \) 2PCF) employed routinely in ongoing weak lensing surveys. I will discuss how we (i) define the galaxy \( \times \) shear \( i \)3PCFs and how to easily estimate them from weak lensing survey data, (ii) model them at leading order in cosmological perturbation theory and validate the model against measurements from mock simulations, and (iii) show that complementing 3 \( \times \) 2PCFs with the galaxy \( \times \) shear \( i \)3PCFs can lead to tighter constraints on cosmological as well as galaxy bias parameters compared to a 3 \( \times \) 2PCF-only analysis.
[1] A. Halder et al., The integrated 3-point correlation function of projected cosmic density fields: framework for a practical higher-order weak lensing and projected galaxy clustering analysis, in preparation.
[2] A. Halder, O. Friedrich, S. Seitz and T. N. Varga, The integrated three-point correlation function of cosmic shear, Monthly Notices of the Royal Astronomical Society, Volume 506, Issue 2, September 2021, Pages 2780–2803. [https://doi.org/10.1093/mnras/stab1801]
[3] A. Halder and A. Barreira, Response approach to the integrated shear 3-point correlation function: the impact of baryonic effects on small scales, Monthly Notices of the Royal Astronomical Society, Volume 515, Issue 3, September 2022, Pages 4639–4654. [https://doi.org/10.1093/mnras/stac2046]
The next generation of Large-Scale Structure (LSS) surveys will provide an incredible amount of cosmological data. In order to optimize the amount of information one can extract from these data, it is necessary to study cosmological analyses using simulations. Two promising probes of LSS information are cosmic voids (underdense regions between filaments and clusters) and weak lensing (the distortion in the shapes of background galaxies due to inhomogeneities in the LSS). The combination of both is highly sensitive to modifications of the standard model of cosmology, such as modified gravity [1], non-standard neutrino masses [2] and time-varying dark energy [3]. In this work we present advances in the measurement and interpretation of the weak-lensing signal of voids.
[1] A. Barreira, M. Cautun, B. Li, C. M. Baugh and Silvia Pascoli, Weak lensing by voids in modified lensing potentials (2015), [arXiv:1505.05809v2].
[2] E. Massara, F. Villaescusa-Navarro, M. Viel and P.M. Sutter, Voids in massive neutrino cosmologies (2015).
[3] G. Verza, A. Pisani, C. Carbone, N. Hamaus and L. Guzzo, The void size function in dynamical dark energy cosmologies (2019).
This talk is based on a recent work [1] where we infer the characteristic length scale \( r_s \) of the baryon acoustic oscillations (BAO) from low-\( z \) observations in a model-independent way and compare its value with CMB estimates, providing a consistency test of the standard cosmology and its assumptions at high-\( z \). We estimate the absolute BAO scale by combining angular BAO measurements and type Ia supernovae data. Our analysis uses two different methods to connect these data sets and finds good agreement between the low-\( z \) estimates of \( r_{s} \) and the CMB sound horizon at the drag epoch, regardless of the value of the Hubble constant \( H_0 \) considered. These results highlight the robustness of the standard cosmology while also reinforcing the need for more precise cosmological observations at low-\( z \).
[1] Thais Lemos, Ruchika, Joel C. Carvalho, and Jailson Alcaniz, Low-redshift estimates of the absolute scale of baryon acoustic oscillations, arxiv.org/abs/2303.15066.
The damping wing signature of high-redshift quasars in the intergalactic medium (IGM) [1] provides a unique way of probing the history of reionization [2,3]. We will present a Hamiltonian Monte Carlo inference scheme that can be readily applied to observational data and that is based on realistic forward-modelling of high-redshift quasar spectra, including IGM transmission and heteroscedastic observational noise. We make our scheme likelihood-free by using a normalizing flow as a neural likelihood estimator. We provide a full reionization forecast for Euclid by applying our procedure to a set of mock spectra resembling the anticipated Euclid observations and inferring the IGM neutral fraction as a function of redshift.
[1] Miralda-Escudé, J., “Reionization of the Intergalactic Medium and the Damping Wing of the Gunn-Peterson Trough”, The Astrophysical Journal, vol. 501, no. 1, pp. 15–22, 1998. doi:10.1086/305799.
[2] Davies, F. B. et al., “Quantitative Constraints on the Reionization History from the IGM Damping Wing Signature in Two Quasars at \( z > 7 \)”, The Astrophysical Journal, vol. 864, no. 2, 2018. doi:10.3847/1538-4357/aad6dc.
[3] Ďurovčíková, D. et al., “Reionization history constraints from neural network based predictions of high-redshift quasar continua”, Monthly Notices of the Royal Astronomical Society, vol. 493, no. 3, pp. 4256–4275, 2020. doi:10.1093/mnras/staa505.
In this talk, we introduce a forward numerical model for simulating the weak-lensing magnification effect in terms of the deflection of distant tracers [1]. It is part of a broader framework aimed at reconstructing the initial conditions of the Universe and constraining cosmological parameters at the field level using a data-driven approach [2]. We wish to demonstrate that incorporating magnification enhances precision in cosmological parameter constraints and reduces systematic uncertainties in reconstructing initial conditions when combined with other probes. We emphasize the importance of new statistical methods to fully harness upcoming cosmological data and extract maximum information from large-scale structure surveys beyond standard summary statistics [3]. This work underscores the untapped potential of weak-lensing magnification in contributing to a comprehensive understanding of the Universe's composition and evolution.
[1] Chihway Chang, Bhuvnesh Jain, Delensing galaxy surveys, Monthly Notices of the Royal Astronomical Society, Volume 443, Issue 1, 1 September 2014, Pages 102–110, https://doi.org/10.1093/mnras/stu1104
[2] Lavaux Guilhem, Jasche Jens, Leclercq Florent, 2019, preprint (arXiv:1909.06396)
[3] Florent Leclercq, Alan Heavens, On the accuracy and precision of correlation functions and field-level inference in cosmology, Monthly Notices of the Royal Astronomical Society: Letters, Volume 506, Issue 1, September 2021, Pages L85–L90, https://doi.org/10.1093/mnrasl/slab081
Cosmic microwave background observations give very powerful constraints on cosmological models. They contain temperature and polarization information about the perturbations in the early Universe, and can also be used to extract information about gravitational lensing and large-scale structure. I will discuss the basic physics, what can be observed, current and planned observations, and what can be learnt from these CMB observations. Inferences about late-time cosmological parameters like the Hubble parameter are model dependent, but seem to be in conflict with some recent local measurements if the standard ΛCDM model is correct. I will review how the Hubble parameter is measured from local measurements and the CMB, and discuss the current status of tensions and possible resolutions.
[1] "Lecture notes on the physics of cosmic microwave background anisotropies", Anthony Challinor, Hiranya Peiris - arxiv.org/abs/0903.5158
[2] "Weak Gravitational Lensing of the CMB", Antony Lewis, Anthony Challinor - arxiv.org/abs/astro-ph/0601594
The standard approach to inference from cosmic large-scale structure data employs summary statistics that are compared to analytic models in a Gaussian likelihood with pre-computed covariance.
To overcome the idealising assumptions about the form of the likelihood and the complexity of the data inherent to the standard approach, we investigate simulation-based inference (SBI), which learns the likelihood as a probability density parameterised by a neural network. We construct suites of simulated, exactly Gaussian-distributed data vectors for the most recent Kilo-Degree Survey (KiDS) weak gravitational lensing analysis and demonstrate that SBI recovers the full 12-dimensional KiDS posterior distribution with just under \( 10^4 \) simulations. We optimise the simulation strategy by initially covering the parameter space with a hypercube, followed by batches of actively learnt additional points.
The data compression in our SBI implementation is robust to suboptimal choices of fiducial parameter values and of data covariance. Together with a fast simulator, we show that SBI is a competitive and more versatile alternative to standard inference [1].
[1] K. Lin, M. von Wietersheim-Kramsta, B. Joachimi, S. Feeney, A simulation-based inference pipeline for cosmic shear with the Kilo-Degree Survey, arXiv preprint (2022) [arXiv:2212.04521].
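The data compression step mentioned in the abstract can be illustrated with a linear, MOPED-like score compression, which reduces a Gaussian data vector to one summary per parameter before density estimation; the toy model, derivatives and covariance below are illustrative assumptions, not the KiDS data vector:

```python
import numpy as np

def compress(d, mu_fid, dmu_dtheta, cov):
    """MOPED-like summaries t = (dmu/dtheta) C^{-1} (d - mu_fid)."""
    return dmu_dtheta @ np.linalg.inv(cov) @ (d - mu_fid)

# Toy linear model: 10-point data vector, 2 parameters (illustrative only)
x = np.linspace(0.0, 1.0, 10)
def mu(theta):
    return theta[0] + theta[1] * x

theta_fid = np.array([1.0, 2.0])
cov = 0.01 * np.eye(10)                 # assumed diagonal data covariance
dmu = np.stack([np.ones_like(x), x])    # analytic derivatives w.r.t. theta

rng = np.random.default_rng(1)
d = mu(theta_fid) + rng.multivariate_normal(np.zeros(10), cov)
print(compress(d, mu(theta_fid), dmu, cov))   # two compressed summaries
```

The robustness claim above is that mildly wrong fiducial values or covariance in this step cost some constraining power but do not bias the learned posterior, since the same compression is applied consistently to simulations and data.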
The cosmic web, or Large-Scale Structure (LSS), is the massive spiderweb-like arrangement of galaxy clusters and the dark matter holding them together under gravity. The lumpy, spindly universe we see today evolved from a much smoother, infant universe. Understanding how this structure formed, and the information embedded within it, is considered one of the “Holy Grails” of modern cosmology, and might hold the key to resolving existing “tensions” in cosmological theory.
But how do we go about linking these data to theory? Cosmological surveys comprise millions of pixels, which can be difficult for samplers and analytic likelihood analyses. This also poses a problem for simulation-based inference: how can we best compare simulations to observed data?
Information Maximising Neural Networks (IMNNs) [1,2] offer a way to compress massive datasets down to (asymptotically) lossless summaries that contain the same information as a full sky survey, as well as quantify the information content of an unknown distribution.
We will look at LSS assembled as a graph (or network) from discrete catalogue data, and use graph neural networks in the IMNN framework to optimally extract information about cosmological parameters (theory) from this representation [3]. We will make use of the modular graph structure as a way to open the “black box" of simulation-based inference and neural network compression to show where cosmological information is stored.
Prerequisites: We will briefly review a proof-of-concept study using information maximising neural networks (see references), and then proceed to use them as a tool for probing cosmological information as a function of graph structure.
[1] T. Charnock, G. Lavaux, and B. D. Wandelt, “Automatic physical inference with information maximizing neural networks,” Physical Review D, vol. 97, no. 8, apr 2018.
[2] T. L. Makinen, T. Charnock, J. Alsing, and B. D. Wandelt, “Lossless, scalable implicit likelihood inference for cosmological fields,” Journal of Cosmology and Astroparticle Physics, vol. 2021, no. 11, p. 049, nov 2021. [Online]. Available: https://dx.doi.org/10.1088/1475-7516/2021/11/049 This study demonstrates information saturation using information maximising neural networks in the context of cosmological fields. We provide a Google Colab code tutorial that can be used to recreate the results in the paper.
[3] T. L. Makinen, T. Charnock, P. Lemos, N. Porqueres, A. F. Heavens, and B. D. Wandelt, “The cosmic graph: Optimal information extraction from large-scale structure using catalogues,” The Open Journal of Astrophysics, vol. 5, no. 1, dec 2022. [Online]. Available: https://doi.org/10.21105/astro.2207.05202 We provide a Google Colab tutorial and accessible blog post, which can both be found here: https://tlmakinen.github.io/blog/2022/09/12/cosmicgraphs.
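As a toy illustration of the catalogue-to-graph step described above (not the actual pipeline of [3], which uses graph neural networks for the compression), one can build a k-nearest-neighbour graph from halo positions with plain numpy; the function name and the random catalogue below are illustrative only:

```python
import numpy as np

def knn_graph(positions, k=5):
    """Build a k-nearest-neighbour graph from 3D catalogue positions.
    Returns (senders, receivers): receivers[i] is one of the k nearest
    neighbours of senders[i]; self-edges are excluded."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))      # pairwise distances
    np.fill_diagonal(dist, np.inf)                # exclude self-edges
    neighbours = np.argsort(dist, axis=1)[:, :k]
    senders = np.repeat(np.arange(len(positions)), k)
    receivers = neighbours.ravel()
    return senders, receivers

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 100.0, size=(50, 3))       # toy 50-halo catalogue
senders, receivers = knn_graph(pos, k=5)
print(senders.shape, receivers.shape)             # (250,) (250,)
```

The edge list (senders, receivers) is the standard input format for message-passing graph neural networks, which then aggregate features along these edges.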
Higher-order statistics such as the galaxy bispectrum offer non-trivial information with respect to the power spectrum, and in particular can directly probe a primordial non-Gaussian (PNG) component. With upcoming galaxy surveys set to improve PNG constraints by at least one order of magnitude, it is important to account for any potential contamination, such as finite-volume effects. First, I will illustrate our effort to provide a full analysis pipeline for combined power spectrum and bispectrum measurements [1]. Then, I will present an exact and efficient method to perform the bispectrum-window convolution via Hankel transform [2]. Lastly, I will present a formulation that goes beyond the flat-sky approximation for the bispectrum multipoles [3] and conclude with possible applications and future directions.
[1] F. Rizzo, C. Moretti, K. Pardede, A. Eggemeier, A. Oddo, E. Sefusatti, C. Porciani and P. Monaco, The halo bispectrum multipoles in redshift space, JCAP 01 (2023) 031 [2204.13628].
[2] K. Pardede, F. Rizzo, M. Biagetti, E. Castorina, E. Sefusatti and P. Monaco, Bispectrum-window convolution via Hankel transform, JCAP 10 (2022) 066 [2203.04174].
[3] K. Pardede, E. Di Dio and E. Castorina, Wide-angle effects in the galaxy bispectrum, [2302.12789].
The large-scale distribution of galaxies was the first means of studying cosmological metric perturbations, and continues to provide unique information on aspects of the cosmological model and the assembly of structure. The field is poised for a major advance, as the DESI and Euclid projects will together provide redshift catalogues perhaps 30 times larger than those presently available. These lectures will give an overview of the main science goals for these surveys, summarizing the main statistical tools and the theory used to relate galaxies to the distribution of dark matter. Emphasis will be given to the role of large-scale structure in providing complementary information to that from the Cosmic Microwave background and other probes, breaking degeneracies and giving an independent view of cosmological ‘tensions’ – inconsistencies between different estimates of key cosmological parameters.
[1] https://www.roe.ac.uk/japwww/teaching/cos5_1213/cos5_full.pdf
[2] https://arxiv.org/pdf/astro-ph/0206508
This talk is based on [1]. As the next generation of large galaxy surveys comes online, it is becoming increasingly important to develop and understand the machine learning tools that analyze big astronomical data. Neural networks are powerful and capable of probing deep patterns in data, but must be trained carefully on large and representative data sets. We present a new 'hump' of the Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS) project [2]: CAMELS-SAM, encompassing one thousand dark-matter-only simulations of (100 \( h^{-1} \) cMpc)\( ^3 \) with different cosmological parameters (\( \Omega_m \) and \( \sigma_8 \)), run through the Santa Cruz semi-analytic model for galaxy formation over a broad range of astrophysical parameters. As a proof of concept for the power of this vast suite of simulated galaxies in a large volume and broad parameter space, we probe the power of simple clustering summary statistics to marginalize over astrophysics and constrain cosmology using neural networks. We use the two-point correlation function, counts-in-cells, and the Void Probability Function, probing non-linear and linear scales across \( 0.68 < R < 27\ h^{-1} \) cMpc. We find our neural networks can marginalize over the uncertainties in astrophysics to constrain cosmology to 3-8% error across various types of galaxy selections, while simultaneously learning about the SC-SAM astrophysical parameters. This work encompasses vital first steps toward creating algorithms able to marginalize over the uncertainties in our galaxy formation models and measure the underlying cosmology of our universe. CAMELS-SAM has been publicly released alongside the rest of CAMELS [3], and offers great potential to many applications of machine learning in astrophysics: https://camels-sam.readthedocs.io
[1] L. A. Perez, S. Genel, F. Villaescusa-Navarro, et al., Constraining cosmology with machine learning and galaxy clustering: the CAMELS-SAM suite, (2022) [arXiv:2204.02408].
[2] F. Villaescusa-Navarro, D. Anglés-Alcázar, S. Genel, et al., The CAMELS project: Cosmology and Astrophysics with Machine-learning Simulations, The Astrophysical Journal, Vol. 915 (2021).
[3] F. Villaescusa-Navarro, S. Genel, D. Anglés-Alcázar, L. A. Perez, et al., The CAMELS project: public data release (2022), [arXiv:2201.01300].
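One of the clustering summary statistics mentioned above, counts-in-cells, can be sketched in a few lines of numpy; the function and the unclustered toy catalogue below are illustrative, not the CAMELS-SAM measurement code:

```python
import numpy as np

def counts_in_cells(positions, box_size, n_cells):
    """Histogram galaxy positions onto a cubic grid and return the
    probability distribution of cell counts, a simple clustering
    summary statistic."""
    edges = np.linspace(0.0, box_size, n_cells + 1)
    grid, _ = np.histogramdd(positions, bins=(edges, edges, edges))
    counts = grid.ravel().astype(int)
    pdf = np.bincount(counts) / counts.size
    return pdf

rng = np.random.default_rng(1)
gal = rng.uniform(0.0, 100.0, size=(10_000, 3))   # toy unclustered catalogue
pdf = counts_in_cells(gal, box_size=100.0, n_cells=10)
print(np.isclose(pdf.sum(), 1.0))                  # True: a normalised PDF
```

For a Poisson (unclustered) catalogue the PDF is Poissonian; clustering broadens it, which is what the neural networks can pick up on.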
Traditionally, large-scale structure surveys aim to detect individual galaxies in three dimensions. This involves measuring the redshift of each galaxy as well as its angular position on the sky, and then creating a catalog and a corresponding 3D map. This procedure has been routinely used by optical galaxy surveys like SDSS and has led to constraints on dark energy, gravity, and the initial conditions of the Universe. An alternative proposal is to map the large-scale structure of the Universe using the redshifted 21cm line from the spin flip transition in neutral hydrogen (HI) with radio telescopes. The HI Intensity Mapping technique does not require the often difficult and expensive detection of individual galaxies. Instead, it maps the entire HI flux coming from many galaxies together in large 3D pixels, across the sky and along the redshift direction. In my lectures I will introduce the HI Intensity Mapping technique and its applications to cosmology and galaxy evolution studies. I will present MeerKAT and the SKA, and their synergies with optical galaxy surveys like Euclid. I will also introduce a couple of useful open-source codes and provide pedagogical Jupyter notebooks for further study.
[1] "Unveiling the Universe with Emerging Cosmological Probes" (Section 3.8), Moresco et al. arxiv.org/abs/2201.07241 .
[2] "Baryon Acoustic Oscillation Intensity Mapping as a Test of Dark Energy", Chang et al - arxiv.org/abs/0709.3672.
[3] "Measurement of 21 cm brightness fluctuations at z ∼ 0.8 in cross-correlation", Masui et al. - arxiv.org/abs/1208.0331.
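The gridding step at the heart of intensity mapping can be sketched with numpy: sum the flux of all (unresolved) sources falling in each coarse voxel. This is a schematic illustration, not the MeerKAT/SKA map-making pipeline:

```python
import numpy as np

def intensity_map(ra, dec, z, flux, bins):
    """Grid total HI flux into coarse (ra, dec, z) voxels: the
    intensity-mapping analogue of a galaxy catalogue, where individual
    sources are never resolved and only the summed flux per voxel
    is measured."""
    sample = np.column_stack([ra, dec, z])
    total, _ = np.histogramdd(sample, bins=bins, weights=flux)
    return total

rng = np.random.default_rng(2)
n = 5000
ra, dec = rng.uniform(0, 10, n), rng.uniform(-5, 5, n)   # toy sky patch, deg
z = rng.uniform(0.4, 1.4, n)                              # toy redshifts
flux = rng.exponential(1.0, n)                            # toy HI fluxes
cube = intensity_map(ra, dec, z, flux, bins=(20, 20, 10))
print(cube.shape)                            # (20, 20, 10)
print(np.isclose(cube.sum(), flux.sum()))    # True: total flux is conserved
```

The resulting low-resolution cube is what cosmological statistics (power spectra, cross-correlations with optical surveys) are then computed from.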
In this talk I will present my work as a rotation student of professor Aaron Roodman's group in the LSST camera at SLAC/Stanford. I demonstrated through a profile examination of stacked images that the intra-CCD cross-talk is not linear under changes on the flux. This was tested on the ITL CCD at the TS7 crosstalk projector. Understanding this effect will be fundamental for the LSST camera due to the future presence of satellites constellations that will impact the read-out and data analysis in the future years.
[1] Adam Snyder, et al., Laboratory Measurements of Instrumental Signatures of the LSST Camera Focal Plane, arXiv:2101.01281 [astro-ph.IM]
The thermal Sunyaev-Zel’dovich effect refers to a spectral distortion in the cosmic microwave background (CMB) generated from the inverse-Compton scattering of CMB photons off free, energetic electrons in the Universe. It is primarily sourced by the electrons in the intracluster medium with smaller contributions from the intergalactic medium and the epoch of reionization. The amplitude of its monopole signal is a unique probe of the total thermal energy contained in the electrons with an upper limit \( |\langle y \rangle| < 15 \times 10^{-6} \) set by the measurements from COBE-FIRAS [1]. In this talk, I will discuss current work on the re-analysis of FIRAS CMB monopole data to constrain the all-sky Compton-\( y \) distortion. I will also present the first calculation of a tSZ-like distortion in the cosmic infrared background (CIB) [2]. CIB photons originating from the thermal dust emission in star-forming galaxies are expected to similarly inverse-Compton scatter. Using a halo model approach, we find that the distortion in the CIB monopole spectrum has a positive (negative) peak amplitude of 4 Jy/sr (-5 Jy/sr) at 2260 GHz (940 GHz) and two null frequencies, at approximately 196 GHz and 1490 GHz. In addition to being a similar probe of the intervening free electrons, this signal would provide new insight into the star formation history of the Universe.
[1] D. J. Fixsen et al. The Cosmic Microwave Background Spectrum from the Full COBE* FIRAS Data Set (1996), ApJ, 473, 576, DOI: \href{https://iopscience.iop.org/article/10.1086/178173}{10.1086/178173}
[2] A. Sabyr, J. C. Hill, and B. Bolliet, Inverse-Compton scattering of the cosmic infrared background (2022), Phys. Rev. D 106, 023529, DOI: \href{https://doi.org/10.1103/PhysRevD.106.023529}{10.1103/PhysRevD.106.023529}
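The frequency dependence of the non-relativistic tSZ distortion, \( \Delta T/T = y\,(x \coth(x/2) - 4) \) with \( x = h\nu/k_B T_{\rm CMB} \), can be evaluated directly; the short script below (a standard textbook calculation, not code from [2]) locates the well-known null of the CMB tSZ signal near 217 GHz:

```python
import numpy as np

T_CMB = 2.725            # K
K_B_OVER_H = 20.836      # GHz per kelvin, k_B / h

def g_tsz(x):
    """Non-relativistic tSZ spectral function: Delta T/T = y*(x*coth(x/2) - 4)."""
    return x / np.tanh(x / 2.0) - 4.0

# Locate the null by bisection: g changes sign between x = 1 and x = 10.
lo, hi = 1.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g_tsz(mid) < 0.0:
        lo = mid
    else:
        hi = mid
x_null = 0.5 * (lo + hi)
nu_null = x_null * K_B_OVER_H * T_CMB   # GHz
print(round(nu_null))                   # 217
```

The CIB analogue discussed in the talk has a different (dust-emission) source spectrum, which is why its null frequencies (~196 GHz and ~1490 GHz) differ from the CMB value.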
The weak gravitational lensing of the Cosmic Microwave Background (CMB) [1] imprints a wealth of information about the late-time universe on the CMB data we observe with ground-based and space-based telescopes. In this talk, I propose a method to probe galaxy-cluster mass profiles from the lensing signature in the CMB on arcminute scales. In the first part, I describe how a theoretical halo model [2] for a cluster gives rise to lensing signatures in the observed CMB. In the second part, I discuss how we are developing a method based on a maximum a posteriori (MAP) estimator [3] of the lensing potential to recover the cluster mass. Such an estimator will be influential in light of low-noise experiments like CMB-S4.
[1] A. Lewis and A. Challinor, Weak gravitational lensing of the CMB, Phys. Rept. 429 (2006) 1 [astro-ph/0601594].
[2] J.F. Navarro, C.S. Frenk and S.D.M. White, The Structure of cold dark matter halos, Astrophys. J. 462 (1996) 563 [astro-ph/9508025].
[3] J. Carron and A. Lewis, Maximum a posteriori CMB lensing reconstruction, Phys. Rev. D 96 (2017) 063510 [1704.08230].
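As a concrete example of the halo-model ingredient [2], the NFW profile admits a closed-form enclosed mass, which the numpy sketch below (illustrative, not the talk's pipeline) verifies against direct integration of the density:

```python
import numpy as np

def nfw_enclosed_mass(r, rho_s, r_s):
    """Mass enclosed within radius r for the NFW halo profile [2],
    rho(r) = rho_s / ((r/r_s) * (1 + r/r_s)**2)."""
    x = r / r_s
    return 4.0 * np.pi * rho_s * r_s**3 * (np.log1p(x) - x / (1.0 + x))

# Cross-check against numerical (trapezoidal) integration of the density.
rho_s, r_s = 1.0, 0.2                     # arbitrary toy units
r = np.linspace(1e-5, 1.0, 200_000)
rho = rho_s / ((r / r_s) * (1.0 + r / r_s) ** 2)
integrand = 4.0 * np.pi * r**2 * rho
m_numeric = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
print(np.isclose(m_numeric, nfw_enclosed_mass(1.0, rho_s, r_s), rtol=1e-3))  # True
```

It is this enclosed-mass (equivalently, convergence) profile that the MAP lensing estimator is fit against to recover the cluster mass.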
Given the exponential growth in upcoming supernova data, the possibility of rigorously testing the cosmological principle becomes ever more real. One way to do so is by measuring the multipole decomposition of the Hubble and deceleration parameters [1,2]. In this talk, I will show that the theoretical possibilities for defining the latter are not unique, discuss the physical differences between them, and argue which one we should use when analyzing data.
[1] P. K. Aluri et al. Is the Observable Universe Consistent with the Cosmological Principle? arXiv:2207.05765 [astro-ph.CO]
[2] G. Domènech, R. Mohayaee, S. P. Patil and S. Sarkar, Galaxy number-count dipole and superhorizon fluctuations, JCAP 10 (2022) 019 DOI:10.1088/1475-7516/2022/10/019
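The monopole-plus-dipole decomposition mentioned above can be illustrated with a simple least-squares fit on mock data; the directions, amplitudes, and function name below are purely illustrative:

```python
import numpy as np

def fit_dipole(n_hat, values):
    """Least-squares fit of sky values to a monopole-plus-dipole model,
    v(n) = m + d . n, given unit vectors n_hat. Returns (m, d)."""
    A = np.column_stack([np.ones(len(values)), n_hat])
    coeff, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coeff[0], coeff[1:]

rng = np.random.default_rng(3)
n_hat = rng.normal(size=(2000, 3))                 # random sky directions
n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True)
true_d = np.array([0.0, 0.0, 0.3])                 # injected dipole
vals = 70.0 + n_hat @ true_d + rng.normal(0, 0.05, 2000)
m, d = fit_dipole(n_hat, vals)
print(round(m, 1), np.round(d, 2))                 # ~70.0, ~[0, 0, 0.3]
```

In practice the subtlety discussed in the talk is not the fit itself but which quantity (and which definition of the deceleration parameter) is being decomposed.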
We introduce BEoRN (Bubbles during the Epoch of Reionisation Numerical simulator), a publicly available algorithm designed to produce 3D maps of the 21cm signal during the epoch of cosmic dawn and reionisation. The code populates dark matter halos with galaxies and parametrises their spectra in a flexible way. It computes the evolution of 1D temperature, Lyman-alpha flux, and ionisation-fraction profiles around sources, and paints these profiles onto 3D grids. It deals consistently with the overlap of ionised bubbles and ensures photon conservation. We present the code, how to use it, and compare its predictions with other work in the literature.
The Concordance Model of Cosmology is about to be tested with unprecedented precision through a series of next-generation galaxy surveys. An accurate estimate of the data covariance matrix, including contributions from nonlinear evolution, is essential for accurate inference of the cosmological parameters. Amongst these contributions is the Super-Sample Covariance [1], a form of sample variance arising from the nonlinear coupling between Fourier modes of the density field; this covariance term is expected to be the dominant source of statistical error beyond the usual Gaussian covariance [2]. We assess the impact of this effect on the Euclid photometric survey through a Fisher Matrix analysis, finding a large increase in the forecasted uncertainties, especially for Weak Lensing Cosmic Shear.
[1] Masahiro Takada, Wayne Hu, Power spectrum super-sample covariance, Phys. Rev. D 87, 123504 (2013).
[2] Alexandre Barreira, Elisabeth Krause, Fabian Schmidt, Accurate cosmic shear errors: do we need ensembles of simulations?, Journal of Cosmology and Astroparticle Physics (2018).
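The qualitative effect of an extra, highly correlated covariance term on a Fisher forecast can be sketched with a toy one-parameter example (illustrative numbers only, not the Euclid analysis): adding a positive semi-definite term to the covariance can only degrade the forecasted error.

```python
import numpy as np

n_bins = 20
deriv = np.linspace(1.0, 2.0, n_bins)          # d(data)/d(theta), toy values
cov_gauss = np.diag(0.05 * np.ones(n_bins))    # Gaussian (diagonal) covariance

# SSC-like term: a rank-one, fully correlated contribution describing the
# common response of every bin to the same super-survey density mode.
response = np.linspace(0.5, 1.0, n_bins)
cov_ssc = 0.02 * np.outer(response, response)

def fisher_error(cov):
    """1-sigma error on the single parameter: sigma = 1/sqrt(F),
    with F = deriv^T C^{-1} deriv."""
    F = deriv @ np.linalg.solve(cov, deriv)
    return 1.0 / np.sqrt(F)

print(fisher_error(cov_gauss + cov_ssc) > fisher_error(cov_gauss))  # True
```

Because all bins respond coherently to the super-survey mode, the extra term cannot be beaten down by averaging over bins, which is why the SSC degradation can be so large.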
I will introduce the main observables in spectroscopic redshift surveys: the 2-point correlation function and its Fourier-space counterpart, the power spectrum. I will provide an overview of their theoretical modelling in Eulerian perturbation theory highlighting how information on the cosmological model can be extracted from their most relevant features. I will mention how and what additional information could be provided by the analysis of higher-order correlation functions.
[1] "Large-scale structure of the Universe and cosmological perturbation theory", Bernardeau et al., Phys. Rep. 367 (2002) 1–248. arXiv:astro-ph/0112551.
[2] "Large-scale galaxy bias", Desjacques et al., Phys. Rep. 733 (2018) 1-193. arXiv:1611.09787.
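As a minimal illustration of the power spectrum estimator underlying these observables, the numpy sketch below measures the spherically averaged \( P(k) \) of a gridded overdensity field (a schematic FFT-based estimator with simplified binning and conventions, not survey analysis code):

```python
import numpy as np

def power_spectrum(delta, box_size, n_bins=12):
    """Spherically averaged power spectrum P(k) of an overdensity grid,
    estimated as V * <|delta_k|^2> with delta_k the normalised FFT."""
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta) / n**3
    kf = 2.0 * np.pi / box_size                     # fundamental mode
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf          # k along full axes
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf         # k along the half axis
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2
                   + kz[None, None, :]**2).ravel()
    pk_modes = (np.abs(delta_k)**2 * box_size**3).ravel()
    edges = np.linspace(kf, kmag.max(), n_bins + 1)
    idx = np.digitize(kmag, edges) - 1
    keep = (idx >= 0) & (idx < n_bins)
    power = np.bincount(idx[keep], weights=pk_modes[keep], minlength=n_bins)
    counts = np.bincount(idx[keep], minlength=n_bins)
    return 0.5 * (edges[1:] + edges[:-1]), power / np.maximum(counts, 1)

rng = np.random.default_rng(5)
delta = rng.normal(0.0, 1.0, size=(32, 32, 32))     # white-noise test field
k, pk = power_spectrum(delta, box_size=100.0)
# White noise has a flat spectrum: P = V / N_cells = 100**3 / 32**3 ~ 30.5
print(k.shape, pk.shape)
```

The 2-point correlation function is the Fourier transform of this \( P(k) \); real analyses additionally correct for the survey window, shot noise, and mass-assignment effects.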
Our ability to extract cosmological information from galaxy surveys is limited by uncertainties in the galaxy-dark matter halo relationship for a given galaxy population, which is determined by the intricacies of galaxy formation. To quantify these uncertainties, we examine quenched and star-forming galaxies using an empirical semi-analytic model [1] and a hydrodynamical model [2]. Building on [3], we fit a perturbative bias expansion to the density fields of these galaxies, enabling direct comparison between distinct modeling approaches and providing an understanding of how uncertainties in galaxy formation physics and the galaxy-halo connection impact the parameters of this bias expansion. This study will be used in precision cosmology analyses as a way to inform priors on these parameters, an ingredient that is essential for getting the most out of current and future data from large galaxy surveys.
[1] Behroozi, P., Wechsler, R. H., Hearin, A. P., & Conroy, C. 2019, MNRAS, 488, 3143, doi: 10.1093/mnras/stz1182
[2] Nelson, D., Springel, V., Pillepich, A., et al. 2021, The IllustrisTNG Simulations: Public Data Release. https://arxiv.org/abs/1812.05609
[3] Kokron, N., DeRose, J., Chen, S.-F., White, M., & Wechsler, R. H. 2022, Monthly Notices of the Royal Astronomical Society, 514, 2198, doi: 10.1093/mnras/stac1420
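The fitting step described above can be caricatured with a least-squares fit of the leading terms of a bias expansion, \( \delta_g = b_1 \delta_m + \tfrac{b_2}{2}(\delta_m^2 - \langle \delta_m^2 \rangle) + \epsilon \), on mock fields (illustrative only; the analysis building on [3] works with full 3D density fields and higher-order operators):

```python
import numpy as np

def fit_bias(delta_m, delta_g):
    """Least-squares fit of the linear and quadratic bias parameters
    in delta_g = b1*delta_m + (b2/2)*(delta_m**2 - <delta_m**2>) + eps."""
    d2 = delta_m**2 - np.mean(delta_m**2)
    A = np.column_stack([delta_m, 0.5 * d2])
    coeff, *_ = np.linalg.lstsq(A, delta_g, rcond=None)
    return coeff                              # (b1, b2)

rng = np.random.default_rng(6)
b1_true, b2_true = 1.8, 0.5
dm = rng.normal(0.0, 0.3, 100_000)            # toy matter field values
dg = (b1_true * dm + 0.5 * b2_true * (dm**2 - np.mean(dm**2))
      + rng.normal(0.0, 0.05, dm.size))       # toy galaxy field with noise
b1, b2 = fit_bias(dm, dg)
print(round(b1, 1), round(b2, 1))             # 1.8 0.5
```

Fitting the same expansion to galaxy samples from different galaxy-formation models is what lets one quantify how much the bias parameters (and hence sensible priors on them) depend on the assumed astrophysics.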
In this talk, I will present the paradigm of the effective field theory of large-scale structures (EFTofLSS) and how it can be used to constrain cosmological models. First I will discuss the consistency of this theory and its predictive power [1], and then I will present the constraints from the EFTofLSS applied to BOSS and eBOSS data on the \( \Lambda \)CDM model [2] as well as on some alternative models that may resolve the cosmological tensions [3].
[1] T. Simon, P. Zhang, V. Poulin and T. L. Smith, On the consistency of effective field theory analyses of BOSS power spectrum, [astro-ph.CO/2208.05929].
[2] T. Simon, P. Zhang, and V. Poulin, Cosmological inference from the EFTofLSS: the eBOSS QSO full-shape analysis, [astro-ph.CO/2210.14931].
[3] T. Simon, G. F. Abellán, P. Du, V. Poulin and Y. Tsai, Constraining decaying dark matter with BOSS data and the effective field theory of large-scale structures, Phys. Rev. D 106 (2022) 023516, [astro-ph.CO/2203.07440].
This talk is based on [1], in which we determine the dipole in the Pantheon+ data. We find that, while its amplitude roughly agrees with the dipole found in the cosmic microwave background, which is attributed to the motion of the solar system with respect to the cosmic rest frame, the direction is different at very high significance. While the amplitude depends on the lower redshift cutoff, the direction is quite stable. For redshift cuts of order \( z_{\rm cut} \simeq 0.05 \) and higher, the dipole is no longer detected with high statistical significance. An important role seems to be played by the redshift corrections for peculiar velocities.
[1] F. Sorrenti, R. Durrer and M. Kunz, The Dipole of the Pantheon+SH0ES Data, (2022) arXiv:2212.10328 [astro-ph.CO].
Neutrinos are ubiquitous in cosmology and play a significant role throughout the history of the Universe. As a result, cosmological observations offer a unique opportunity to test the properties of neutrinos. Standard Model neutrinos are expected to free-stream ever since they decouple from the primordial plasma in the early Universe. There are, however, multiple feasible particle physics scenarios in which neutrinos can interact efficiently at (much) later times. In this talk, I explore these scenarios and discuss how they can be constrained using CMB and LSS observables. I demonstrate that there is a redshift window in which neutrinos cannot interact significantly given the Planck CMB data. Finally, I discuss how the constraints can improve with future CMB Stage-IV and galaxy clustering data. The talk is based on [1].
[1] P. Taule, M. Garny and M. Escudero, Global view of neutrino interactions in cosmology: The free streaming window as seen by Planck, Phys. Rev. D 106 (2022) no.6, 063539 [arXiv:2207.04062 [astro-ph.CO]].
Modern cosmological inference typically relies on likelihood expressions and covariance estimations, which can become inaccurate and cumbersome depending on the scales and summary statistics considered. Simulation-based inference [1], in contrast, does not require an analytical form for the likelihood but only a prior and a simulator, whereby these issues are naturally circumvented. In this talk, we will explore how this technique can be used to infer \( \sigma_8 \) from a forward model based on Lagrangian Perturbation Theory [2] and the bias expansion. The power spectrum and the bispectrum are used as summary statistics to obtain the posterior of the cosmological, bias and noise parameters via neural density estimation [3].
[1] J.-M. Lueckmann, J. Boelts, D. S. Greenberg, P. J. Gonçalves, and J. H. Macke, Benchmarking Simulation-Based Inference, arXiv e-prints (2021) [arxiv.org/abs/2101.04653].
[2] F. Schmidt, An n-th order Lagrangian forward model for large-scale structure, Journal of Cosmology and Astroparticle Physics, 04 (2021), 033 [arxiv.org/abs/2012.09837].
[3] A. Tejero-Cantero et al., sbi: A toolkit for simulation-based inference, Journal of Open Source Software, 5(52), 2505 (2020) [arxiv.org/abs/2007.09114].
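The logic of simulation-based inference can be demonstrated with the simplest possible scheme, rejection ABC, on a toy one-parameter model; the neural density estimation used in the actual analysis [3] replaces the rejection step with a learned posterior. Everything below (simulator, prior, tolerance) is illustrative:

```python
import numpy as np

def rejection_abc(simulator, prior_sample, observed, n_draws=20_000,
                  quantile=0.01):
    """Minimal simulation-based inference: draw parameters from the prior,
    simulate a summary statistic for each, and keep the draws whose
    summaries land closest to the observed summary."""
    theta = prior_sample(n_draws)
    summaries = np.array([simulator(t) for t in theta])
    dist = np.abs(summaries - observed)
    cut = np.quantile(dist, quantile)
    return theta[dist <= cut]            # approximate posterior samples

rng = np.random.default_rng(7)
# Toy 'simulator': the summary is sigma8 plus measurement noise.
sim = lambda s8: s8 + rng.normal(0.0, 0.02)
obs = 0.81
posterior = rejection_abc(sim, lambda n: rng.uniform(0.6, 1.0, n), obs)
print(round(posterior.mean(), 2))        # ~0.81
```

No likelihood is ever written down: only the prior and the simulator are needed, which is exactly the property exploited by the LPT-based forward model in the talk.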
In the \( f(R) \)-gravity model, the standard Einstein-Hilbert action is modified by a nonlinear function, \( f(R) \), which leads to scale-dependent structure growth of matter [1]. The scale-dependent growth in this model results in a different abundance of galaxy clusters compared to general relativity. Thus, cluster and weak lensing data can be used to constrain \( f(R) \)-gravity models [2]. In our analysis, we use the Hu-Sawicki model for \( f(R) \)-gravity, for which a semi-analytical halo mass function exists and is used to compute the cluster abundance [3]. We forecast constraints on the \( f(R) \) parameter for the existing SPTxDES and futuristic CMB-S4xEuclid data sets.
[1] H. A. Buchdahl, Non-linear Lagrangians and cosmological theory, 1970, MNRAS, 150, 1 [10.1093/mnras/150.1.1].
[2] M. Cataneo et al, New constraints on \( f(R) \) gravity from clusters of galaxies, 2015, Phys. Rev. D, 92, 044009, [10.1103/PhysRevD.92.044009]
[3] L. Lombriser, Modeling halo mass functions in chameleon f(R) gravity, 2013, Phys. Rev. D, 87, 123511 [10.1103/PhysRevD.87.123511]
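For reference, the Hu-Sawicki functional form (in the conventions of the original Hu & Sawicki paper; the parametrisation used in the talk may differ in detail) is

```latex
f(R) \;=\; -\,m^2\,\frac{c_1\,(R/m^2)^n}{c_2\,(R/m^2)^n + 1},
\qquad m^2 \equiv \Omega_m H_0^2 ,
```

so that in the high-curvature limit \( R \gg m^2 \) one has \( f(R) \approx -\frac{c_1}{c_2} m^2 + \frac{c_1}{c_2^2} m^2 \left(\frac{m^2}{R}\right)^n \): an effective cosmological constant plus a deviation whose present-day amplitude is commonly quantified by \( f_{R0} \equiv \left. \mathrm{d}f/\mathrm{d}R \right|_{R=R_0} \), the parameter constrained by cluster abundance analyses such as [2].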
Statistical Cosmology is the subject that studies how we can extract cosmological information by confronting cosmological models with data. Recent breakthroughs in Machine Learning have opened up a vast new playing field where we find the solutions to many problems that were previously thought to be intractable. I will explain the powerful principles that underlie the current transformation of Statistical Cosmology and sketch the new approach to physical cosmology that they enable. My focus will be on the central problems of modern cosmology: the cosmic beginning and the dark universe. As these new powers propel us forward, new challenges come into view. What will be the ultimate limit to cosmological knowledge? How do we deal with imperfections in our physical models when every detail can be tested against data?
[1] "Generalized massive optimal data compression". arxiv.org/abs/1712.00012.
[2] "Solving high-dimensional parameter inference: marginal posterior densities & Moment Networks". - arxiv.org/abs/2011.05991.
[3] "Fast likelihood-free cosmology with neural density estimators and active learning". - arxiv.org/abs/1903.00007.
[4] "Automatic physical inference with information maximising neural networks". - arxiv.org/abs/1802.03537.
[5] "Lossless, Scalable Implicit Likelihood Inference for Cosmological Fields". - arxiv.org/abs/2107.07405. This paper comes with a Google Colab notebook that provides a detailed guide to reproducing the analysis in the paper.
[6] "The Cosmic Graph: Optimal Information Extraction from Large-Scale Structure using Catalogues". - arxiv.org/abs/2207.05202.
The presence of backreaction effects is investigated in the spherically symmetric Lemaître-Tolman-Bondi dust model and its generalisation including pressure, the Lemaître model. An averaging scheme based on a weighting by rest mass is applied. The mass-weighted effective scale factor derived from the inhomogeneous model is found to obey a modified Friedmann equation involving a coefficient that depends on the two-point density correlation function. This coefficient amplifies the matter contribution to the effective expansion rate and thus induces a 'missing mass' effect, similar to the heuristic approach undertaken in [1] in the Newtonian setting. In addition, pressure gradients which can arise from structure formation are found to be able to induce local acceleration, similar to a local dark energy. Inhomogeneities and pressure gradients produce deviations from a Friedmann evolution even in the Newtonian limit, meaning that both effects on the expansion could be non-relativistic, which is an uncommon feature in backreaction scenarios, such as in the Buchert formalism.
[1] Tremblin, P., Chabrier, G., Padioleau, T., & Daley-Yates, S. 2022 Nonideal self-gravity and cosmology: Importance of correlations in the dynamics of the large-scale structures of the Universe, Astronomy & Astrophysics, 659, A108
One of the most important open problems in cosmology is understanding the reionization of the intergalactic medium (IGM) by the first luminous sources. Measurable signatures of reionization include the thermal state of the IGM at \( z > 5 \), since ionization fronts inject heat as they propagate through the IGM [1], and the mean free path of ionizing photons (\( \lambda_{mfp} \)), which rapidly increases at the end of reionization as bubbles of reionized gas merge [2]. The Lyman-\( \alpha \) (Ly\( \alpha \)) forest in quasar spectra is a powerful tool to probe these signals of reionization. This talk is based on [3], where we showed that the Ly\( \alpha \) forest flux auto-correlation function is sensitive to \( \lambda_{mfp} \) and found that mock data of the Ly\( \alpha \) forest auto-correlation function are highly non-Gaussian, suggesting a need for simulation-based inference. I will discuss this as well as our measurement of the auto-correlation function from the XQR-30 data set.
[1] Elisa Boera et al. “Revealing Reionization with the Thermal History of the Intergalactic Medium: New Constraints from the Ly\( \alpha \) Flux Power Spectrum” ApJ (2019) https://doi.org/10.3847/1538-4357/aafee4
[2] George D Becker et al. “The mean free path of ionizing photons at \( 5 < z < 6 \): evidence for rapid evolution near reionization” MNRAS (2021) https://doi.org/10.1093/mnras/stab2696
[3] Molly Wolfson et al. “Forecasting constraints on the mean free path of ionizing photons at \( z \geq 5.4 \) from the Lyman-\( \alpha \) forest flux auto-correlation function” (2022) arXiv preprint arXiv:2208.09013
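A schematic version of the flux auto-correlation estimator used in [3], here for a single toy line of sight in pixel units (the real measurement averages over many quasar sightlines and works in velocity units):

```python
import numpy as np

def flux_autocorr(flux, max_lag):
    """Auto-correlation xi(lag) of the Ly-alpha forest flux contrast
    delta_F = F/<F> - 1 along one line of sight, in pixel units."""
    delta = flux / flux.mean() - 1.0
    xi = np.empty(max_lag)
    xi[0] = np.mean(delta * delta)
    for lag in range(1, max_lag):
        xi[lag] = np.mean(delta[:-lag] * delta[lag:])
    return xi

rng = np.random.default_rng(8)
# Toy skewer: smoothing white noise imprints short-range correlations.
noise = rng.normal(0.0, 0.1, 10_000)
flux = 0.7 + np.convolve(noise, np.ones(5) / 5.0, mode="same")
xi = flux_autocorr(flux, max_lag=10)
print(xi[0] > xi[3] > abs(xi[8]))   # True: correlations decay with lag
```

In the physical problem, the correlation length of the flux contrast carries the sensitivity to \( \lambda_{mfp} \); the non-Gaussianity of this statistic across mock realisations is what motivates simulation-based inference.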
Full-field cosmological inference can achieve information-theoretically optimal constraints on cosmological parameters by exploiting information beyond the two-point functions. This can be implemented with simulation-based inference, which uses simulations to directly approximate the posterior density of the model parameters, but which can be costly when simulations are time-consuming. In this work, we propose a novel way of using differentiable simulators to accelerate the convergence of the method [1,2]. We demonstrate our method by inferring \( \Omega_c \) and \( \sigma_8 \) from simulated log-normal weak lensing mass maps, and show that we can recover precise posteriors and that incorporating the simulator gradients reduces the number of simulations needed.
[1] Brehmer, J., Louppe, G., Pavez, J. & Cranmer, K. Mining gold from implicit models to improve likelihood-free inference. Proceedings Of The National Academy Of Sciences. 117, 5242-5249 (2020), https://doi.org/10.1073/pnas.1915980117
[2] Zeghal, J., Lanusse, F., Boucaud, A., Remy, B. & Aubourg, E. Neural Posterior Estimation with Differentiable Simulators. arXiv (2022), https://arxiv.org/abs/2207.05636
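The log-normal forward model mentioned above can be sketched with numpy: exponentiate a Gaussian field so the result has zero mean and is bounded below by -1, as a density contrast must be (a toy version; the actual simulator of [2] is differentiable and includes the lensing projection):

```python
import numpy as np

def lognormal_map(n, sigma_g, seed=0):
    """Toy log-normal density field: exponentiate a Gaussian field g so
    that delta = exp(g - sigma_g**2 / 2) - 1 has zero mean in expectation
    and satisfies delta > -1, mimicking the log-normal weak-lensing maps
    used as a cheap forward model."""
    rng = np.random.default_rng(seed)
    g = rng.normal(0.0, sigma_g, size=(n, n))
    return np.exp(g - 0.5 * sigma_g**2) - 1.0

delta = lognormal_map(256, sigma_g=0.5)
print(delta.min() >= -1.0, abs(delta.mean()) < 0.02)   # True True
```

Writing such a simulator in an autodiff framework (e.g. JAX) is what provides the gradients that the method uses to reduce the number of simulations.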