A search for dark matter among Fermi-LAT unidentified sources with systematic features in Machine Learning

Written by Viviana Gammaldi.

Summary of the paper with the same title published in MNRAS.

arXiv: 2207.09307

The recent 4FGL Fermi-LAT catalogue, the result of 8 years of telescope operation, is a collection of sources with associated gamma-ray spectra, which contain important information about their nature. As shown in Fig. 1, somewhat surprisingly, an important fraction of the objects in the Fermi-LAT catalogues, about 1/3 of the total, remain unidentified (unIDs), i.e., objects lacking a clear, single association with a known object identified at other wavelengths, or with a well-known spectral type emitting only in gamma rays, e.g. certain pulsars. Indeed, there is the exciting possibility that some of these sources could be a dark matter (DM) signal. Among the prospective sources of gamma rays from DM annihilation events, dark satellites or subhalos in the Milky Way, with no optical counterparts, are the preferred candidates: they are expected to exist in large numbers according to standard cosmology, and they would not be massive enough to retain gas or stars. Furthermore, main galaxies in the local Universe, e.g. dwarf irregular galaxies, may also represent good candidates for unIDs.

Fig. 1: Fermi-LAT detected sources.

We propose a new approach to solving a standard Machine Learning (ML) binary classification problem: disentangling prospective DM sources (simulated data) from astrophysical sources (observed data) among the unIDs of the 4FGL Fermi-LAT catalogue.

In particular, we are interested in one of the parametrizations of the gamma-ray spectrum used in the 4FGL, known as the Log-Parabola (LP), which allows us to distinguish different astrophysical sources of gamma rays by means of at least two parameters: the emission peak, Epeak, and the spectral curvature, beta. We introduce the DM sample into this parameter space by fitting the simulated DM gamma-ray spectrum with the same LP functional form (Fig. 2, left panel). Furthermore, we artificially build two systematic features for the DM data which are originally inherent to the observed data: the detection significance and the relative uncertainty on the spectral curvature, beta_rel. We do so by sampling from the observed population of unIDs, under the assumption that a DM population, if present, would follow the same distributions. In Fig. 2 we show the parameter space without the uncertainty on beta (left panel) and including the uncertainty on beta, created for the DM sample as a systematic feature (right panel).
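To illustrate this step, the LP model is a simple quadratic in log-log space, so its parameters, and the derived Epeak feature, can be recovered with a polynomial fit. The sketch below uses the LP functional form quoted in the 4FGL but entirely made-up spectral values, not the real catalogue or simulated DM data:

```python
import numpy as np

# 4FGL log-parabola (LP) model: dN/dE = N0 (E/E0)^(-alpha - beta*ln(E/E0)).
# In log-log space it is a quadratic:
#   ln(dN/dE) = ln N0 - alpha*x - beta*x**2,  with x = ln(E/E0),
# so alpha and beta follow from a degree-2 polynomial fit.
def fit_log_parabola(E, flux, E0=1.0):
    x = np.log(E / E0)
    c2, c1, c0 = np.polyfit(x, np.log(flux), deg=2)
    return np.exp(c0), -c1, -c2          # N0, alpha, beta

# Mock spectrum with 2% scatter (hypothetical parameter values)
rng = np.random.default_rng(0)
E = np.logspace(-1, 2, 40)               # energies in GeV
x = np.log(E)
flux = 1e-11 * np.exp(-2.0 * x - 0.15 * x**2) * rng.normal(1.0, 0.02, E.size)

N0, alpha, beta = fit_log_parabola(E, flux)
# Peak of the SED E^2 dN/dE, one of the two classification features:
E_peak = np.exp((2.0 - alpha) / (2.0 * beta))   # in units of E0 = 1 GeV
```

In practice the catalogue also provides the uncertainty on beta, from which the beta_rel feature discussed above is built.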

Fig. 2: beta-Epeak parameter space. Left panel: Astrophysical (yellow), DM (magenta) and unIDs (red) sources are shown. Right panel: Same as left panel, but including the uncertainty on beta for the training/test set (grey data) and the unIDs sources to be classified (red data point).

Finally, we consider different ML models for the classification task: Logistic Regression, Neural Network (NN), Naive Bayes and Gaussian Process. The best of these, in terms of classification accuracy, is the NN, achieving around 93%. Applying the NN to the unIDs sample, we find that the degeneracy between some astrophysical and DM sources (visible as the overlapping region in Fig. 2) can be partially resolved by including the systematic features in the classification task (Fig. 3). Nonetheless, due to strong statistical fluctuations, we conclude that there are no DM source candidates among the pool of 4FGL Fermi-LAT unIDs.
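A minimal sketch of such a binary classifier, with scikit-learn's MLPClassifier standing in for the paper's NN; the two-feature toy distributions below are invented for illustration and are not the actual astrophysical and DM samples:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Toy stand-ins for the two classes in the (beta, log10 Epeak) plane.
# These Gaussians are illustrative only, not the catalogue distributions.
n = 500
astro = np.column_stack([rng.normal(0.10, 0.05, n),    # beta
                         rng.normal(0.0, 0.8, n)])     # log10(Epeak)
dm = np.column_stack([rng.normal(0.45, 0.10, n),
                      rng.normal(1.2, 0.4, n)])
X = np.vstack([astro, dm])
y = np.r_[np.zeros(n), np.ones(n)]                     # 0 = astro, 1 = DM

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 32),
                                  max_iter=2000, random_state=0))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)                            # test accuracy

# Probability of being a DM source for one new (beta, log10 Epeak) point,
# analogous to the per-unID probabilities shown in Fig. 3:
p_dm = clf.predict_proba([[0.3, 0.8]])[0, 1]
```

Adding the two systematic features would simply mean extending X with the sigma and beta_rel columns before training.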

Fig. 3: Probability for each unID to be a DM source. Left panel: results adopting only the two features beta and Epeak for classification. Right panel: results for the four-feature (beta, Epeak, sigma, beta_rel) classification.

Further details can be found at https://doi.org/10.1093/mnras/stad066.

IFT researcher Miguel A Sánchez-Conde, Deputy Scientific Coordinator of the entire Fermi-LAT Collaboration

IFT researcher Miguel A Sánchez-Conde has recently been appointed Deputy Scientific Coordinator of the entire Fermi-LAT Collaboration, which operates the gamma-ray Large Area Telescope on board NASA's Fermi satellite.

This is the highest science responsibility position inside the Collaboration and, as such, Sánchez-Conde will be in charge of coordinating and deciding upon the science to be done by this NASA mission in the near future. The appointment is for two years: a first year as deputy coordinator and a second year as principal coordinator.

Launched on June 11, 2008, NASA's Fermi Gamma-ray Space Telescope observes the cosmos using the highest-energy form of light. After almost 15 years of operations, the Fermi LAT has revolutionized our vision of the gamma-ray universe. Indeed, Fermi, which maps the entire sky every three hours, provides a unique window into the most extreme phenomena of the universe, from gamma-ray bursts and black-hole jets to pulsars, supernova remnants, the origin of cosmic rays and the nature of dark matter.

Today, the Fermi-LAT continues to lead the search for potential dark matter signals from space. A significant fraction of this search effort is being pursued at the IFT thanks to Sánchez-Conde, P.I. of the UAM node, and his group.

Our congratulations to our colleague for this important appointment!


Dark Matter search in dwarf irregular galaxies with the Fermi Large Area Telescope

Written by Viviana Gammaldi.

Summary of the paper with the same title published in PRD.

arXiv: 2109.11291

Almost a century ago, the first astrophysical hints pointing to the existence of new physics came to light: gravitational interaction alone failed to describe the kinematics of extragalactic objects. The gap between theory and observation could be filled, on both astrophysical and cosmological scales, by assuming the existence of a new kind of non-luminous, yet gravitationally interacting, matter: Dark Matter (DM). The abundance of DM is currently estimated at about 27% of the total content of the Universe, although its nature remains unknown. Among others, the Weakly Interacting Massive Particle (WIMP) represents one plausible candidate beyond the Standard Model (SM) of particle physics. WIMPs can annihilate in astrophysical objects, producing SM particles whose decay processes generate secondary fluxes of gamma rays, among other detectable fluxes. Searching for DM hints in the secondary fluxes produced in astrophysical sources is what we call indirect searches for DM.
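This indirect-detection observable factorizes into a particle-physics term and an astrophysics term; in the standard notation of the field (written here for self-conjugate DM), the expected gamma-ray flux from annihilation reads:

```latex
% Gamma-ray flux from DM annihilation: particle-physics factor
% (cross section, DM mass, photon yield per annihilation) times the
% astrophysical J-factor (line-of-sight integral of the squared density)
\frac{\mathrm{d}\Phi_\gamma}{\mathrm{d}E}
  = \underbrace{\frac{\langle\sigma v\rangle}{8\pi\, m_{\chi}^{2}}
      \frac{\mathrm{d}N_\gamma}{\mathrm{d}E}}_{\text{particle physics}}
    \times
    \underbrace{\int_{\Delta\Omega}\int_{\mathrm{l.o.s.}}
      \rho_{\chi}^{2}(r)\,\mathrm{d}l\,\mathrm{d}\Omega}_{\text{astrophysics: }J\text{-factor}}
```

The J-factor is where the DM density profile of the target enters, which is why the cusp/core question discussed below matters for these searches.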

Recently, dwarf irregular galaxies have been proposed as astrophysical targets of interest for indirect searches of DM. In fact, they are DM-dominated astrophysical objects with a negligible astrophysical background in gamma rays. Dwarf irregular galaxies also represent an interesting laboratory from the point of view of the DM distribution: although the rotation curves of spiral galaxies were the first observational evidence of the existence of DM, there is still a lot to learn about how DM is distributed in galaxies. From the benchmark LCDM cosmology and N-body DM-only simulations (a good approximation given the abundance of DM in the Universe, 27% of the total content, with respect to the visible baryonic matter, i.e. stars, gas, etc., which represents only 5%), we know that the DM distribution generally follows the well-known Navarro-Frenk-White (NFW) profile, i.e. a profile with a “cusp” at the centre of large structures. On the other hand, the experimental data on the rotation curves of smaller objects point to a “core” DM distribution profile. The reason for this discrepancy is still unknown; it represents the so-called “cusp/core problem”, an open question in physics, astrophysics and particle physics. Dwarf irregular galaxies are indeed an example of objects where the cusp/core problem is observed.
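The cusp/core distinction can be made concrete with the two standard parametrizations involved. A minimal sketch, with the normalizations and scale radii set to placeholder values:

```python
import numpy as np

# Cuspy NFW profile: rho(r) = rho_s / [(r/r_s)(1 + r/r_s)^2]
# -> diverges as 1/r toward the centre (the "cusp")
def rho_nfw(r, rho_s=1.0, r_s=1.0):
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

# Cored Burkert profile: rho(r) = rho_0 r_s^3 / [(r + r_s)(r^2 + r_s^2)]
# -> flattens to the constant rho_0 toward the centre (the "core")
def rho_burkert(r, rho_0=1.0, r_s=1.0):
    return rho_0 * r_s**3 / ((r + r_s) * (r**2 + r_s**2))

r = np.logspace(-3, 1, 50)   # radii in units of r_s
cusp = rho_nfw(r)            # grows without bound as r -> 0
core = rho_burkert(r)        # approaches rho_0 as r -> 0
```

Because the annihilation signal scales with the density squared, the choice between these two profiles changes the predicted gamma-ray flux substantially in the inner regions.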

In this work we remain agnostic about the cusp/core problem: we analysed the rotation curves of 7 dwarf irregular galaxies (see, e.g., the rotation curve of the IC10 galaxy in Fig. 1).

The Burkert core profile (full blue line) corresponds to the best fit of the rotation curves. The NFW cusp profile, instead, was constructed theoretically, and the corresponding rotation curve was generated and compared with the data. With these two different DM distributions, we built the DM modelling, including the enhancement of the expected gamma-ray flux due to the existence of substructures. Substructures in a DM halo are overdense sub-clumps in the DM distribution, relics of the bottom-up formation history of the galaxy. Their effect is to boost the DM annihilation rate, i.e. the gamma-ray production, through the local enhancement of the DM density. We created the two-dimensional spatial template for each of our targets. In Fig. 2 the spatial template of the IC10 galaxy is shown for four different DM density distribution profiles: the MIN model corresponds to a Burkert core profile without any substructure, and the MED model is the Burkert profile with a medium contribution from substructures. The maximum contribution from substructures was calculated for both the Burkert core and the NFW cusp profile (Bur-MAX and NFW-MAX, respectively).

Finally, we analysed the data of the Fermi Large Area Telescope. We did not find any gamma-ray signal from these objects; we can therefore exclude some regions of the DM mass and annihilation cross-section parameter space. The exclusion limits for the DM mass and annihilation cross-section are presented in Fig. 3. The blue line corresponds to the results of this work for the combined analysis of all 7 targets with the MED model, i.e. treated as extended sources. These results are compared with the thermally averaged annihilation cross-section (dotted black line), the result of our previous proof-of-concept paper (yellow dashed line), the results of 100 simulations with a null DM signal (yellow band), and the exclusion limits from well-known dwarf spheroidal galaxies obtained with Fermi-LAT (green dashed line). The results of our study are dominated by the constraints obtained for IC10 and NGC6822, and depend only weakly on the considered DM profile.

ΛCDM halo substructure properties revealed with high resolution and large volume cosmological simulations

Written by Angie Moliné.

Summary of the paper with the same title submitted to MNRAS.

arXiv: 2110.02097

In the current standard model of cosmology, ΛCDM, the structure of the Universe is formed via a hierarchical, bottom-up scenario with small primordial density perturbations growing to the point where they collapse into the filaments, walls and eventually dark matter (DM) haloes that form the underlying large-scale-structure filamentary web of the Universe. Galaxies are embedded in these massive, extended DM haloes teeming with self-bound substructure, the so-called subhaloes.

The study of the statistical and structural properties of the subhalo population is of prime importance because subhaloes represent important probes of the mass accretion history and dynamics of host haloes and accordingly, of the underlying cosmological model. In addition to representing a cosmological test by themselves, understanding both the statistical and structural properties of subhaloes plays a key role for many other diverse studies, such as gravitational lensing, stellar streams and indirect or direct DM detection experiments.

Studying the complicated dynamics of these subhaloes within their hosts requires numerical simulations, which have proven to be crucial for understanding structure formation in the Universe. By making use of data at different cosmic times from the Phi-4096 and Uchuu suites of high-resolution N-body cosmological simulations, in this work we improve upon previous studies aimed at characterizing the subhalo population. More precisely, the superb numerical resolution and halo statistics of these simulations allow for a careful and dedicated study – for the first time consistently over more than seven decades in subhalo-to-host-halo mass ratio – of the dependence of subhalo abundance on host halo mass, as a function of subhalo mass, of the maximum circular velocity of particles within the subhalo, Vmax, and of the distance to the host halo centre. We also dissect the evolution of these dependencies over cosmic time.

Subhalo structural properties are codified via a concentration parameter that does not depend on any specific, pre-defined density profile and relies only on Vmax. We derive such a relation in the range 7-1500 km/s and find an important dependence on the distance of the subhalo from the host halo centre, as already described in Moliné et al. (2017) for subhaloes in Milky-Way-like hosts. Interestingly, we also find that subhaloes of the same mass are significantly more concentrated when they reside inside more massive hosts. We provide accurate fits that take all the mentioned dependencies into account. In addition, the study of the evolution of subhalo concentrations with cosmic time is, as of today, very scarce in the literature. We investigate this redshift evolution of the concentrations and provide an accurate fit.
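A common profile-independent definition of such a concentration (used, e.g., in Moliné et al. 2017) is the mean density enclosed within the radius of the peak circular velocity, in units of the critical density of the Universe. A sketch, where the H0 value and the subhalo numbers are illustrative assumptions:

```python
# Profile-independent concentration: mean density within R_max, the radius
# where the circular velocity peaks at V_max, in units of the critical density:
#   c_V = 2 * (V_max / (H0 * R_max))**2
def c_v(v_max_kms, r_max_kpc, H0_kms_mpc=67.7):   # assumed H0, km/s/Mpc
    H0_kms_kpc = H0_kms_mpc / 1000.0              # convert to km/s per kpc
    return 2.0 * (v_max_kms / (H0_kms_kpc * r_max_kpc)) ** 2

# Illustrative (made-up) subhalo: V_max = 10 km/s, R_max = 1 kpc
c = c_v(10.0, 1.0)   # of order 1e4, the expected range for subhaloes
```

The advantage of this definition is exactly the one stated above: it requires no assumption about the inner density profile, only the measurable pair (Vmax, Rmax).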

Our results offer an unprecedentedly detailed characterization of the subhalo population, consistent over a wide range of subhalo and host halo masses, as well as cosmic times. Our analysis enables precision work in any future research involving dark matter halo substructure.

Towards a more rigorous treatment of uncertainties on the velocity distribution of dark matter particles for capture in stars

Written by Thomas Lacroix.

Summary of the paper with the same title published in JCAP.

arXiv: 2007.15927

In the presence of non-gravitational interactions between dark matter (DM) and the standard sector, DM particles which permeate the Galactic halo can lose energy via elastic scattering with stellar matter, and wind up trapped in the gravitational potential wells of stars. This process is called DM capture, and has been extensively studied in the literature due to its ubiquity in studies that concern the effects, manifestations and/or signals of DM in stars. If they lose enough energy through elastic scattering with stellar matter and settle down in the star’s interior, DM particles may affect the well-known physical processes that govern stellar physics. This gives rise to a rich new phenomenology which can be used to explore the nature of DM itself.

In this work, in an effort to add some quantitative arguments to the discussion of the dependence of dark matter capture on the underlying velocity distribution function, we have estimated the systematic uncertainties, arising from dark matter phase-space modeling, on dark matter constraints based on capture in stars, by using Eddington-like equilibrium phase-space models. These models are based on first principles and their main ingredients are the dark matter density profile and the total gravitational potential, so they self-consistently account for kinematic constraints on the Milky Way. As a result, although they rely on simplifying assumptions, these models are physically motivated and provide a next-to-minimal picture of the properties of the Galactic phase space relevant to dark matter searches, with respect to standard approaches used in the dark matter literature.

These models address several problems at once. First, the underlying mass model of the Milky Way, constrained by kinematic measurements, is accounted for by construction, so that the resulting phase-space distribution function captures variations of the typical dark matter speed as a function of galactocentric radius in a self-consistent way. Moreover, departures from the Maxwell-Boltzmann approximation, which is at the heart of the standard halo model, are automatically taken into account. Furthermore, the unknown anisotropy of the dark matter velocity tensor, which is an additional source of uncertainty on the phase-space distribution function, can be accounted for with these self-consistent prediction methods. This goes beyond approaches that empirically quantify departures from the standard halo model in the solar neighborhood, and provides a global physical picture of the dark matter phase space in the Galaxy and the associated systematic uncertainties on dark matter-capture observables. Our main results are the following:

  1. We have shown that using a Maxwell-Boltzmann distribution with a very simple estimate of 220 km/s for the typical dark matter speed—the standard halo model—in a region where it is expected to be significantly different based on the underlying mass content—typically at the Galactic center—leads to significant errors on subsequent results. For a Sun-like star within a few pc of the Galactic center, the capture rate can be underestimated by up to almost two orders of magnitude—depending on the dark matter density profile, dark matter candidate mass and the type of interaction with ordinary matter—compared with predictions from Eddington-like models. Even for neutron stars, which are less sensitive to the speed distribution, the capture rate can be underestimated by a factor ~2.5-4, thus overestimating subsequent upper limits on the dark matter-neutron scattering cross section by the same amount. This definitely needs to be taken into account and cannot be simply neglected based on the usual qualitative argument that capture in neutron stars is insensitive to the detailed properties of the phase-space distribution function. Here we have provided a quantitative estimate of this effect for the first time.
  2. We have shown that Maxwell-Boltzmann models that go beyond the standard halo model, i.e. which have a velocity dispersion that is either obtained from the circular velocity or the Jeans equation, can in some cases improve upon the standard halo model, but not always. On the one hand, the Maxwell-Boltzmann model based on the circular velocity systematically overestimates the low-velocity tail of the phase-space distribution function with respect to Eddington-like models, and thus overestimates the capture rate at any radius by a factor 2-3. On the other hand, we have shown that the Jeans Maxwell-Boltzmann model reproduces reasonably well (within a few tens of %) the Eddington result for a cuspy profile. For a cored profile, the influence of the baryonic components in the central regions is greater, and the Jeans model is not able to account for the departures of the phase-space distribution function from a Gaussian distribution that occur in the inner Galaxy. Still, even in that case the discrepancy between the Jeans model and the Eddington result remains smaller than for the standard halo model, and below a factor 2. Therefore, the Jeans model provides a very minimal way to estimate the phase-space distribution function that works reasonably well, provided the contribution of baryons to the gravitational potential is not much larger than that of the dark matter.
  3. Beyond the uncertainty on the inner slope of the dark matter density profile, which by far dominates the uncertainty on capture rates, the most important contribution to systematic errors on capture-related observables actually comes from the estimate of the typical speed of dark matter particles in the halo. Regardless of the actual model, it is crucial to make sure that capture rates do account for kinematic constraints on the target of interest, whether one is considering the Milky Way or another object. This is automatically accounted for in Eddington-like models. It can also be reasonably taken into account by considering a Maxwell-Boltzmann model with a velocity dispersion solution to the Jeans equation, provided the phase-space distribution function does not depart too much from a Maxwell-Boltzmann distribution due to the baryonic components, as discussed in the previous point. Furthermore, we have shown that properties of the dark matter phase space, such as the anisotropy of the dark matter velocity tensor, also play an important part in shaping the phase-space distribution function of dark matter in specific ways, and can have a significant impact on subsequent observables. In particular, we have shown that the uncertainty on the essentially unconstrained anisotropy leads to a systematic uncertainty of up to a factor 2 on capture rate predictions.
  4. Finally, an additional very important difference between predictions relying on the Maxwell-Boltzmann approximation and Eddington-like methods is that the latter provide full phase-space models, whereas the former — even the Jeans model — do not. This allows for a rigorous assessment of whether these models correspond to stable solutions of the underlying collisionless Boltzmann equation. This is crucial since it determines whether the phase-space distribution function used to derive constraints from capture is physical or not, and affects the size of the associated systematic uncertainties, which can only be trusted if they are based on physical models. In particular, this affects significantly the size of uncertainty bands from the unknown anisotropy, since some sets of anisotropy parameters must be rejected based on stability criteria.
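To make the sensitivity in point 1 concrete: capture is most efficient for slow particles, so the low-velocity tail of the speed distribution drives the capture rate. The sketch below compares that tail for two Maxwell-Boltzmann distributions; the 50 km/s cut and the two typical speeds are illustrative choices, not the paper's exact numbers:

```python
import numpy as np
from scipy.integrate import quad

# Maxwell-Boltzmann speed distribution with most-probable speed v0 (km/s):
#   f(v) = 4 v^2 / (sqrt(pi) v0^3) * exp(-(v/v0)^2)
def f_mb(v, v0):
    return 4.0 * v**2 / (np.sqrt(np.pi) * v0**3) * np.exp(-(v / v0) ** 2)

# Fraction of particles slower than an (illustrative) 50 km/s cut:
def low_speed_fraction(v0, v_cut=50.0):
    return quad(f_mb, 0.0, v_cut, args=(v0,))[0]

frac_shm = low_speed_fraction(220.0)    # standard-halo-model typical speed
frac_cold = low_speed_fraction(100.0)   # a smaller typical speed (made up)
# The colder distribution puts roughly an order of magnitude more particles
# below the cut, illustrating how the assumed typical speed propagates
# directly into capture-rate estimates.
```

This is only a toy comparison of Maxwellian tails; the paper's point is precisely that the full Eddington-like phase-space models predict these tails self-consistently rather than assuming them.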

Sensitivity of the Cherenkov Telescope Array to dark subhalos

Written by Javier Coronado-Blázquez.

Summary of the paper with the same title submitted to PDU.

arXiv: 2101.10003

In this work, we study the potential of the Cherenkov Telescope Array (CTA) for the detection of Galactic dark matter (DM) subhalos, focusing on low-mass subhalos – not massive enough to retain any baryonic content – and therefore lacking any multiwavelength counterpart. As in previous papers devoted to the Fermi-LAT and HAWC instruments, we note that if the DM is made of weakly interacting massive particles (WIMPs), these dark subhalos may appear in the gamma-ray sky as unidentified sources. We perform a detailed characterization of the CTA instrumental response to dark subhalos, using the ctools analysis software to simulate CTA observations under different array configurations and pointing strategies.

We distinguish three different observational modes: i) a key science project, the extragalactic survey (codename EGAL), which will observe a quarter of the sky at high latitudes with uniform exposure, providing unprecedented coverage at very high energies; ii) a proposed deep-field campaign (DEEP), which would point at a blank spot of the sky aiming to serendipitously find new sources, such as dark subhalos, thanks to the extreme sensitivity reached; and iii) the chance of finding a dark subhalo in the field of view of any of CTA's science operations through the exposure accumulated over the years, the so-called overall exposure (EXPO).

To quantify the latter strategy, one has to estimate the sky coverage in, e.g., 10 years of operation, as well as the median exposure time. We did so by extrapolating the operations of the MAGIC telescope, which shares its location with CTA-North. In this way, we obtain a realistic estimate of the accumulated observations, which turn out to cover a factor of 2 more area and a factor of 10 more time than the EGAL survey. This, together with information on the subhalo population as inferred from N-body cosmological simulations, allows us to predict the CTA detectability of dark subhalos, i.e., the expected number of subhalos in each of the considered observational scenarios.

In the absence of detection, for each observation strategy we set competitive limits on the annihilation cross section as a function of the DM particle mass, which lie between one and two orders of magnitude above the thermal cross section for the bb and ττ annihilation channels. In this way, CTA will offer the most constraining limits from subhalo searches in the intermediate 1−3 TeV range, complementing previous results with Fermi-LAT and HAWC at lower and higher energies, respectively, as well as providing an independent probe of DM.

Predicting the dark matter velocity distribution in galactic structures: tests against hydrodynamic cosmological simulations

Written by Thomas Lacroix.

Summary of the paper with the same title published in JCAP.

arXiv: 2005.03955

In this work, we have quantified the level of predictivity and the relevance of some isotropic models of velocity distribution functions, by comparing their predictions for several observables with direct measurements in highly resolved cosmological simulations, providing realistic test galaxies where both the dark matter and the baryons are dynamically linked through their mutual gravitational interactions. The main question we addressed is the following: can a reliable, though simplified, galactic mass model be translated into reliable predictions for the speed distribution and related moments? Answering this question and further quantifying the reliability of the procedure is important in a context in which (i) dark matter searches have been intensifying on galactic scales, and (ii) observational data have been accumulating which can better constrain the dark matter content of target objects or structures. Moreover, discovery prospects as well as exclusion limits on specific dark matter scenarios would certainly benefit from better estimates or control of theoretical uncertainties.

In particular, we have tested a complete model, the ‘Eddington inversion’ model, encapsulating a full phase-space description of the dark matter lying in a self-gravitating object, built from first principles while based on several simplifying assumptions: dynamical equilibrium, spherical symmetry, and isotropy. This model, a generic solution to the collisionless Boltzmann equation, allows one to derive the phase-space distribution function of dark matter from the knowledge of its mass density profile and of the full gravitational potential of the system (both required to be spherically symmetric). Therefore, it can be fully derived from a galactic mass model, in which the mass density distributions of all components are specified. We have compared this full phase-space distribution model with more ad hoc models for the velocity distribution only, based on variants of the Maxwell-Boltzmann approximation: one inspired by the isothermal sphere, where the peak velocity is set to the circular velocity, and another in which the peak velocity derives from the velocity dispersion calculated by consistently solving the Jeans equation. These models were used to predict the speed distribution function of a system and several relevant speed moments, as well as relative-speed moments.
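For reference, the classical Eddington inversion underlying this model expresses the isotropic phase-space distribution in terms of the DM density ρ and the total relative gravitational potential Ψ, with relative energy ℰ = Ψ(r) − v²/2:

```latex
% Eddington inversion: isotropic phase-space distribution f(E) from the
% DM density rho and the total relative potential Psi (both spherical),
% with relative energy  \mathcal{E} = \Psi(r) - v^2/2
f(\mathcal{E}) = \frac{1}{\sqrt{8}\,\pi^{2}}
  \left[\,\int_{0}^{\mathcal{E}}
      \frac{\mathrm{d}^{2}\rho}{\mathrm{d}\Psi^{2}}\,
      \frac{\mathrm{d}\Psi}{\sqrt{\mathcal{E}-\Psi}}
    \;+\; \frac{1}{\sqrt{\mathcal{E}}}
      \left(\frac{\mathrm{d}\rho}{\mathrm{d}\Psi}\right)_{\Psi=0}
  \right]
```

The speed distribution at a given radius then follows by evaluating f at ℰ = Ψ(r) − v²/2 and integrating over velocity directions, which is how the model predictions compared below are obtained.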

Galactic mass models, including a dark matter profile and several components for baryons, were fitted to three different highly resolved zoom-in cosmological simulations. These simulations were used in both their dark-matter-only and their hydrodynamical configurations, the former resembling would-be giant isolated dwarf spheroidal galaxies, and the latter resembling spiral galaxies similar to the Milky Way (the level of ‘Milky Way-likeness’ is not essential in this work).

We compared the model predictions for several velocity-dependent observables directly with the simulation data. This allowed us to establish that the Eddington model provides a fairly good description of the phase-space distribution function of dark matter in galactic structures, reaching a precision of ~10-20% for velocity or relative-velocity moments of order n = ±1, 2. It may perform better in describing dark-matter-only systems than those with baryonic domination at their centres, with the precision degrading by ~10% for the latter, although this is not generic, as one of our galaxies was still very well described. It is rather surprising, and even remarkable, that such a simple model can capture the dark matter dynamics so well, especially when one considers the strong assumptions it is built upon. Indeed, none of our simulated objects exhibits perfect dynamical relaxation, spherical symmetry, or isotropy. Still, the model is able to capture their main dynamical features. We emphasize that the Eddington model, in this context, provides a better description of realistic systems than the typical variants of the Maxwell-Boltzmann approximation used in the literature (in the sense of models, not Gaussian fitting functions). Considering that the latter lack solid theoretical grounds in this particular context, this is rather satisfactory from the theoretical point of view.

This work provides a quantitative estimate of the theoretical uncertainties affecting the Eddington inversion in the context of dark matter searches, both in dark matter-dominated objects and in spiral galaxies similar to the Milky Way. We stress that these uncertainties do account for departures from local equilibrium, which are at play in our virtual galaxies.

CTA sensitivity to branon dark matter models

Written by Alejandra Aguirre-Santaella.

Summary of the paper with the same title accepted by JCAP.

arXiv: 2006.16706

TeV dark matter (DM) candidates are gradually earning more and more attention within the community, since there has been no clear hint of DM signals in the GeV regime so far. One such candidate is the branon, which could be detected with the next generation of very-high-energy gamma-ray observatories such as the Cherenkov Telescope Array (CTA).

Branons represent the vibrations of branes embedded in a higher-dimensional space-time. These DM particles are WIMPs that may annihilate into, e.g., a pair of quarks, a pair of weak bosons, or even a pair of photons, although the probability of the latter is extremely low. The branching ratio of annihilation into each SM channel depends on the mass of the branon and the tension of the brane. If branons are thermal relics whose annihilation cross-section takes the value needed to account for 100% of the total DM content of the Universe, the tension becomes a function of the branon mass, and we are left with only one free parameter.

In this work, we study the sensitivity of CTA to branon DM via the observation of representative astrophysical DM targets, namely dwarf spheroidal galaxies. In particular, we focus on two well-known ones: Draco in the Northern Hemisphere and Sculptor in the Southern Hemisphere. For each of these targets, we simulated 300 h of CTA observations and studied the sensitivity of both CTA-North and CTA-South to branon annihilation, using the latest publicly available instrument response functions and the most recent analysis tools.

We computed annihilation cross section values needed to reach a 5σ detection as a function of the branon mass. Additionally, in the absence of a predicted DM signal, we obtained 2σ upper limits on the annihilation cross section. Our limits lie 1.5-2 orders of magnitude above the thermal relic cross section value, depending on the considered branon mass.

Yet, CTA will allow us to exclude a significant portion of the brane tension-mass parameter space in the 0.1-60 TeV branon mass range, up to tensions of ~10 TeV. More importantly, CTA will significantly enlarge the region already excluded by AMS and CMS, and will provide valuable complementary information to future SKA radio observations. We conclude that CTA has the potential to constrain brane-world models and, more generally, TeV DM candidates.

ISAPP school: gamma rays to shed light on dark matter (Madrid, 21-30 June 2020)


We are pleased to announce the ISAPP school “Gamma rays to shed light on dark matter”, which will take place in Madrid on 21-30 June 2020.

Webpage: https://workshops.ift.uam-csic.es/isapp2020madrid

The purpose of our School is to offer a general overview of the state of the art of gamma-ray dark matter searches. It is primarily aimed at students at the MSc and PhD level, as well as young postdocs working in the field. Being an ISAPP school, it will closely follow the format and spirit of previous ISAPP schools across the world.

The program will include introductory lectures on Astrophysics, Cosmology and Particle physics, as well as a series of more specific lectures on each of the main topics. We will also provide the students with some of the most useful computational tools in the field, by running a few hands-on sessions on specific yet quite standard dark matter work packages.

We are happy to count on some of the most renowned international experts for this exciting and intense scientific program!

Further information on the program, speakers and contents can be found on the school webpage.

SKA-Phase 1 sensitivity to synchrotron radio emission from multi-TeV Dark Matter candidates

Written by V. Gammaldi and M. Méndez-Isla.

Summary of the paper with the same title published in PDU.

arXiv: 1905.11154

Dark matter constitutes a fundamental piece of the paradigm of modern Cosmology, comprising ~25% of the energy density of the Universe. Despite numerous lines of evidence for the existence of dark matter, its nature remains elusive. Based on the study of thermal relics in the Early Universe, one possibility is to conceive of dark matter as particles. Indeed, the energy density of dark matter today could be explained in terms of Weakly Interacting Massive Particles (WIMPs) that were coupled to the primordial plasma.

If dark matter annihilates in galactic halos, it is reasonable to expect signatures in the sky that may be observed by different detectors. This would allow us to constrain the dark matter parameter space by comparing theoretical predictions with diverse observational data. Indeed, dark matter may annihilate into Standard Model particles that subsequently decay or hadronise into cosmic rays. In this scenario, dark matter would constitute an exotic source not only of cosmic rays but also of photons over a large range of frequencies, produced when those cosmic rays interact with the interstellar medium. One possibility is dark matter annihilating into electrons/positrons, whose interaction with galactic magnetic fields would produce synchrotron signals, generally at radio frequencies. In this sense, high-sensitivity radio telescopes, such as the Square Kilometre Array (SKA), could be crucial to put tight constraints on both the dark matter mass M and its thermally averaged cross section.
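
An order-of-magnitude check of why such signals fall in the radio band uses the standard characteristic synchrotron frequency, nu_c ≈ (3/2) γ² eB/(2π m_e c); the electron energy and field strength below are assumed illustrative values, not numbers from the paper:

```python
# Order-of-magnitude check (textbook synchrotron formula, not the
# paper's full computation): characteristic frequency of an electron
# with Lorentz factor gamma in a magnetic field B.
E_GEV = 10.0            # electron energy in GeV (assumed)
B_GAUSS = 1e-6          # ~micro-Gauss galactic field (assumed)

M_E_GEV = 0.511e-3      # electron rest mass in GeV
NU_G_PER_GAUSS = 2.8e6  # non-relativistic gyrofrequency eB/(2 pi m_e c), Hz/G

gamma = E_GEV / M_E_GEV
nu_c = 1.5 * gamma**2 * NU_G_PER_GAUSS * B_GAUSS
print(f"{nu_c:.2e} Hz")  # ~1.6e9 Hz: GHz radio band
```

GeV-scale electrons in micro-Gauss fields thus radiate at GHz frequencies, squarely in the band covered by SKA.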

With the purpose of constraining radio signals from TeV dark matter candidates with SKA, we compute the expected flux density for different annihilation channels in the Draco dwarf spheroidal galaxy and compare it with the SKA sensitivity. Varying the dark matter mass M and the thermally averaged cross section, we set sensitivity constraints, as shown in the Figure below. In this Figure, the region above the orange and blue curves shows the dark matter parameter space detectable by the SKA. Furthermore, the intersection between the orange curve and the dashed black line, representing the thermal relic cross section of 3×10^-26 cm^3/s, shows that the maximum observable mass for thermal relics would lie around 10 TeV for dark matter annihilating into W+W- and b quarks.
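
The "maximum observable thermal mass" is simply where a sensitivity curve crosses the thermal relic line. A minimal sketch with an invented sensitivity curve (not the one from the paper) shows the operation:

```python
import numpy as np

# Toy illustration: find the largest DM mass whose projected <sigma v>
# sensitivity still reaches the thermal relic value. The sensitivity
# curve below is made up for demonstration, NOT the paper's result.
masses = np.logspace(-1, 2, 200)           # DM mass grid, TeV
sens = 1e-27 * (1 + (masses / 5.0) ** 2)   # fake sensitivity curve, cm^3/s
THERMAL = 3e-26                            # thermal relic cross section

reachable = masses[sens <= THERMAL]        # masses probed down to thermal
print(reachable.max())                     # maximum thermal mass, in TeV
```

For the real curves of the paper, the same crossing for the W+W- and b-quark channels lands near 10 TeV.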

A similar analysis is performed for extra-dimensional brane-world DM candidates, dubbed branons, i.e., new degrees of freedom appearing in flexible brane-world models. This particular case is analysed both for standard astrophysical scenarios and for alternative ones in which the synchrotron signal would be enhanced by the presence of an intermediate-mass black hole. This latter possibility could be the key to observing dark matter masses beyond the ~10 TeV detectable in conventional scenarios.

Finally, we compare the SKA facilities with other detectors for dark matter searches in different frequency ranges. Even though SKA is expected to be the most sensitive telescope at radio frequencies, the most suitable frequency range to detect dark matter depends on the annihilation channel. In this regard, our work also analyses the role played by detectors such as the GBT, the VLA or LOFAR in TeV dark matter multi-wavelength searches.