In 2015, a paper by George Becker caused a small revolution in the field of Reionisation. Reionisation is the process that made the Universe transparent to starlight, by destroying the neutral hydrogen left over literally everywhere by the Big Bang. That paper discovered that this ionisation of the Universe is a lot less homogeneous than previously thought, with large patches of the Universe still quite neutral while others are already transparent. Because (proto-)galaxies are very common and very homogeneously distributed across the Universe at that time, it seems they cannot be the main actors in destroying the neutral hydrogen; otherwise the process would proceed far more smoothly. Instead, it was suggested that quasars, which are extremely rare, extremely powerful accreting supermassive black holes, are responsible. In this interpretation, the large variations in neutrality are due to the rarity of the sources capable of ionising the gas. (Although a few research groups have claimed that galaxies can still make it happen on their own. This is definitely hotly debated, but the ‘rare sources’ hypothesis seems more popular in Cambridge, UK.)
This hypothesis has been tested in simulations of Reionisation and it does appear to work quite well (although two other competing models do too). But current simulations don’t include enough physical effects to make the argument water-tight. The black hole itself is not simulated; instead, ‘source particles’ are used, and the smallest units of gas have the mass of 10,000 stars or more. This is necessary because otherwise the simulations simply couldn’t be run, and the approximations are not as bad as they sound. Still, it is always good to add more physics.
In this paper, a team from UCL tests one of the previously ignored side-effects of the ‘quasars did it’ scenario: beaming. Previous simulations all assumed that ionising photons escape from quasars in all directions, but in reality they travel down jet-like funnels with an opening of roughly 30 degrees. This will (should!) change the power spectrum of ionisation considerably. The power spectrum tells you on which scales a field is inhomogeneous/correlated, or in other words, the typical spacing between ‘things’, such as ionised bubbles.
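To make the power-spectrum idea a bit more concrete, here is a toy sketch (my own illustration, nothing to do with the paper’s actual calculation): a 1D field of regularly spaced ‘bubbles’ has a power spectrum that peaks at the wavenumber corresponding to their typical spacing.

```python
import numpy as np

# Toy sketch (not from the paper): a 1D field of Gaussian "bubbles"
# placed every 64 cells. Its power spectrum peaks at the wavenumber
# corresponding to that typical spacing.
n = 1024                          # grid cells
x = np.arange(n)
spacing = 64                      # typical distance between bubbles
field = np.zeros(n)
for centre in range(0, n, spacing):
    field += np.exp(-0.5 * ((x - centre) / 8.0) ** 2)
field -= field.mean()             # power spectra are taken on fluctuations

power = np.abs(np.fft.rfft(field)) ** 2   # power spectrum P(k)
k = np.fft.rfftfreq(n)                    # wavenumber, in cycles per cell

k_peak = k[1:][np.argmax(power[1:])]      # skip k=0 (the mean)
print(1.0 / k_peak)                       # recovers the spacing: 64.0
```

The peak of P(k) picks out the bubble spacing, which is exactly the sense in which the power spectrum measures ‘typical spacing between things’.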
The paper is quite mathsy so let’s skip straight to the results. I made this schematic diagram above. The only important lines are the red and the thick black.
The important thing here is that if the quasar funnels are very narrow (thick black), then the only quasars we can see are the ones pointing directly at us, so there must be many more of them than we actually detect. To match the total number of photons needed for Reionisation (which we know), each individual quasar then has to be weaker. The red line shows what happens under the naive assumption that there is no beaming.
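As a back-of-the-envelope check (my own numbers and geometry convention, not the paper’s): if each quasar shines into two opposite cones of half-opening angle 30 degrees, only a small fraction of randomly oriented quasars point towards us, and the rest follows from keeping the total photon budget fixed.

```python
import math

# Back-of-the-envelope sketch (my assumptions, not the paper's): take a
# quasar shining into two opposite cones, each with half-opening angle
# theta. A randomly oriented quasar beams towards us with probability
# equal to the fraction of the sky its cones cover.
theta = math.radians(30.0)       # assumed half-opening angle
f = 1.0 - math.cos(theta)        # solid angle of both cones / full sky
boost = 1.0 / f                  # quasars that exist per one we detect
print(f, boost)                  # f is about 0.13, boost about 7.5

# To supply the same total number of ionising photons, each quasar must
# then be individually about 7.5 times weaker than in the no-beaming case.
```

The exact numbers depend on what ‘an opening of roughly 30 degrees’ means (half-angle or full angle), but the logic is the same: narrower beams mean more, weaker quasars.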
In region I, we are looking at small scales. The power on small scales is boosted a lot by the variation around each quasar (since the beams are narrow), but because each quasar is weaker, the effect largely cancels (the contrast between being close to and far from a quasar is reduced). The two models therefore roughly agree in region I, with only a 5-15% difference.
In region II, the very large scales, the distribution is dominated by the density field, because that’s where the ‘stuff’ (everything!) is, including the quasars. If the quasars are weaker and more numerous, their effect is more smeared out and the variation on the largest scales decreases overall.
In region III (intermediate scales), something weird happens because the neutral gas also follows the density field: strangely enough, the distribution becomes completely featureless at some scale. This is worse the narrower the beams are, because it happens in more places.
Unfortunately, the only scales we can observe with currently existing instruments are the smallest ones, where the effect of including beams is weakest. This is bad luck, because it means we most likely won’t be able to settle the question one way or the other.
Paper: ads