The universe is truly huge. It stretches for at least 13.5 billion light years (1.277×10²³ km) in every direction (considerably further in fact), and its massive size poses an equally large problem: how can you accurately measure how far away objects are?
Over such large distances conventional methods of measuring distance (like timing how long light takes to hit an object and reflect back) become impractical and very, very slow.
Thankfully astronomers can use other methods developed over the years to deal with these monumental universal distances. The three main methods by which they achieve this are:
- Parallax
- Cepheid variables
- and Redshift
Parallax uses changes in the relative positions of celestial objects throughout the year to estimate how far away they are using trigonometry. A star that is closer to the Earth will appear to move more than a star that is further away. The same principle applies when you look out of the window of a moving car: objects close to you whizz past while those in the distance move lazily across your field of vision for some time (provided you don’t move your head of course).
Parallax is only useful for relatively small distances, because beyond a certain point (around 16,000 light years – which may seem like a huge distance but is actually less than 1% of our Galaxy’s diameter, tiny in comparison with the rest of the universe) the relative position of a distant object changes so little that it is impossible to measure with any accuracy.
The standard unit of parallax is the Parsec or Parallax Arcsecond. Without going into too much detail of how it is calculated, a star that is one parsec away will show a parallax angle of one arcsecond in the sky. A star that is two parsecs away will appear to have moved half as much, and so on and so forth.
One parsec is approximately equal to 3.26 light years, which puts Earth’s nearest star (after the Sun), Proxima Centauri, at 1.29 parsecs away (4.2 light years).
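The parsec definition above boils down to a one-line calculation: distance in parsecs is simply one divided by the parallax angle in arcseconds. A minimal sketch in Python (Proxima Centauri’s parallax of roughly 0.7687 arcseconds is quoted here for illustration):

```python
def parallax_distance_pc(parallax_arcsec):
    """Distance in parsecs from a measured parallax angle in arcseconds."""
    return 1.0 / parallax_arcsec

PC_TO_LY = 3.26  # approximate light years per parsec

# Proxima Centauri shows a parallax of roughly 0.7687 arcseconds
d_pc = parallax_distance_pc(0.7687)
print(f"{d_pc:.2f} pc, or {d_pc * PC_TO_LY:.1f} light years")
```

Running this recovers the 1.29–1.30 parsec (4.2 light year) figure given above.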
Parallax forms the ‘bottom rung’ of the universal distance ladder (covering the shortest distances). Skipping over the middle rung for now, we move to the top – redshift.
Not only is the universe really, really big, it is expanding – quickly. This is a direct result of the initial explosion that sparked our universe’s formation – the Big Bang. This was not an explosion in any conventional sense: it did not expand from a single point, instead it happened everywhere in the universe at once. This means that even today every single object in the universe is moving away from everything else (yes, it is mind-blowing). This expansion is constant no matter where you are in the universe (as far as we can tell, though this is the subject of some debate) and has been measured at 70.4 km/s/Mpc (within a small range of uncertainty). Or in plain English, if two objects are a million parsecs apart they will be moving away from each other at 70.4 kilometres per second.
Now at first glance this may not seem all that useful in determining distances, but over large distances it is a very powerful method indeed.
As this rate of expansion is fixed and occurring throughout the universe, and most importantly everything is moving away from everything else (thanks to the space around it literally stretching), an object that is 2 million parsecs away will be moving at 140.8 km/s, an object that is 3 million parsecs away will be whizzing away from us at 211.2 km/s and so on.
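The relationship described above is Hubble’s law, v = H₀ × d, and it runs both ways. A sketch using the value of the Hubble constant quoted in this post:

```python
H0 = 70.4  # Hubble constant in km/s per megaparsec, as quoted above

def recession_velocity(distance_mpc):
    """Recession velocity (km/s) of an object distance_mpc megaparsecs away."""
    return H0 * distance_mpc

def distance_from_velocity(velocity_km_s):
    """Invert Hubble's law to infer distance (Mpc) from a measured velocity."""
    return velocity_km_s / H0

for d_mpc in (1, 2, 3):
    print(d_mpc, "Mpc ->", recession_velocity(d_mpc), "km/s")
```

This prints the 70.4, 140.8 and 211.2 km/s figures from the paragraph above; `distance_from_velocity` is the inversion the next paragraph relies on.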
That means we can infer the distance of an object by measuring its recession velocity. The only question is: how do we measure the velocity of something that is literally halfway across the universe? The answer comes from a somewhat unusual source.
Light, as most people know, travels at a fixed speed (3×10⁸ m/s) in a vacuum. Regardless of what you try to do to it, its speed is constant. As the universe expands you might expect light to appear to slow down as it has to travel further; in actual fact it stretches with the universe.
This manifests as an increase in the light’s wavelength (the distance between two adjacent, identical points on a wave). An increase in wavelength means the light has ‘lost’ energy and appears redder, hence the term redshift. Redshift is calculated by measuring the difference between observed spectral features in celestial objects and the same features measured here on Earth, i.e. where redshift = 0. Redshift is abbreviated to z and is the only practical method of determining distances to objects far into the universe.
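As a sketch of how this works in practice, the code below computes z from a hypothetical observation of the hydrogen-alpha line (rest wavelength 656.3 nm) and, for small z only, converts it to a distance via Hubble’s law. The observed wavelength is made up for illustration:

```python
C_KM_S = 3.0e5   # speed of light in km/s
H0 = 70.4        # Hubble constant in km/s/Mpc, as quoted above

def redshift(observed_nm, rest_nm):
    """z = (observed - rest) / rest for a known spectral line."""
    return (observed_nm - rest_nm) / rest_nm

# Hypothetical galaxy: hydrogen-alpha (rest 656.3 nm) observed at 670.0 nm
z = redshift(670.0, 656.3)
v_km_s = C_KM_S * z       # approximation valid only for small z
d_mpc = v_km_s / H0       # Hubble's law gives the distance
print(f"z = {z:.4f}, v = {v_km_s:.0f} km/s, d = {d_mpc:.0f} Mpc")
```

For this made-up galaxy the shift of under 14 nm implies a recession velocity of about 6,260 km/s and a distance of roughly 89 megaparsecs.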
So far we have dealt with Parallax, which is used for very short distances (by universal standards), and Redshift, which deals with very large distances. Whilst these are each very useful, there is a considerable gap between the two, meaning that some objects within our own galaxy could not have their distances measured with any accuracy. Worse, with practically no overlap in the measurement ranges of the two methods, their results could not be compared and checked to see if they agreed with each other.
Thankfully there is a middle rung on the cosmological distance ladder – the Cepheid variables.
Cepheid variables (more accurately Population I Cepheids or Classical Cepheids) take their name from the second such star to be discovered – Delta Cephei.
They were once main sequence B class stars like many of those found within the Pleiades open star cluster (right).
These stars are all somewhere between 4 and 20 times the mass of the sun and spent just a few million years as a main sequence star fusing hydrogen into helium, before departing the main sequence and evolving into the supergiants we see today.
Such yellow supergiants are inherently unstable, undergoing regular pulsations. The interior of the star changes in cycles, altering how much radiation the star retains and causing it to swell; radiation is then released into space at a faster rate, causing the star to contract back to its original size.
This cycle repeats again and again, with a period directly linked to the luminosity of the star: the more luminous the star, the longer its pulsation period (the time taken for one expansion phase followed by one contraction phase). As this relationship holds for all Cepheid variables regardless of their distance from Earth, two stars that have the same pulsation period but differing apparent brightnesses must be at different distances.
That means that relative distances can be calculated by comparing Cepheids. Whilst that would be useful in itself, there is more to the Cepheids: some of them are close enough to have their distances calculated by Parallax and other methods. With that starting point for comparison, the actual distances of all Cepheids, however remote, can be calculated accurately.
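The relative-distance comparison described above follows from the inverse-square law: two Cepheids with the same period have the same luminosity, so the dimmer one must be farther away. A minimal sketch (the flux values are made up for illustration):

```python
import math

def relative_distance(flux_near, flux_far):
    """How many times farther away the second star is, given the observed
    fluxes of two Cepheids with identical pulsation periods.
    From F = L / (4 * pi * d**2):  d_far / d_near = sqrt(F_near / F_far)."""
    return math.sqrt(flux_near / flux_far)

# A Cepheid appearing 100x dimmer than an identical-period Cepheid is 10x farther
print(relative_distance(100.0, 1.0))  # → 10.0
```

Anchor one of the pair with a parallax measurement and the ratio turns into an absolute distance, which is exactly how the ladder’s rungs connect.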
Cepheids have also been used to refine our results and estimates of important cosmological constants including values for the Hubble constant – a measure of how the universe is expanding.
So far around 700 classical Cepheids have been identified within the Milky Way, with an overall total of several thousand known as far out as NGC 4603 – 100 million light years away. Cepheids have been detected, and have had their properties measured, at distances at which redshift becomes detectable, bridging the gap between the two measurements and providing a way of comparing and confirming calculated distances for the very close and very distant universe.
For our mathematically minded readers (anyone else is welcome to have a look, but if the sight of an equation makes you cringe you are more than welcome to skip on ahead) I will now detail how to first calculate the luminosity of a classical Cepheid and then how to use that luminosity to calculate how far away the star is from Earth.
For the first stage all that is required is the pulsation period of the star. This is obtained by monitoring the star’s brightness over a long period of time and identifying the time for the star to dim from its peak brightness to its dimmest and to brighten back to its peak again. This is recorded for several cycles and averaged to reduce uncertainties in the measurements and to minimise the effect of any mild random fluctuations in the star itself.
The value for the period – P – is then substituted into the equation:

M = -2.78(log P) - 1.35
It looks rather complicated but the equation really isn’t; if you are careful it can even be done in one step using a calculator!
In this case M is the mean absolute magnitude of the star – a measure of distance-adjusted brightness.
The second stage in the calculation is essentially a comparison between the mean absolute magnitude and the mean apparent magnitude (the average observed magnitude over the full pulsation cycle). In this case the larger the difference the greater the distance involved – as of course, the further away an object is the dimmer it appears.
The equation we use in this case is:

m - M = 5(log D) - 5

where:
- M is the calculated mean absolute magnitude
- m is the observed mean apparent magnitude
- and D is the distance in parsecs
This formula can in turn be rearranged to give an equation where D is the subject:

D = 10^((m - M + 5)/5)
And all you need do is plug in the numbers.
For example, let’s imagine we measure a Cepheid to have a period of 51 days and a mean apparent magnitude of 18.5.
Its mean absolute magnitude would be:
M = -2.78(log 51) - 1.35 = -6.10 (Try it yourself if you don’t believe me!)
So now if we substitute M and m into the distance equation we get:

D = 10^((18.5 + 6.10 + 5)/5) = 10^5.92 Parsecs

Which works out at roughly 832,000 Parsecs, or about 16.6 times the distance to the Large Magellanic Cloud.
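The whole worked example can be reproduced in a few lines. A sketch using the period–luminosity relation and distance modulus from earlier in the post:

```python
import math

def absolute_magnitude(period_days):
    """Period-luminosity relation for classical Cepheids: M = -2.78 log(P) - 1.35."""
    return -2.78 * math.log10(period_days) - 1.35

def distance_pc(apparent_mag, absolute_mag):
    """Distance modulus m - M = 5 log(D) - 5, rearranged for D in parsecs."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

M = absolute_magnitude(51)   # → about -6.10
D = distance_pc(18.5, M)     # → about 8.3e5 parsecs
print(f"M = {M:.2f}, D = {D:.0f} pc")
```

Swap in your own period and apparent magnitude and the same two lines give the distance to any classical Cepheid.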
There you go, not so scary after all!
An example of a Cepheid Variable (left) with its spectrum (below).
Recently the distances calculated using Cepheids have been called into some doubt, with concerns over the stars themselves.
It has long been thought that there may be some change in the period of the star’s pulsations. This was predicted because stars slowly lose mass over time, blowing out a fraction of their mass in their stellar wind and in the emission of thermal and electromagnetic radiation (heat and light).
This mass would then form a shell of material surrounding the star. This dust would absorb visible light from its parent star and re-emit it in the infra-red frequency range, so the star would appear dimmer than it truly is in visible light and brighter in infra-red light. Despite the differences being quite small, they would introduce large uncertainties in the calculated distance of the star, which in turn would cause serious inaccuracies in the values derived from such measurements. Proof of this mass loss, and a way to compensate for it, is therefore vital in maintaining the integrity of the cosmic distance ladder and cosmology as a whole.
Recently, astrophysicists using NASA’s Spitzer Space Telescope have confirmed this mass loss by taking detailed images of the class’ namesake, Delta Cephei. They show an intense bow shock surrounding the star, caused by the high-speed interaction between the star’s stellar wind and the surrounding interstellar gas and dust.
Other observations also show similar bow shocks surrounding at least 25% of currently known Cepheids.
With these new measurements of mass loss a correction program can now be used to compensate for the reduction in luminosity of the variables and so allow for better measures of distances and more accurate calculation of the variables that define our universe.
You can read more on Cepheid Variables here
The ESA’s Planck Space Telescope completed its mission on Saturday.
The mission was designed to peer into the detail of the cosmic microwave background radiation (CMB) – the residual energy left after the Big Bang.
Planck also used its microwave detectors to gaze at the cold dust within our galaxy and beyond, detecting many new galaxy clusters in the distant universe. Some of these even appear to be interacting and merging to form even larger superclusters.
The first data from Planck was released last year and included the improved catalogue of galaxy clusters. The first data set from its study of the CMB is yet to be released, though this will be made available to scientists outside the project in the early stages of 2013.
The mission was originally planned to make two surveys of the entirety of the sky over the space of 15 months. Planck performed better than expected and completed five surveys over 30 months, double the original mission expectancy.
The data released so far also reveals that stars in the universe were being formed at one thousand times the current rate, a fairly phenomenal statistic!
The telescope is equipped with two instruments:
- The High Frequency Instrument or HFI
- The Low Frequency Instrument or LFI
These two instruments work in tandem to build up a highly accurate map of the CMB. Unfortunately the HFI is now offline, as the spacecraft depleted the last of its coolant supply and has now warmed above the critical temperature required for the useful operation of the detector. The LFI however is still in working order and will continue to provide additional data over the rest of the year.
No doubt the data from Planck will reveal many new and interesting features of the universe over the next few years; I for one am very excited!
You can read more here.
The first stars coalesced from the primordial hydrogen and helium synthesised in the first few minutes after the birth of our Universe, appearing just a few hundred million years after the Universe burst into existence.
All stars are born when a cool clump of hydrogen and helium gas condenses under its own gravity. As it does so, the central regions of the clump heat up, eventually reaching the millions of Kelvin required for nuclear fusion to occur. Once this critical point is reached the star fuses hydrogen into helium and begins to shine brightly. Stars grow in mass by absorbing more material onto their surfaces from the surrounding nebula. Eventually their increased energy output disperses the surrounding cocoon to the point where the star’s gravity is insufficient to attract any more material, and the star ceases growing.
In the current era of the universe this process is aided by heavier elements such as carbon and oxygen, which help cool the surrounding gas and so allow it to fall onto the star – if the gas were too hot it would have too much energy and escape off into space. The first generation of stars did not have any heavy elements to aid their formation, as such elements are only produced in the final stages of a massive star’s life and so had yet to be made. Thus the first stars would have had a much shorter time to absorb matter from their surroundings before it escaped their clutches.
This may suggest that the first stars should be much smaller than those currently populating the Universe, though this is not the case. As the density of material in the early Universe would have been much higher, they would have had more material within their grasp to begin with, and so all predictions produce stars that are much more massive than the current average – which is only a fraction of the mass of Sol.
It had previously been thought that the first generation of stars would have been the most massive of all, with material equating to several hundred solar masses, similar to the most massive stars currently known – such as those within the Large Magellanic Cloud’s R136 open cluster. This new study casts doubt on this view of the first stellar newborns.
Simulations conducted at NASA’s Jet Propulsion Laboratory in California by a team of researchers led by Takashi Hosokawa have indicated that such first generation stars would be ‘only’ a few tens of times as massive as Sol.
Whilst they are still significantly more massive than the vast majority of stars in today’s Universe, this discovery removes a significant problem for Cosmology.
If the first generation of stars were as massive as previously predicted, they would have produced a particular chemical signature of elements during their lives, detectably different from those of conventional-mass stars, as the higher mass equates to a higher core temperature. Combine this with the unique blend of elements available to produce the first stars (~75% hydrogen, ~25% helium with tiny traces of lithium, deuterium and beryllium) and there should be a ‘bar code’ of particular abundances of various heavy elements (as fusion reaction rates are easily altered by starting conditions) in the oldest of today’s stars that could only be attributed to the existence of such monstrous precursor stars. As of yet this has not been detected.
Whilst this proves to be a problem for the old model of 1st generation stars, the revised masses produced by this study remove it altogether – with no such high mass stars the elemental fingerprint would never have existed, easily explaining the lack of detection: there is nothing to detect!
This fingerprint of elements would have been generated by the stars ending their lives as an as-yet-undetected variety of supernova. The new mass predictions would cause the 1st stars to detonate in a manner akin to their more recent brethren, allowing the lack of unusual supernova detections to be explained satisfactorily as well.
As we have seen, the masses of the 1st stars are now thought to be much lower than first expected. But why would they be less massive than previously predicted?
The answer lies in the surroundings of the stars. The new simulations indicate that the region directly around a forming first generation star was heated to temperatures of up to 50,000 Kelvin, or 8 ½ times the surface temperature of Sol. This extremely high temperature would have allowed the surrounding gas to disperse much more quickly than previously thought; thus the growing star had less time to absorb material, leading to a lower final mass.
The simulation produced a star of just 43 solar masses, a far cry from the 1000 solar mass superstars of early predictions (though I should add that as predictions have become more advanced the mass of the 1st generation stars has been falling continually, and this is the latest study in a long line to reduce the value further).
You can read more here