[Image: NASA]


By Hindemburg Melão Jr.

 

Yesterday (3/30/2022), news was published of the discovery of a star located 28 billion light-years away (*), observed with the Hubble Space Telescope. Unfortunately, I could not find the original article on Arxiv.org or ResearchGate, but the data released on the NASA website are already enough to identify an error in the estimate.

 

The mass estimate presented in the paper is around 50 to 100 solar masses, but the correct mass should be much smaller. More data would be needed to calculate it with greater precision, but as a preliminary estimate, if I wanted to try to hit a narrow target, I would place it between 7 and 30 solar masses; and if I wanted to make a “smart” estimate (**), merely to be closer to the correct value than the author of the article, I would say 49.9999 solar masses, so that practically any value below 50 would favor me.

 

When I started reading the article, my first thought was that it must be some object of an unknown class, perhaps some slowly evolving transient that had not yet been recognized as such. I still haven't ruled out that possibility. But if it is indeed a star, it must have a mass close to the Eddington limit, 100 to 150 solar masses, for it to be possible to register it even though it is so far away. In the current model of stellar evolution, a star cannot maintain its hydrostatic equilibrium if its mass is greater than this limit, and for such a distant star to be detectable without allocating years of exposure time pointing exclusively in its direction, it would need to be as luminous as possible. By the mass–luminosity relation, luminosity is closely tied to mass, theoretically by a relation of the type L ∝ M^k, where k = 4, but the empirical data show a slightly different relation, in which the value of k varies between 2.5 and 4, depending on the metallicity and other properties of the star.
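To make the scaling concrete, here is a minimal sketch (in Python; the grid of masses and k values is illustrative, chosen by me rather than taken from the article) of how strongly luminosity grows with mass under L ∝ M^k:

```python
# Minimal sketch of the mass-luminosity scaling L ~ M^k discussed above.
# The masses and exponents below are illustrative, not from the article.
for k in (2.5, 3.5, 4.0):
    for mass in (1, 10, 50, 100, 150):   # mass in solar masses
        lum = mass ** k                  # luminosity in solar luminosities
        print(f"k={k}: M={mass:>3} Msun -> L ~ {lum:>13,.0f} Lsun")
```

Even at the low end, k = 2.5, a 150-solar-mass star would be hundreds of thousands of times more luminous than the Sun, which is why only very massive stars are plausible candidates for detection at such distances.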

 

The fact is that the greater the mass, the greater the luminosity, so the mass would need to be large, perhaps at the threshold of what physics as we know it would “allow”; hence 100 to 150 solar masses would be an adequate estimate. The Eddington limit does not explicitly state an upper limit on mass, but rather a limit on mass with respect to luminosity, and since one depends on the other, this amounts to an asymptotic limit for mass; it is not a well-defined limit, however, because the value of k is neither constant nor well known. In any case, the title of the article and its first lines led me to these reflections. Then came the author's estimate that the mass was about 50 to 100 times that of the Sun, which seemed quite plausible to me, corroborating my initial impression, and I read on.

 

Shortly afterward, I came across the information that its brightness was being magnified thousands of times by gravitational microlensing. This is the key point for detecting the error in the estimate. If there were no gravitational lensing involved, I would expect the mass to be really very large, close to the Eddington limit, but with this additional information the situation changes completely.

 

The problem is that the uncertainty about how many times the gravitational lens is “intensifying” the brightness of this star is very large, easily exceeding 3 or 4 orders of magnitude, while the uncertainty in the mass, though also large, is comparatively smaller, perhaps around 1 order of magnitude. This calls for a Bayesian approach in order to arrive at a realistic and reasonably accurate estimate of the mass. As the uncertainties are very large, a reasonable way to handle the problem is to work with the logarithms of the variables rather than the variables themselves.
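As a rough illustration of that log-space Bayesian reasoning, the sketch below (Python; the exponent k, both priors, and the brightness value are assumptions of mine, not values from the article) samples a broad prior on the log-magnification, solves for the mass each sample implies via the brightness constraint, and reweights by a narrower mass prior:

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3.5          # assumed mass-luminosity exponent, L ~ M**K
LOG_L_OBS = 8.9  # assumed log10 of the luminosity (in Lsun) the star would
                 # need if there were no lensing (roughly consistent with the
                 # magnitudes discussed later in the article)

# Assumed prior on log10(mass/Msun): about 1 order of magnitude of uncertainty
M_MU, M_SIGMA = np.log10(5.0), 0.5
# Assumed prior on log10(magnification): 3-4 orders of magnitude, much broader
MAG_MU, MAG_SIGMA = 3.0, 1.5

# Brightness constraint: K*log10(M) + log10(mag) = LOG_L_OBS.
# Sample the magnification, solve for the implied mass, and weight each
# sample by the mass prior (a simple importance-sampling scheme).
log_mag = rng.normal(MAG_MU, MAG_SIGMA, 1_000_000)
log_m = (LOG_L_OBS - log_mag) / K
weights = np.exp(-0.5 * ((log_m - M_MU) / M_SIGMA) ** 2)
weights /= weights.sum()

# Weighted posterior quantiles for the mass
order = np.argsort(log_m)
cdf = np.cumsum(weights[order])
lo, med, hi = 10 ** np.interp([0.16, 0.5, 0.84], cdf, log_m[order])
print(f"posterior mass ~ {med:.0f} Msun (68% interval: {lo:.0f}-{hi:.0f})")
```

With these (made-up) priors the posterior lands in the tens of solar masses, well below 50: the broad magnification prior absorbs most of the observed brightness, so the mass no longer needs to be extreme.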

 

In this context, the mass needs to be estimated based on parameters that take into account the following points:

 

  1. Distribution of frequencies of stars as a function of mass and luminosity, which determine the time spent on the main sequence. The greater the mass, the shorter the time spent on the main sequence, therefore, the lower the relative abundance of more massive stars and the lower the probability of a randomly selected star having such a mass.

  2. Distribution of light intensities magnified by gravitational lenses.

  3. Theoretical upper limit of mass a star can have.

  4. Time elapsed from the origin of the Universe until the first stars began to form. In the current model, this limit is estimated to be between 100 million and 250 million years after the Big Bang.

  5. Probability of formation of nebulae with sufficiently concentrated heterogeneities to form stars with different masses.

  6. Other parameters could also be considered to make the estimate more complete, but as two of them are very inaccurate and imprecise, it would be useless to try to refine the calculation if part of the data necessary for this refinement is not available.

 

About 2,500 gravitational lenses are known, but I have not found data on the distribution of the intensity levels of these lenses, so we do not have the second parameter. Parameters 1, 3 and 4, however, are available. The fifth parameter will be commented on in the next paragraphs.

 

The upper limit of mass, as already mentioned, is about 150 solar masses, and the residence time on the main sequence (t) as a function of mass is approximately given by t = e^(9.1 − 2·ln M), where M is the mass of the star in solar masses and t is measured in millions of years. In the case of the Sun, for example, applying this formula gives about 9 billion years for its time on the main sequence. For a much more massive star like Rigel, 21 times the mass of the Sun, the main-sequence residence time is about 20 million years.
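A minimal sketch checking these numbers against the formula quoted above:

```python
import math

def ms_lifetime_myr(mass_solar: float) -> float:
    """Main-sequence residence time in Myr, per the article's
    approximation t = exp(9.1 - 2*ln(M))."""
    return math.exp(9.1 - 2.0 * math.log(mass_solar))

for name, m in [("Sun", 1.0), ("Rigel", 21.0), ("50 Msun", 50.0), ("100 Msun", 100.0)]:
    print(f"{name:>9}: {ms_lifetime_myr(m):>8,.1f} Myr")
# Sun ~8,955 Myr (~9 Gyr), Rigel ~20 Myr, 50 Msun ~3.6 Myr, 100 Msun ~0.9 Myr
```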

 

In the case of this newly discovered star, if its mass were indeed 50 times that of the Sun, its residence time on the main sequence would be about 3.6 million years; and if its mass were 100 times that of the Sun, its stay on the main sequence would be only 900,000 years. Much less massive stars, such as red dwarfs of spectral class M8 or M9, can remain on the main sequence for trillions of years. But in this case, as the Universe was only 800 to 900 million years old and the first stars had formed only 600 to 800 million years earlier, the relative abundance of stars with a mass less than 3.8 times the mass of the Sun was not much greater than the abundance of stars with exactly 3.8 solar masses, because stars with a “life expectancy” of 600 million years or more had not yet had time to run out of nuclear fuel.
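Inverting the same lifetime formula gives the mass threshold quoted above (a small check, using the same approximation):

```python
import math

# Invert t = exp(9.1 - 2*ln(M)) for the mass whose main-sequence lifetime
# equals the ~600 Myr that had elapsed since the first stars formed.
t_available_myr = 600.0
m_threshold = math.exp((9.1 - math.log(t_available_myr)) / 2.0)
print(f"threshold ~ {m_threshold:.1f} solar masses")  # ~3.9, close to the 3.8 in the text
```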

 

But residence time on the main sequence should not be the only criterion for determining relative abundance. As the names of the variables that I will quote next are long, I will give them short names:

 

  • “main sequence dwell time” I will call “tp”

  • “probability of formation of nebulae with sufficiently concentrated heterogeneities to form stars with different masses” I will call “pm”

  • “relative abundance of stars of different spectral classes” I will call “ar”

 

tp should not be the only criterion for determining ar; it would also be necessary to consider pm. Lower concentrations should be more likely than higher concentrations. However, tp is not a directly measured variable. It is calculated from the (incorrect) hypothesis that tp is the only relevant criterion for determining ar, so when the empirically observed ar is used to calculate tp, other parameters are in fact already folded into this calculation, including pm.

 

Therefore, the currently calculated values of tp are incorrect, because they do not treat separately how much each variable, tp and pm, contributes to determining ar. The absence of pm data increases the uncertainty in the calculation.

 

So, we have the following scenario:

 

If a star had been drawn at random at that time, the probability of its mass being greater than 3.8 times the mass of the Sun would be much less than the probability of its mass being less than this value. The lack of information on pm makes it impossible to determine this probability distribution for masses below 3.8, but we know that the probability that the typical mass at that time was less than this value is greater than the probability that it was greater. So we can estimate that the average mass at that time was about 2 solar masses. We can also estimate that the abundance of stars with a mass of 3 solar masses or less was about 200 times greater than the abundance of stars with a mass of 50 solar masses or more.
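One crude way to sanity-check a ratio of that order is to integrate an initial mass function. The sketch below uses a Salpeter IMF (dN/dM ∝ M^−2.35) and an assumed lower cutoff of 1 solar mass; both the IMF and the cutoffs are assumptions of mine, not the article's:

```python
ALPHA = 2.35  # Salpeter IMF slope: dN/dM ~ M**(-ALPHA) (assumed)

def imf_count(m_lo: float, m_hi: float) -> float:
    """Relative number of stars with mass in [m_lo, m_hi], arbitrary units."""
    p = 1.0 - ALPHA
    return (m_hi ** p - m_lo ** p) / p

ratio = imf_count(1.0, 3.0) / imf_count(50.0, 150.0)
print(f"N(1-3 Msun) / N(50-150 Msun) ~ {ratio:.0f}")  # ~200 with these cutoffs
```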

 

On the other hand, the smaller the mass, the greater the intensification the gravitational lens must produce to account for the brightness that was recorded, and the lower the probability of formation of a gravitational lens with the specific properties needed to achieve this effect. On the one hand, the entire universe is filled with deformations produced by gravitational fields; this is very common. On the other hand, a “gravitational lens” requires that the curvature produced be very specific, forming a caustic whose cutting plane is perpendicular to the observer's optical axis, and that same optical axis needs to pass very close to the object whose light will be intensified. To make the calculation correctly, it would be necessary to know the frequency with which gravitational lenses of different intensities occur, but in this case we have almost no information about this frequency distribution. We can assume that the uncertainty in the log of this variable is 3 to 4 times greater than the uncertainty in the mass estimate in the “worst” case, and equal to it in the “best” case.

 

Since we “know” that the mass cannot be greater than 150 solar masses, we need to find the point between 2 and 150 solar masses at which the probabilities balance out. If there were no such gravitational lens, the star would not be visible, or it would need to be hundreds to thousands of times more massive than the Sun, which would be inconsistent with the physics we know.

 

I would need the data from the original article to determine this correctly, because the Hubble website gives no information about how long the exposure was.

 

The reported mv is about 27.2, so its absolute magnitude needs to be about −17.5 to be consistent with the observed redshift. This corresponds to 170 solar masses if k is set to 4. If k is set to 3, the mass would be 900 times that of the Sun. A reasonable value for k would be about 3.5, in which case the mass would be about 350 times that of the Sun. At this juncture, the apparent brightness, combined with the intensification of this brightness caused by the caustic, would determine the mass according to the table below:

[Table (EARENDEL.png): estimated mass as a function of the assumed lens magnification]
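The arithmetic behind those figures can be reproduced from the magnitudes alone. Below is a minimal sketch (Python; the Sun's absolute magnitude of about 4.83 and the grid of magnifications are inputs of mine) that converts the quoted absolute magnitude into a luminosity and then into a mass via L ∝ M^k, with and without lens magnification:

```python
M_SUN_ABS = 4.83  # Sun's absolute magnitude (approximate value, assumed here)
M_ABS = -17.5     # absolute magnitude quoted in the text

def mass_from_abs_mag(m_abs: float, k: float, magnification: float = 1.0) -> float:
    """Mass in solar masses implied by an absolute magnitude, assuming
    L ~ M**k and that the observed flux is boosted `magnification` times."""
    lum_solar = 10 ** ((M_SUN_ABS - m_abs) / 2.5)  # L/Lsun from magnitudes
    return (lum_solar / magnification) ** (1.0 / k)

# Without lensing: approximately reproduces the 170 / ~350 / ~900 figures
for k in (4.0, 3.5, 3.0):
    print(f"k={k}: ~{mass_from_abs_mag(M_ABS, k):,.0f} Msun")

# With lensing (k = 3.5): the implied mass drops as the magnification grows
for mu in (300, 1_000, 10_000, 100_000):
    print(f"magnification {mu:>7,}: ~{mass_from_abs_mag(M_ABS, 3.5, mu):5.0f} Msun")
```

With a magnification of about 300, k = 3.5 gives roughly 70 solar masses, inside the published 50–100 range; magnifications of 10^4 to 10^5 bring the implied mass down into the 10–30 range.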

 

The greater the enhancement caused by the lens, the smaller the mass needed to produce the detected apparent brightness. The published mass estimate assumed a lens magnification of around 300 times. But the uncertainty in this value is very large, reaching thousands of times, and for very distant objects a small deflection of the light can produce a gigantic increase in the brightness intensity. That is why a gravitational lens can increase the luminous intensity of an object millions of times, as long as its position in relation to the masses causing the distortion is very specific. And that seems to be exactly the case.

 

Without knowing the distribution of intensities of the different registered lenses, there are not enough elements for a more accurate estimate, in part because the magnification of the brightness is not caused by the lens as a whole, but by the alignment of the object with a certain region of the lens. A gravitational lens is not a carefully polished optic with a curvature designed to give us a specific effect; it is the result of chance, which by “luck” produced a curvature of space-time in a certain region, a curvature full of irregularities, but one that in very specific areas produces some “desirable” effects like this. So it is not something that can be calculated theoretically from some simple model, but something that requires empirical data from which one can check how often different levels of light intensification occur.

 

It is also important to remember that objects in the universe are moving relative to each other, so the extremely precise alignment currently formed between that star, the caustic and the Earth is changing, and as a result the brightness of the star's light reaching us should vary as the relative position of the lens changes. This makes it difficult to distinguish a star from a nova or supernova, because depending on the properties of the lens and the alignment of the objects, as the light from the nova or supernova increases, a reduction due to the lens may be occurring at the same time, resulting in an approximate compensation from the observer's point of view, who would perceive the brightness as hardly changing. Of course, this would require a very specific configuration, and is therefore very unlikely. But among billions of cataloged astronomical objects, it is not so surprising that one of them finds itself in a situation that produces such an effect. So one still cannot rule out the possibility that it is simply a nova or supernova, or some other slower transient. If so, at some point the brightness should start to decrease persistently until it disappears. Even if this is not the case, the brightness should still start to decrease as the collimation of the optical system degrades, but it will be a slower process, with a light curve different from the typical curves of novae and supernovae. It is even likely that the lens has irregularities that make the brightness increase and decrease during this process, with oscillations that can last a few days or months, whereas the light curve of a nova or supernova rises only once and then only decays (unless there is another transient nearby).

 

So an estimate between 7 and 30 solar masses seems to be more realistic for this star than 50 to 100 solar masses. Perhaps between 3 and 45 solar masses is a safer range for the estimate.

 

 

 

----------------------------

(*) Several news reports mention a distance of 12.9 billion light-years, but what this number actually measures is the time it took the light emitted by the star to reach Hubble's sensors. This is not the same thing as the distance to the star, because as the Universe is expanding, the distance is much greater. The term used in cosmology is “comoving distance” for the correct distance, and “light travel time” for the interval of 12.9 billion years.
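For readers who want to check the two figures, they can be reproduced with a standard cosmology library. This is a hedged sketch assuming astropy is installed and using its built-in Planck18 parameters, with Earendel's reported redshift of about 6.2 as an input of mine (the redshift value is not stated in this article):

```python
from astropy import units as u
from astropy.cosmology import Planck18

z = 6.2  # redshift reported for Earendel (an input assumed here)

lookback_gyr = Planck18.lookback_time(z).to(u.Gyr).value
comoving_glyr = Planck18.comoving_distance(z).to(u.lyr).value / 1e9

print(f"light travel time : {lookback_gyr:.1f} Gyr")                 # ~12.9
print(f"comoving distance : {comoving_glyr:.1f} billion light-years")  # ~28
```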

 

(**) When Einstein performed his first calculations of the angle of deflection of light passing near the Sun, in 1911, he found the value 0.83″, while the value predicted in 1801 by Johann Georg von Söldner, based on Newtonian gravitation, was 0.84″. There are disagreements about whether Einstein knew of Söldner's work; Einstein was even accused of plagiarism for not citing Söldner in his 1911 estimates, especially given the high similarity between the results, obtained by very different methods. When Einstein redid his calculations in 1916, he arrived at 1.74″, which was consistent with the results measured at Príncipe (1.61″) and with one of the instruments used at Sobral (1.98″), though the results obtained with another instrument at Sobral (0.93″) were favorable to Newton. A more complete analysis of this episode can be found in my article on the 1919 eclipse.

 

This is a story that saddens me deeply, because Söldner was self-taught and, like me, did not receive a formal education, yet made one of the most important discoveries in history, 100 years before Einstein, a discovery that projected Einstein worldwide. Söldner was also the first to determine the value of the Ramanujan–Söldner constant (1.451369...), several decades before Ramanujan was born.

 

The controversy over whether Einstein was aware of Söldner's work when he published his first estimates, though a complex case, seems to have settled into the understanding that Einstein perhaps knew, but that, regardless of whether he knew, Söldner's result was a “lucky shot”, while Einstein's works represented a true revolution in Physics. This is true; Söldner merely made a naive calculation of an elementary hypothetical situation: if particles moving at 300,000 km/s passed near a gravitational field with an acceleration of 274 m/s², what would be the deflection angle of these particles? He did not even remotely consider all the philosophical and scientific implications this could have, if only because, before the Michelson–Morley experiments and before the Maxwell and Lorentz equations, there was no way to interpret this phenomenon the way Einstein interpreted it. It would be like the “black star” of Laplace and Michell, which anticipated the later concept of a black hole, but both considered only a naive situation in which light would fail to escape from an object whose mass and radius met certain criteria. Schwarzschild was the first to make a proper interpretation of the peculiarities that an object with these characteristics would have, and even if Schwarzschild knew of Michell's and Laplace's works on this subject, it would make no sense to accuse Schwarzschild of plagiarism. That is why the accusation of plagiarism against Einstein is unfounded. On the other hand, if Einstein really knew of this work by Söldner, it would have been elegant on Einstein's part to have cited Söldner; that would not have diminished Einstein's merits and would have given Söldner the credit that belonged to him.

 

“Futurists” such as Democritus, Aristarchus, Anaximander, Leonardo da Vinci and even Jules Verne and Isaac Asimov are recognized for anticipating, albeit in a very vague way, later discoveries. In Söldner's case, these were not vague predictions. He concretely calculated the deflection in 1801, arriving at the result of 0.84″, and Einstein himself arrived at basically the same result in 1911, so Söldner's merit in this case was greater than Democritus's in relation to the atom, and comparable to Aristarchus's in relation to heliocentrism, or even greater, because Aristarchus's heliocentric theory was in strong conflict with Aristotelian physics, which was the prevailing paradigm, while the corpuscular theory of light was much more plausible at the time Söldner suggested the deflection of light in the vicinity of an intense gravitational field. Therefore, while Söldner's merits are not comparable to Einstein's, they were remarkable enough for him to deserve at least recognition comparable to that of Aristarchus.

 

OK. But what does this episode involving Söldner and Einstein have to do with this estimate of the mass of the star Earendel? The point is that if Einstein knew Söldner's text in which the deviation had been calculated as 0.84″, and if Einstein believed the correct deviation should be smaller, then, considering that in 1911 there was still no analytical solution for these equations, the best Einstein could do was an estimate, and that estimate may have been heavily influenced by Söldner's earlier result. Suppose a TV show presents a spoon holding a certain number of beans, and two people have to guess that number. The first person guesses 84 beans. The second person does not need to get the exact value right; she only needs to get closer than the first. Therefore, she only needs to decide whether she thinks the correct value is more than 84 or less than 84. If she thinks it is 45, it makes no sense for her to answer 45, because in that case she would only win if the correct value were 64 or less. If she thinks it is 45, the smartest number to “guess” is 83, because then she wins for anything less than 84. That is basically what Einstein did. Until 1919, the vast majority of scientists did not think light would undergo such a deflection, but if there were one, and it was smaller than Söldner predicted, then the best guess Einstein could make would be 0.83″, because then any value below 0.84″ would be closer to his prediction than to the prediction based on Newtonian gravitation. As Einstein had not yet found an accurate and precise value for this deviation, a natural thought for him could be:

 

'''If the deviation were large, it would already have been measured, as the instruments of the time (~1910) already allowed measuring parallaxes smaller than 0.5″; so it is more likely that this deviation is smaller than 0.84″ than larger.'''

 

While this is just speculation about what Einstein might have thought, it is a rather plausible speculation. Reverse-engineering the episode, knowing that Einstein "guessed" 0.83″ and that this would be the best value to guess if he knew Söldner's 0.84″, the details seem to fit together nicely.
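A toy simulation of the bean-guessing analogy above makes the advantage of this strategy explicit. The distribution of the true count (uniform between 20 and 110) is an assumption of mine purely for illustration:

```python
import random

random.seed(1)

def win_rate(b_guess: int, a_guess: int = 84, trials: int = 100_000) -> float:
    """Fraction of trials in which the second guess beats the earlier guess."""
    wins = 0
    for _ in range(trials):
        true_count = random.randint(20, 110)  # assumed range of possible counts
        if abs(true_count - b_guess) < abs(true_count - a_guess):
            wins += 1
    return wins / trials

print(f"guess 45: wins ~{win_rate(45):.0%} of the time")
print(f"guess 83: wins ~{win_rate(83):.0%} of the time")
```

Under this assumed distribution, answering 83 wins whenever the true count is 83 or less, roughly 70% of the time, while answering 45 wins only about half the time.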
