To develop a conceptual model of a system, the conscious mind begins by linking simple concepts to form mathematical relations. For example, by realizing that the flow rate from a water tap depends on the number of tap turns, a mathematical model is developed. It is then possible to relate the volume of water collected to the time it takes to collect it, for a given number of tap turns. Relating a system’s variables to each other correctly is all that is needed to develop a mathematical model. This simple water-tap example could be extended to predict the flow rate of water through any pipe. To do that, the model must include all relevant parameters that affect water flow, including the pressure head, pipe diameter, length, and surface roughness.
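The tap example can be sketched in a few lines of code. This is only an illustration of how model variables are related; the linear flow-per-turn relation and all numbers are hypothetical, chosen for clarity rather than realism.

```python
# A minimal sketch of the water-tap model described above.
# The linear flow-per-turn relation and the numerical values are
# hypothetical; they only illustrate how variables relate in a model.

def flow_rate(tap_turns, litres_per_turn=2.0):
    """Flow rate (L/s), assumed proportional to the number of tap turns."""
    return litres_per_turn * tap_turns

def volume_collected(tap_turns, duration_s):
    """Volume (L) collected over duration_s seconds at a fixed tap setting."""
    return flow_rate(tap_turns) * duration_s

print(volume_collected(tap_turns=1.5, duration_s=10))  # 2.0 * 1.5 * 10 = 30.0
```

Once such a relation is written down, it can be inverted as needed, e.g. to find the duration required to collect a given volume at a given setting.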
Developing a mathematical model of a system does not demand knowledge of the nature of the parameters involved; it only requires establishing the correct quantitative relationships between them. For example, the nature of variables such as mass, energy, and electric charge, as well as that of the constants of physics and mathematics, remains unknown, yet the mathematical models of physics and engineering are based on combinations of those parameters and are used successfully to determine unknowns and make predictions. In fact, the use of arbitrary units to quantify such variables reflects our lack of knowledge of their true nature. Even the units referred to as absolute, such as temperature in kelvin, are only absolute insofar as their baseline is set at absolute zero; the unit sizes themselves are arbitrary.
Since our direct observation of the world tells us nothing about the nature of the physical quantities we deal with and quantify, it is very likely that we misunderstand some of the phenomena associated with them. Unless we reach conclusive knowledge of the nature of physical reality, we will not be in a position to confirm whether our interpretation of any phenomenon is correct. Although our interpretation of observations may give some weight to one theory over another, there is no guarantee that we are correct; hence the need for theorizing.
The fundamental reason behind the difficulties we encounter in determining the nature of physical reality is our inability to directly access its basic level, namely the quantum level. In fact, our ability to make direct observations is limited to a very narrow band in the spectrum of physical existence: we can directly observe neither the very small nor the very large. Fortunately, given the continuity of physical reality through all levels of existence, mathematical logic is consistent across all levels. We can therefore depend upon it as a tool to infer and theorize, for our only way to make sense of the world is by checking theoretical predictions against observations at the levels we can access. However, information obtained near the ends of the accessible range of existence is limited at best, making it difficult to develop full working models. Incomplete information has been the cause of the bewilderment of those attempting to understand the behaviour of quantum-level and galactic-level objects. In fact, incomplete information results in misunderstanding and misinterpretation of observations at all levels of existence. A classic example concerns the geometry of our planet.
According to historic sources, the concept of a spherical Earth goes back to ancient Greek philosophy. However, most people believed Earth to be flat, perhaps disc-like, with variable terrain and surrounded by endless oceans. That is how it would appear to any observer with a limited view of its true geometry. When the idea that Earth could be spherical was proposed, people must have thought it ludicrous. For starters, it would not have made sense that a spherical object would hold water on its external surface. In addition, people and everything else on the lower part of its surface would have been thought of as suspended upside down! Worse still was the idea that the planet could be spinning and rotating around the sun. That, of course, would have been much more difficult to believe, because people then could not have appreciated the effect of relative motion on such a scale. With the obvious daily motion of the sun across the sky, it would have been impossible to convince anyone that Earth’s motion produced that effect. Therefore, the argument that Earth was spherical, spinning, and rotating around the sun could not have been believed.
Although people were able to conceptualize Earth as a sphere, the model they had in mind was incorrect, because it was short on facts: it referred to a small sphere within Earth’s atmosphere. There were, in fact, two interrelated factors of which people were not aware at that time. One related to Earth’s surroundings, and the other to a concept known as the scale effect. Whereas Earth’s surroundings are open space, the surroundings of the model people had in mind were Earth’s atmosphere, in which liquids would not remain on the external surface of a sphere.
The second factor that people were not aware of, namely the scale effect, relates to the effect of the physical size of a system. Before the development of advanced computer simulation, engineers built physical models of the systems they intended to build, in order to test them and investigate their behaviour before committing costly resources to constructing full-scale systems. In some cases, they still do. When the testing involves a system’s interaction with a surrounding medium, a serious problem arises: the surrounding medium cannot be scaled down with the model. Whether it is air, water, or, as we shall explain, the fabric of space or any other medium, the interaction between a medium and the systems it supports involves energy transfer. As a result, the relationship between different-scale models of the same geometry in any medium is non-linear, so that a change in the test conditions of one model could produce a much more significant change in the behaviour of the same model at a different scale. For example, a 1:50 scale model of a boat would not behave in the same way as a 1:100 model of the same boat under the same loading conditions.
To resolve this problem, engineers construct a series of different-scale models of the desired design and investigate the effect of scale on each of the relevant parameters. The collected data are then used to develop relationships between the different-scale models for each parameter, which are then extrapolated to the full-scale system. The extrapolated values are used as factors, referred to as scale factors, by which a model’s parameters are multiplied to arrive at the correct value of the investigated parameter in the full-scale system.
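For the boat example above, the standard similarity law for surface vessels is Froude scaling, under which model speed scales with the square root of the length ratio and forces with its cube. The sketch below only evaluates those textbook ratios; it ignores viscous corrections and should be read as an illustration of why two different-scale models of the same boat cannot be tested under identical conditions, not as a complete scaling analysis.

```python
import math

# Froude similarity: standard scale factors relating a model test
# to the full-scale vessel (speed ~ sqrt(lambda), force ~ lambda^3).
# Viscous (Reynolds-number) corrections are deliberately ignored here.

def froude_scale_factors(scale):
    """scale: full-scale length / model length, e.g. 50 for a 1:50 model."""
    return {
        "length": scale,
        "speed": math.sqrt(scale),
        "time": math.sqrt(scale),
        "force": scale ** 3,
    }

f50 = froude_scale_factors(50)
f100 = froude_scale_factors(100)

# The same (hypothetical) full-scale speed corresponds to different
# model speeds at each scale, so the two models behave differently.
full_scale_speed = 10.0                 # m/s, hypothetical
print(full_scale_speed / f50["speed"])  # required 1:50 model speed
print(full_scale_speed / f100["speed"]) # required 1:100 model speed
```

In practice the measured model forces would be multiplied by the `force` factor (and corrected for viscous effects) to predict full-scale resistance.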
If we accept matter particles as dynamic structures forming from the constituents of a fluid medium, then many and perhaps all outstanding problems in physics could be resolved, aided by physical modelling. Discarding the existence of this seemingly undetectable medium has often resulted in ambiguous outcomes from the relevant mathematical models, which relate to objects ranging from charged particles to clusters of galaxies. For the former the results produce inexplicable infinities, and for the latter they have led to postulating the existence of dark matter and dark energy, each of which has produced its own set of ambiguities. One example where the interaction between matter and the fabric of space is obvious yet discarded relates to the development of stars.
At some advanced stage in a star’s life, the star explodes in what is known as a supernova. The accepted cause of such an event is a pressure build-up resulting from one of two possible mechanisms: the reigniting of nuclear fusion in a degenerate star, or the collapse of the core of a massive star. However, as we shall explain, neither mechanism could produce a supernova. Although both could result in a significant increase in core pressure, a star’s gravity would counteract any tendency for it to explode. Clearly, there is an unresolved paradox here between gravity as an attractive force that increases with mass and the eventual explosion attributed to increased mass! This paradox has, however, been mixed up with what is known as the Chandrasekhar limit.
The Chandrasekhar limit is the upper mass limit beyond which a star collapses under its own gravity, overcoming its electron degeneracy pressure in the process. Electron degeneracy pressure results from the quantum-mechanical effect, the Pauli exclusion principle, that keeps electrons apart. When a star collapses after reaching that mass limit, regardless of whether it reignites or condenses to a neutron star, it could only implode under gravity. There must therefore be another factor that comes into play, causing it to explode in those circumstances.
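For reference, the limit itself can be evaluated from the standard textbook expression M_Ch = ω·(√(3π)/2)·(ħc/G)^(3/2)/(μₑm_H)², where ω ≈ 2.018 is the Lane–Emden coefficient for a polytrope of index 3 and μₑ = 2 electrons per nucleon for a typical white-dwarf composition. The sketch below simply evaluates that formula with standard constants:

```python
import math

# Evaluating the standard Chandrasekhar mass expression:
#   M_Ch = omega * (sqrt(3*pi)/2) * (hbar*c/G)**1.5 / (mu_e*m_H)**2
hbar  = 1.0546e-34   # J s, reduced Planck constant
c     = 2.9979e8     # m/s, speed of light
G     = 6.6743e-11   # m^3 kg^-1 s^-2, gravitational constant
m_H   = 1.6735e-27   # kg, hydrogen atom mass
M_sun = 1.989e30     # kg, solar mass
mu_e  = 2.0          # electrons per nucleon (He/C/O composition)
omega = 2.018        # Lane-Emden coefficient, polytropic index n = 3

M_ch = (omega * (math.sqrt(3 * math.pi) / 2)
        * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2)
print(M_ch / M_sun)  # ~1.4 solar masses
```

The result, roughly 1.4 solar masses, is the familiar quoted value of the limit.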
If we accept the existence of a physical medium in space, we must concede that the behaviour of matter in space, like that of objects in a fluid, is subject to the scale effect. On this basis, the destabilizing factor that causes a star to explode must be related to the star’s level of interaction with the surrounding medium. The two relevant variables in the development of a star are the mass of the star and its spin speed. They are the only apparent parameters that change during the life of a star. In time, they increase simultaneously until the star explodes. In fact, the two must be directly related up to the point at which the star explodes, so that the greater the mass, the greater the spin speed. In any case, what leads to a supernova must be as follows:
As a star reaches a spin speed at which matter in its atmosphere begins to become detached from the surrounding fabric of space, the pressure in its immediate surroundings drops. This results from a subtle mechanism that takes place around a star when it reaches some critical spin speed: matter at its surface creates a layer of fabric-of-space elements with a purely tangential component of motion and no radial component whatsoever (for the definition of space, refer to my post ‘π in the sky’). Such a layer marks the formation of the event horizon of a black hole. If a fully functioning black hole subsequently develops around the star, the star loses all continuity with the fabric of space. Of course, the development of a black hole beyond an event horizon is a function of the mass of the star; however, it is also a function of the spin speed.
When an event horizon begins to develop and matter in the star collapses under intense gravity, the star shrinks, resulting in a significant increase in the negative pressure around it. Although matter collapse happens only at the centre, where pressure is at its maximum, the entire volume of the star shrinks. The negative pressure around the star is maintained by the event-horizon layer encapsulating the star, forming a shell-like structure. This causes the outer layers to pulsate, balancing gravity against the external negative pressure. Thus, the star undergoes a cyclical process of contracting and expanding. When the star contracts, it appears dim and may even disappear from view. It disappears from view if it loses all continuity with the fabric of space, in which case it becomes a black hole. However, if it does not lose all continuity with the fabric of space, then when it expands it bridges the low-density region of the fabric of space and thus appears brighter.
When the forces of gravity and the negative pressure around the star are at equilibrium, further collapse of matter disturbs the equilibrium by increasing the negative pressure, causing the star to explode. Since the explosion is not driven by internal pressure, the core remains intact but continues to pulsate. This core remnant is known as a neutron star, because overcoming the electron degeneracy pressure in the collapse of matter is believed to force the electrons into the protons, converting them to neutrons.
The collapse of an electron into the atomic nucleus causes a proton to become a neutron in a process referred to as electron capture, which also emits a neutrino that carries away energy from the process. Given the volume of matter involved in the collapse of a star, an enormous number of neutrinos must be produced in the process.
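The energy bookkeeping of electron capture can be checked directly from the particle rest masses (standard particle-data values; this is a back-of-envelope check of the reaction, not a model of the stellar environment):

```python
# Rest masses in MeV/c^2 (standard particle-data values).
m_p = 938.272   # proton
m_e = 0.511     # electron
m_n = 939.565   # neutron

# Electron capture: p + e- -> n + nu_e
# For free particles the product (neutron) is heavier than the
# reactants, so the capture proceeds only when the environment,
# here the enormous pressure of the collapsing core, supplies
# the energy deficit; the emitted neutrino carries energy away.
deficit = m_n - (m_p + m_e)
print(deficit)  # ~0.78 MeV must be supplied per capture
```

This is why electron capture on this scale occurs only under the extreme conditions of a collapsing stellar core, and why such collapses are accompanied by a burst of neutrinos.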
As is the case for any spherical object spinning in a fluid, the negative pressure around such a star would be at its maximum around the equator and at its minimum at the poles. This has implications for the visibility of the star to distant observers. Since the negative pressure is at its maximum at the equator, when the star contracts around that zone, it expands much more pronouncedly along the axis of spin. This causes the star to protrude abruptly and significantly into the fabric of space along that axis, generating a much more significant effect there than it does around the equator. That effect is detected by distant observers as radio waves and intense beams of high-energy photons, which explains the observed radio signals and light beams emitted by pulsars. Although such stars may appear to have one bright side and one dark side as they spin, which is what some observers believed, that is clearly incorrect, because no conditions could produce such an effect.
In conclusion, matter collapse in stars due to intense gravity is not the cause of supernovae; rather, it acts as a trigger and a catalyst in such events. If stars exploded purely as a result of internal pressure, their explosions should include their cores, where pressure is at its maximum. However, that is not what usually happens: only the outer layers are blown off, leaving the cores behind. More important is the conclusion that space in the universe must harbour a physical fluid medium in the form of discrete homogeneous elements.