Gravity and the standard model

The UP hypothesis, which is the subject of my book ‘Physical Reality – the fabric of space’, describes physical reality in terms of the behaviour of the fabric of space and the interaction of its constituents. The hypothesis defines the fabric of space as a medium of oscillating spherical and massless elements that give rise to matter particles as localized dynamic structures, with mass being the background vacuum exposed by the dynamics of the elements forming the particles. It defines energy as the motion of those elements relative to the observer and identifies two types of motion induced by matter particles in the surrounding medium— one is oscillatory and the other is uniform angular motion. Whilst we distinguish the former as thermal energy, the latter represents quantum fields rotating around the particles that induce them. Other types of motion of the elements are possible, but they are not produced by individual matter particles; rather, they are the result of the action of systems of forces.

Quantum fields are generated by the spin of the source particles, which is essentially the rotation of the structure formed by the elements of the fabric of space. The quantum field of a particle decreases in intensity with increased radial distance. When particles condense to form an object, their quantum fields merge, producing a much stronger field around the entire object, hence the relationship between mass and quantum-field intensity. As with a single particle, the speed of rotation, and hence the observed magnitude of such a field, drops with increased distance from the object. Consequently, an object crossing the field experiences acceleration as it nears the source object, hence the concept of the warping of space-time and acceleration due to gravity.

It is worth noting that whilst gravity is experienced as a result of crossing the quantum field, in this case considered a gravitational field, the other quantum forces, namely the nuclear, the electromagnetic and the weak forces, are much stronger than gravity because they emanate from the negative pressure of the exposed background vacuum (mass). Given the hypothesis’ definitions of the fabric of space, matter particles, mass and energy, it is easy to envisage the consequences of particle collisions in high-energy accelerators. Since any volume of background vacuum exposed through the fabric of space is essentially mass, any such volume must be considered a particle of some sort.

Consider the example of air bubbles in a liquid medium, say water, as an analogy for subatomic particle collisions. If the bubbles are forced to collide at some speed, the outcome could be that they break up into several small bubbles, that they form one large bubble, or that one crosses the other and both remain the same. The outcome depends upon the speed of collision, which reflects the energy level. However, where this example differs from the reality of the world of subatomic particles is that mass, as a void in the fabric of space, is under negative pressure. As such, it requires a stable structure to maintain it. If that structure collapses, the particles decay, appearing ultimately as an increased amplitude of oscillation of the surrounding elements of space— i.e., they appear as energy.

Based on this interpretation of the reality of matter, mass and energy, one can appreciate the endless range of particles that might appear in particle collisions as a result of varying collision energies. Those particles therefore cannot be elementary entities that somehow come together to form larger subatomic particles. They are the broken parts of otherwise stable elementary subatomic particles: the divided or merged masses of the original particles, like the air bubbles that break up or merge in collision.

Therefore, as particle collisions reach higher and higher energies, more and more particles will appear. However, except for the four stable particles, namely the proton and its antiparticle and the electron and its antiparticle, none of the other particles, not even the neutron, can remain stable outside the atom. In fact, the neutron’s structure can remain stable only inside the atom because of the action of the protons on either side of it. Unfortunately, details of the structural configuration of subatomic particles, atoms and molecules are beyond the scope of this post, but will be the subject of future posts.

Clearly, the difference between the two types of particles in the standard model, known as force carriers (bosons) and matter particles (fermions), is that the former have a simple structure maintaining their mass, and as such they collapse on encountering matter particles. In the process, they cause the collapse or partial collapse (decay) of matter particles. Bosons are also referred to as carriers of the weak force. In contrast, the carrier of the nuclear or strong force, namely the gluon, is protected by a stable structure in the atomic nucleus, and as such it is much harder to collapse. Photons, which are considered carriers of the electromagnetic force, have no structure whatever. They are effectively three-dimensional solitons that transfer their momentum to the objects on which they collapse, then rebound and continue to move on at the same constant speed, namely the speed of light!

Quantum Entropy

This post follows on from my previous one, Systemic Behaviour of Matter Particles, which was based on a hypothesis that describes physical reality in terms of the behaviour of the fabric of space and the interaction of its constituents. It defines the fabric of space as a medium of oscillating spherical and massless elements that give rise to matter particles as dynamic structures in spin motion, and it defines energy as the motion of those elements. Thus, a stable matter particle maintains spin and a quantum field around itself in the form of elements of space rotating around it in the direction of spin. In this post, I shall continue to explore those particles as thermodynamic systems and investigate their compliance with the laws of thermodynamics.

In the previous post, I concluded that a matter particle is a closed system, because energy crosses its boundaries in the form of emitted and absorbed photons. This confirms that it complies with the first law, which states that a closed system requires energy input to undergo a process. However, for a particle to remain active as a stable system, and not decay, it must be continually receiving energy from the surroundings at some rate and using it to do work, or losing it to the surroundings in some other way, at the same rate, because any imbalance between energy input and output causes the state of the system (particle) to alter and eventually break down (decay). Reflecting on the effects of matter in space, as I shall explain later, it is easy to draw an analogy with the cylinder-piston system discussed in the previous post— shown here in the sketch, a particle of matter, as a closed system, is continually doing work and losing thermal energy to the surroundings. The important point to note here is that a matter particle complies with the first law on the evidence of photon absorption and emission.
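The first-law bookkeeping described above can be sketched numerically. The following is a minimal illustration of the standard balance for a closed system; the function name and all figures are made up for the example and are not values taken from the hypothesis.

```python
# First law for a closed system: dU = Q_in - W_out - Q_loss.
# A stable particle, as described above, balances input and output,
# so the change in its internal energy over a cycle is zero.
# All numbers are illustrative.

def internal_energy_change(q_in, w_out, q_loss):
    """Change in internal energy: heat in minus work done minus heat lost."""
    return q_in - w_out - q_loss

# A balanced system: energy input equals work done plus losses.
delta_u = internal_energy_change(q_in=10.0, w_out=6.0, q_loss=4.0)
print(delta_u)  # 0.0 -> the state of the system is unchanged

# An imbalance alters the state of the system (the particle decays).
delta_u_unbalanced = internal_energy_change(q_in=10.0, w_out=8.0, q_loss=4.0)
print(delta_u_unbalanced)  # -2.0
```

The point of the sketch is only that equal rates of input and output leave the state unchanged, which is the condition the text places on a stable particle.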

The zeroth law, referred to as such because it was introduced after the first law but recognized as more fundamental, states that any two systems in thermal equilibrium with a third are in thermal equilibrium with one another. This law forms the basis of temperature measurement and, in view of the hypothesis referred to above, it suggests a physical relationship between a matter particle and its quantum field. That relationship can be inferred from the transfer of thermal energy between matter systems in space over time. Since systems reach thermal equilibrium through the propagation and merging of their quantum fields, it follows that those fields represent the propagation of excitation through the fabric of space induced by the particles’ dynamics. Therefore, thermal equilibrium between particles represents a state of uniform excitation of the fabric of space, but more importantly it confirms that a particle cannot reach thermal equilibrium with its own quantum field. According to the second law, for that to happen the particle would have to decay, because a state of thermal equilibrium demands that the distribution of the particle’s components be the same as that of the surrounding field.

The second law tells us that the energy of an isolated system tends to even out in time. In other words, its components tend to a state of thermal equilibrium, which is a state of maximum entropy. This confirms that a stable matter particle cannot be an isolated system, for if it were, its energy would even out soon after it came into existence and it would then cease to function as a particle, i.e., it would decay and vanish. It is interesting to note that whilst the second law refers to the cessation of processes at thermal equilibrium regardless of temperature, the third law refers to the cessation of all processes at absolute zero, which must include the processes that maintain subatomic particle systems. One statement of the third law says “all processes cease at absolute-zero temperature”. Since the two laws refer to the cessation of processes, they are essentially referring to systems reaching maximum entropy. The subtle difference is that the former relates to processes of systems at any level of complexity, while the latter relates specifically to the processes that maintain elementary matter particles as systems.
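The second-law tendency toward equilibrium can be illustrated with a standard textbook calculation: two bodies in an isolated system exchange heat until their temperatures even out, and the total entropy change is positive. The heat capacities and temperatures below are illustrative assumptions, not quantities from the hypothesis.

```python
# Two bodies in an isolated system exchange heat until their temperatures
# even out; the total entropy change of the pair is positive, as the
# second law requires.  Heat capacities and temperatures are illustrative.

import math

def final_temperature(t1, t2, c1=1.0, c2=1.0):
    """Equilibrium temperature of two bodies with heat capacities c1, c2."""
    return (c1 * t1 + c2 * t2) / (c1 + c2)

def entropy_change(t_initial, t_final, c=1.0):
    """dS = C * ln(T_final / T_initial) for a body of constant heat capacity."""
    return c * math.log(t_final / t_initial)

t_hot, t_cold = 400.0, 200.0                # kelvin
t_eq = final_temperature(t_hot, t_cold)     # 300.0 K with equal capacities
total_ds = entropy_change(t_hot, t_eq) + entropy_change(t_cold, t_eq)
print(t_eq, total_ds)  # total entropy change is positive
```

The hot body loses entropy, the cold body gains more than the hot body loses, and the sum is positive: energy "evening out" and entropy increase are two descriptions of the same process.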

Despite the notion of zero-point energy[1], absolute zero signifies the absence of thermal energy. Although physicists would dispute this, it is appropriate to point out a conflict between the definition of zero-point energy and the third law of thermodynamics. The conflict arises because the current interpretation of zero-point energy allows for some residual energy in matter at absolute zero. However, since the third law asserts that all processes cease at that temperature, there can be no residual energy at absolute zero.[2] In fact, matter itself could not exist at that temperature. This apparent conflict may not be so significant in itself, but it points to a clear contradiction in the definition of entropy. Later in this post, I shall point to the reason behind this conflict, which relates to a misunderstanding of the difference between the lack of temperature and absolute-zero temperature.

Another statement of the third law says “The entropy of a perfect crystal is zero when the temperature of the crystal is equal to absolute zero.” The current understanding of this statement is that the crystal must be perfect, because any disorder means the existence of entropy. However, this implies that temperature is somehow independent of the state of order of a system, which is not the case, as the relevant mathematics clearly shows: the units of entropy are those of energy divided by temperature. In a future post, I shall explain how order, or the lack of it, with respect to entropy relates not to the crystal’s matter particles but to the elements of the fabric of space forming those particles, as a form of disorder in the otherwise ‘ordered’ fabric of space. As such, the mere existence of matter demands the existence of a temperature difference between it, as a state of disorder, and the surrounding fabric of space as an ‘ordered’ medium. In effect, wherever matter is present, the existence of entropy is inevitable.

Absolute entropy is defined as a state at which the entropy is zero, reached when matter is at absolute zero. However, this cannot be the case even if it were possible for matter to reach absolute zero, because according to the third law a crystal at absolute zero is at thermal equilibrium, which is a state at which no process can take place and must therefore be a state of maximum entropy, not minimum entropy. In fact, the relevant but gravely misinterpreted mathematics shows that absolute entropy is a state of infinite entropy. For an adiabatic process[3] of an ideal gas, entropy is directly related to temperature until the temperature reaches absolute zero. At absolute zero, the entropy of the system becomes infinite— i.e., indefinable, as one of the terms in the equation becomes infinite.[4] This is interpreted as signifying the inapplicability of the equation at absolute zero and is used as an argument that matter cannot reach absolute zero. The basis of that interpretation is that at absolute zero the change in the volume of the system becomes negative infinity, and therefore the equation is not valid. However, the change in volume is insignificant and for all intents and purposes could be considered zero, not negative infinity. Therefore, although the equation confirms that matter cannot reach absolute-zero temperature, it is applicable, and it shows that at absolute zero the entropy of the system is infinite, meaning indefinable, but not zero.

The impossibility of matter reaching absolute zero suggests that stable matter particles are systems whose dynamics can only be terminated when they decay.[5] Such structures can be destabilized and forced to collapse, and if excited to high levels of energy they may cause the development of new particles from the surroundings. For example, particles decay in high-energy collisions, or when they encounter their antiparticles— i.e., particles of the same structure but opposite angular momentum. In the former case, new particles may appear in the surroundings.

We can conclude that while the entropy of a system at the classical level is subject to the first and second laws, the entropy of its quantum-level components is subject to the third law, with temperature at absolute zero signifying an indefinable state of entropy. The incorporation of quantum-level entropy in the total entropy of a system is reflected in the appearance of temperature in the relevant mathematics and in the units of measurement of entropy (joules/kelvin)[6], which are those of energy per unit temperature.

Heat and Temperature

The hypothesis referred to above defines thermal energy as the oscillatory motion of the elements of the fabric of space, and it defines temperature as a measure of their amplitude of oscillation, which is the maximum distance between the elements as they oscillate. Therefore, isothermal processes— i.e., processes in which volumetric changes take place at constant temperature, signify the geometric reconfiguration of the structure of matter particles at a constant amplitude of oscillation of those elements. Although temperature is related to the level of excitation of matter particles, it is essentially a measure of the amplitude of oscillation of the elements of the fabric of space in the quantum field of the particles. This can be inferred from three phenomena.

The first is the volumetric change of matter that accompanies changes in temperature, other than in some instances that reflect geometric reconfiguration of the particles. The second is that molecular matter particles of different sizes oscillate with different frequencies and amplitudes, depending upon their mass and crystal structure, yet they presumably coexist at thermal equilibrium. This indicates that matter particles are rigid structures and that their excitation is controlled by their quantum fields. The third is inferred from the existence of the cosmic microwave background radiation (CMBR), which is essentially a measure of the temperature of space and therefore of the level of excitation of the fabric of space.

Quantum Work

Applying the first law to a particle as a closed system demands that the particle is receiving energy and, like the cylinder-piston system referred to above, the particle must also be doing work and losing heat to the surroundings. Furthermore, for a particle to remain an active system, the surroundings must be reciprocating the work done on them by the particle. In the case of the cylinder-piston, the piston returns to its initial position when thermal equilibrium with the surroundings is reached. In the absence of thermal equilibrium, a pressure difference between the system and the surroundings causes one to do work on the other, and thus the arrangement remains active as a system. Therefore, for a matter particle to remain an active system— i.e., a particle, thermal energy must be continually fluctuating across its structure, maintaining its perpetual interaction with the surroundings in the form of fluctuating temperature and pressure in the particle’s locality. At this stage, it suffices to say that energy and work fluctuate across a particle in much the way a pendulum remains active under the effect of gravity. Therefore, I shall refrain from considering the work done by the surroundings on the particle at this stage.

We know that matter induces gravitational and magnetic fields and emits electromagnetic waves. If we accept those phenomena to be effects resulting from matter’s interaction with the surroundings, then gravitational and magnetic fields represent quantum work done by matter on the surroundings, while the electromagnetic waves, in the form of absorbed and emitted photons, represent thermal energy input and energy losses to the surroundings. At this juncture, it is important to clarify the concepts presented here, for they appear contrary to some fundamental laws of physics.

The law of conservation of energy says “energy can neither be created nor destroyed in an isolated system”. The current understanding is that photons are energy quanta that have been in existence since the moment the universe came into existence in one cosmic explosion— the big bang, and that they have since been travelling at the speed of light across space in all directions, absorbed and emitted by matter particles! Of course, in view of the conclusive evidence of the expansion of the universe and its accelerating rate, that cannot be the case; if it were, the density of matter and energy in space would not be constant in time. The proposal here, based on the hypothesis referred to above, is that matter particles are closed (not isolated) systems and that, as well as absorbing photons, they create new ones as they interact with the surroundings.

The rate of photon creation is a function of temperature up to some energy level, beyond which the particles themselves would decay. In other words, the mechanics of the elements of the fabric of space forming matter particles generate photons with a frequency and intensity directly proportional to their level of excitation, which is itself subject to a limit beyond which molecular matter particles decay. In particle physics, that represents the level at which the electromagnetic and the weak forces are unified as the electroweak force. In addition to creating photons, matter particles result in the creation of more matter and antimatter in space. The process by which matter is created is beyond the scope of this post. Here, suffice it to say that the observed expansion of the universe is a consequence of matter and energy creation, and that the rate of matter creation is a function of the amount of matter in existence, hence the accelerating rate of expansion.

The idea that the fabric of space absorbs thermal energy losses from matter means that the temperature of space must be continually on the increase. On the face of it, this may appear to be the case, but on reflection one could argue to the contrary: rather than heating the surroundings, matter particles cause them to cool, because their energy input comes from the surroundings. The second law maintains that heat flows from high temperature to low temperature; therefore, for matter particles to be receiving energy from the surroundings, their temperature must always be below that of the surroundings. This, however, poses an obvious and important question: how could a system continuously receive energy from its surroundings, which it then puts back into them in the form of work done and heat losses? This can only mean that matter particles are perpetual motion machines,[7] and that their constantly lower temperature is dictated by the nature of their structural configuration.

The hypothesis referred to above defines the mass of a particle as the background vacuum, which is exposed through the fabric of space by the mechanics of the elements of the fabric of space forming the particle. This is the same as string theory’s definition of mass, though string theory gives no details of the mechanics of the strings forming the particles. Furthermore, the hypothesis maintains that in the absence of matter, the fabric of space is at neutral pressure, and that the pressure distribution around a matter particle is such that the mass at the centre is under negative pressure and the elements forming the shell structure surrounding it are under positive pressure. Therefore, other than through its effect on the surroundings, e.g., a suction force, mass has no physical characteristics whatever, and therefore it has no temperature. It is effectively a singularity in the otherwise continuous fabric of space. It is important to note that this lack of temperature does not mean mass is at absolute zero, because absolute zero is the existence of the fabric of space with zero amplitude of oscillation. This distinction between the lack of temperature in mass and matter at zero temperature explains the apparent conflict between the third law and absolute entropy, which I referred to above.

Although the hypothesis defines matter particles as perpetual motion machines, we shall continue to regard them as closed systems until we explain their exact mechanics in a future post. To complete our investigation of quantum entropy here, it is appropriate to explain the physical relationship between temperature and pressure in terms of the proposed fabric of space.

Temperature and Pressure

If we accept that stable matter particles increase the thermal energy in space, we need to explain the existence of cold matter, e.g. planets and remnants of exploded stars. To do that, we should have a full understanding of the particles’ inner mechanics and the processes they undergo to remain active systems. So far, no theory describes the inner mechanics of particles. In fact, the difficulties encountered in trying to explain the interaction between particles in the atom, let alone a particle’s own inner mechanics, led to the development of the current quantum theory with all its apparent ambiguities. And although the hypothesis referred to above outlines the development, structure and mechanics of subatomic particles, its details have not yet been presented, and therefore we cannot benefit from them here. As such, to understand the role of the fabric of space in the relationship between temperature and pressure at this stage, we can only investigate the effect of particle interaction on the fabric of space and then relate that to the temperature changes in matter.

We know that the condensing of matter, be it due to applied external forces or due to gravity, intensifies the pressure, which results in increased temperature. The reason is well understood and simple: it is basically the result of particle agitation. Why and how particles become agitated when condensed, however, has not been explained by any current theory in physical terms. In light of the proposed hypothesis, a conceptual model of what takes place can readily answer these questions. By relating a particle’s electric polarity to its spin direction and defining its quantum field as the angular motion of the elements of space caused by the spin of the particle in its immediate surroundings, we can begin to understand the cause of the agitation. It basically means that any two particles having the same spin direction would cause the elements in their respective quantum fields to clash as the particles are forced closer to one another. The shorter the distance between them, the greater the depth of merging of their quantum fields, and hence the greater the level of agitation experienced by the elements in those fields and consequently by the particles themselves.

The resulting excitation produces increased amplitude of oscillation of the elements in the fields and the particles alike. However, unlike the particles, which are constrained by physical boundaries or by gravity, the elements of space are not confined by any such restrictions. In effect, the increased amplitude of the elements reduces their number and exposes a greater background vacuum, which allows room for more particles to oscillate at higher amplitudes in the same volume. This condensing of particles leads to increased agitation and acceleration of the particles due to the increased repulsive force between them, which translates to increased pressure. Therefore, forcing matter particles into a smaller volume results in increased density of matter particles and reduced density of the fabric of space between them. This explains the increase in temperature as an increase in the amplitude of oscillation of the elements of space, and the associated increase in pressure as an increase in the repulsive force between the particles in a given volume of space.

To increase the temperature of molecular matter other than by increasing the pressure requires the introduction of elements of space with a higher amplitude of oscillation into a given volume of particles. Such elements can be introduced directly in the form of high-energy electromagnetic radiation or highly excited matter particles. It should be noted that nuclear reactions increase the amplitude of oscillation of the elements of space in the particles’ surroundings due to the collapse and merging of the particles’ mass, i.e., the background vacuum exposed within those particles.

The relationship between temperature and pressure in terms of the behaviour of the fabric of space explains the higher temperature of a star’s atmosphere relative to its surface. Particles of matter in the atmosphere of a star are less constrained by gravity than those bound to the surface by the greater gravitational effect. Thus, the number of particles per unit volume is much lower in the atmosphere, which enables a greater amplitude of oscillation of the elements of space between the particles. In effect, the density of both matter particles and elements of space decreases. Thus, whilst the registered pressure is lower, the temperature, which is a measure of the amplitude of the elements of space, can be much higher.

Having established the relationship between temperature and pressure in physical terms, we can now turn to the question of why there is cold matter in the universe despite matter’s continual production of thermal energy in space. So, let us consider an object formed from the scattered matter of an exploding star.

When such an object is displaced through space following the explosion, the matter particles within it are under high pressure, and the elements of the fabric of space contained within it have very high amplitudes. As the object is displaced through space, where the elements have very low amplitude, thermal equilibration begins to take place from the surface inwards. In physical terms, that equilibration reflects the interaction between the high-amplitude elements within the hot object and the low-amplitude elements in the surroundings. In effect, the amplitudes begin to even out. Thus, the pressure imposed on the object prior to the explosion begins to be alleviated, and the matter particles begin to readjust their proximity to one another. As the amplitude of the elements of space decreases, their density increases. In time, the increase in density of the elements of space between the particles propagates through the object from the surface towards the centre. It should be noted that the bond between matter particles is a function of the density of the elements of space between them: the greater the density, the greater the bond between the particles.

This cooling process does not continue indefinitely, for eventually heating takes over and the process of star development begins. In time, due to the ongoing magnetic and gravitational activities, which result in matter accretion and creation, the mass of the object, and consequently its gravity, result in increasing its temperature. Thus, the once cooled-down object develops to end up as a new hot star as nuclear reactions recommence due to increased temperature and pressure.

[1] In this context, zero-point energy is defined as the lowest possible energy level (ground state) that a matter system reaches at absolute zero temperature. It also refers to the vacuum energy.

[2] Einstein and Stern’s 1913 paper gives the equation for the residual energy of an oscillator at absolute zero, Є = hν/(e^(hν/kT) − 1) + hν/2, where h is Planck’s constant, ν is the frequency, k is Boltzmann’s constant and T is the absolute temperature. The interpretation of this equation is that at [T = 0] the first term becomes zero and the residual energy in the system equals [hν/2]. However, defining thermal energy as the frequency ν, and the amplitude of oscillation as the temperature T, means that when the amplitude (temperature, T) is zero, the frequency (thermal energy, ν) is also zero, and consequently the residual energy Є is zero. It is worth noting that Einstein and Stern’s results were based on experimental data; given the minute energy level of an oscillator close to absolute zero, their conclusion, and hence their interpretation of the above equation, is unreliable.
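The quoted oscillator formula can be evaluated numerically to show the behaviour under discussion: as T approaches zero, the thermal term vanishes and only the disputed zero-point term hν/2 remains. This is an illustrative sketch using standard CODATA constants; the frequency chosen is arbitrary.

```python
# Mean energy of an oscillator per the form quoted in the footnote:
# eps = h*nu / (exp(h*nu / (k*T)) - 1) + h*nu / 2.
# As T -> 0 the first (thermal) term vanishes, leaving h*nu/2.

import math

H = 6.62607015e-34   # Planck constant, J s
K = 1.380649e-23     # Boltzmann constant, J/K

def oscillator_energy(nu, t):
    """Mean oscillator energy at frequency nu (Hz) and temperature t (K)."""
    hv = H * nu
    if t <= 0:
        thermal = 0.0  # the thermal term vanishes in the T -> 0 limit
    else:
        x = hv / (K * t)
        # guard against overflow of exp() at very low temperatures,
        # where the thermal term is negligibly small anyway
        thermal = hv / math.expm1(x) if x < 700 else 0.0
    return thermal + hv / 2.0

nu = 1.0e13  # an infrared-range frequency, chosen purely for illustration
print(oscillator_energy(nu, 300.0))  # thermal term plus zero-point term
print(oscillator_energy(nu, 0.0))    # only the zero-point term h*nu/2 remains
```

`math.expm1(x)` computes e^x − 1 accurately; the guard at x ≥ 700 simply avoids floating-point overflow where the thermal term is already negligible.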

[3] An adiabatic process is one that takes place without the transfer of thermal energy between a system and its surroundings.

[4] To take a system from T1 to a lower temperature T2 in an adiabatic process, the equation referred to is ∆S = Cv ln (T2/T1) + R ln (V2/V1) ≥ 0. If T2 = 0, the term [Cv ln (T2/T1)] equals minus infinity, in which case the second term [R ln (V2/V1)] must equal infinity. This result is dismissed as impossible. However, it indicates that in the absence of matter particles, entropy in the fabric of space is at a maximum, i.e. infinite, because if matter could exist at absolute zero, then at [T2 = 0], [V2 = V1] and [ln (V2/V1) = 0]. This result should not be confused with the case of entropy in the absence of the fabric of space itself, because in the absence of the fabric of space, neither entropy nor any other physical property exists.
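The divergence this footnote describes can be checked numerically. The sketch below evaluates ∆S = Cv ln (T2/T1) + R ln (V2/V1) for one mole of a monatomic ideal gas (an illustrative assumption, with Cv = 3R/2): as T2 approaches zero at fixed volume, the first term grows without bound in the negative direction.

```python
# Entropy change of an ideal gas per the footnote's equation:
# dS = Cv*ln(T2/T1) + R*ln(V2/V1).
# As T2 -> 0 the Cv term diverges to minus infinity, which is the
# divergence discussed above.  Values are illustrative (one mole of a
# monatomic ideal gas, Cv = 3R/2).

import math

R = 8.314        # gas constant, J/(mol K)
CV = 1.5 * R     # molar heat capacity at constant volume, monatomic gas

def entropy_change(t1, t2, v1, v2):
    """dS = Cv ln(T2/T1) + R ln(V2/V1), per mole."""
    return CV * math.log(t2 / t1) + R * math.log(v2 / v1)

# Cooling at constant volume: dS is negative.
print(entropy_change(300.0, 150.0, 1.0, 1.0))
# As T2 approaches zero, the Cv term grows without bound (negatively).
print(entropy_change(300.0, 1e-9, 1.0, 1.0))
# Isothermal expansion to double the volume: dS = R ln 2 per mole.
print(entropy_change(300.0, 300.0, 1.0, 2.0))
```

The last line also confirms the volume term in isolation: at constant temperature, doubling the volume gives ∆S = R ln 2 per mole.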

[5] To extract heat from a system, the system is treated as a heat source. In the process, work must be done on it to transfer the heat to a heat sink. If the temperature of the source equals that of the sink, then according to the zeroth law the two are in thermal equilibrium, and there can be no heat flow from the source to the sink. To lower the temperature of the source to absolute zero, the temperature of the sink must be below absolute zero, which means it must have a negative value. However, being a measure of energy, temperature cannot be negative.

[6] The entropy (S) of a system is given by [S = Q/T], where Q is the heat and T is the absolute temperature. The entropy change in a process, dS, is given by [dS = (δQ/Tc) − (δQ/TH)], where δQ is the heat transferred to the system, TH is the temperature of the heat source and Tc that of the heat sink, namely the surroundings. Therefore, the unit of measurement of entropy is the joule/kelvin, which represents a measure of energy per unit temperature.
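The entropy-change expression above can be illustrated with a quick calculation: for heat Q flowing from a source at TH to a sink at Tc, dS = Q/Tc − Q/TH is positive whenever TH > Tc, consistent with the second law. The numbers below are illustrative.

```python
# Entropy generated when heat Q flows from a hot source at TH to a cold
# sink at Tc, per the footnote: dS = Q/Tc - Q/TH.
# For TH > Tc this is positive; at TH = Tc it is zero (equilibrium,
# no net flow).  Numbers are illustrative.

def entropy_generated(q, t_hot, t_cold):
    """dS = Q/Tc - Q/TH for heat Q (J) flowing from t_hot to t_cold (K)."""
    return q / t_cold - q / t_hot

print(entropy_generated(100.0, 400.0, 300.0))  # positive: allowed by the second law
print(entropy_generated(100.0, 300.0, 300.0))  # zero: thermal equilibrium
```

Reversing the direction (heat flowing cold to hot) would make dS negative, which is exactly what the second law forbids.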

[7] A perpetual motion machine is a hypothetical machine that works indefinitely without any energy input. Such a machine violates the first and the second laws of thermodynamics.

Systemic Behaviour of Matter Particles

This post is based on a hypothesis which describes physical reality in terms of the behaviour of the fabric of space and the interaction of its constituents. It is the subject of a book titled ‘Physical Reality – the fabric of space’. The hypothesis defines the fabric of space as a physical medium of discrete spherical elements permeating all space and oscillating at an invariable period of Planck time; the diameter of an element is the Planck length. As such, their frequency is constant, and their amplitude of oscillation, which is independent of that frequency, reflects temperature. The hypothesis defines energy as the motion of the elements of the fabric of space, be it oscillatory or curvilinear, and it defines matter particles as dynamic structures that form from those elements. Thus, quantum fields reflect the behaviour of the elements of the fabric of space in the immediate surroundings of the particles, resulting from the particles’ interaction with the fabric of space. The hypothesis defines all other properties of matter particles, including electric charge and quantum spin number, in terms of the mechanics of the elements of space forming the particles. However, it defines mass as the exposed background vacuum.

Considering individual matter particles as thermodynamic systems may seem a far-fetched idea, mainly because the structure of subatomic particles has remained ambiguous and detached from the fabric of space with which it interacts. Based on the proposed hypothesis, it will become clear that subatomic particles are essentially systems whose inner workings are governed by the laws of thermodynamics. However, before I appeal to those laws to define matter particles as thermodynamic systems, it is appropriate to define what is meant by a system and outline the different types of thermodynamic systems.

A system is a set of interactive components within common boundaries performing a task in surroundings that none of the components can perform independently.

In thermodynamics, a system is classified as one of three types:

  1. Open, if both matter and energy can cross the boundaries of the system.
  2. Closed, if only energy can cross the boundaries.
  3. Isolated, if matter and energy cannot cross the boundaries.
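
The three types can be captured in a small sketch; the class and function names below are my own, not standard thermodynamics vocabulary:

```python
from dataclasses import dataclass

@dataclass
class Boundary:
    passes_matter: bool
    passes_energy: bool

def classify(b: Boundary) -> str:
    """Classify a thermodynamic system by what its boundary admits."""
    if b.passes_matter and b.passes_energy:
        return "open"
    if b.passes_energy:
        return "closed"
    if not b.passes_matter:
        return "isolated"
    # Matter crossing without energy is not a recognized type, since
    # matter necessarily carries energy with it.
    return "unphysical"

print(classify(Boundary(True, True)))    # open
print(classify(Boundary(False, True)))   # closed
print(classify(Boundary(False, False)))  # isolated
```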

The behaviour of a system depends as much on its inner workings as it does on conditions in its surroundings.[1] As conditions change, a system may reach a state whereby it can no longer function. Therefore, considering matter particles as systems raises two questions. First, what type of system are they? Second, assuming matter particles are affected by conditions in the surroundings, what are the limiting conditions that can cause them to break down and cease to function as matter particles?

At the level of individual subatomic particles, matter cannot be considered an open system: according to the Pauli exclusion principle of quantum mechanics, identical matter particles cannot occupy the same state simultaneously, and therefore they do not cross one another’s boundaries.

To determine which type they are, we need to investigate their relationship with energy. If energy crosses their boundaries, then in line with the above definitions of thermodynamic systems, they must be closed systems.

Considering matter’s interaction with light, we need dig no further in our investigation: matter particles clearly emit and absorb photons, which are quantized forms of energy. On the merit of this evidence alone, matter particles appear to be closed systems. To confirm this, consider an object in space as a collection of matter particles, e.g. our planet. As a system, we can define Earth’s boundary as the outermost layer of the atmosphere. If we ignore the slight increase in its mass due to the fall of matter from space, the planet constitutes a closed system in which the energy entering from the sun equals the energy dissipated into space. The greenhouse effect, which is hotly debated, concerns the entrapment of energy in the system, causing a net increase in internal energy. The objective here is not to contribute to that debate but to emphasize that matter represents a closed system, and as such its interaction with the surroundings is limited to a range of conditions beyond which its state begins to change.

It is important to note that unless there is equilibrium between energy input and output, the state of matter will continue to alter. At the classical level that change is reflected in phase change; at the quantum level, the level of subatomic particles, it is reflected in the degree of particle excitation, leading to instability and ultimately decay. Therefore, if energy input into molecular matter continues to exceed the output, change will continue until all particles break up into their elementary constituents because of increased excitation. Any further increase in energy input destabilizes the elementary particles, causing them to decay to energy, as is evident from high-energy particle-accelerator collisions.

However, if energy output from matter continues to exceed input, a stage will be reached at which matter is at its lowest possible energy level. At that level its interaction with the surroundings will be at absolute minimum, and no amount of work could extract the remaining energy, which is referred to as zero-point energy.[2]

Since the definition of matter as a closed system appears to apply to subatomic particles as well as molecular matter, we can generalize it to include matter in all its phases. We can now refine our earlier definition of a system to apply specifically to a closed system, thus:

A closed system is a set of interactive components within common boundaries performing a task in surroundings that none of them can perform independently. In the process, energy and work continually cross the system boundaries in opposite directions.

Based on this definition, a closed system is clearly characterized by fluctuating energy levels. Since energy and work continually cross the boundaries of a closed system, energy fluctuations in the system must be mirrored in the surroundings. The extent of energy fluctuation a system can cope with reflects the range of stable states within which it can function, so that the wider the range, the greater the adaptability of the system to changes.

System stability and adaptability lead us to the concepts of entropy, irreversibility, and chaos, which are fundamental to understanding systemic behaviour in thermodynamics. I shall now explain entropy and then relate it to the space fabric described above. In a future post, I shall consider the entropy of individual subatomic particles, and in a subsequent one, I shall explain the reason behind the low entropy of the universe.


In thermodynamics, entropy is a measure of the irreversibility of processes.[3] It reflects the extent of utilization of useful or usable energy[4] in a system. In a closed system, it is a function of temperature difference between the system and its surroundings. However, in an isolated system, it is a function of temperature difference between different parts of the system. Entropy is therefore associated with thermal equilibrium.

In statistical mechanics, entropy reflects the statistical distribution of a system’s microscopic components. The common ground between the two definitions is that both are indicative of the kinetic energy level of the microscopic components of a system. The distribution of the micro components reflects their level of excitation, which in turn reflects their kinetic energy. Therefore, the drive towards uniform energy across a system, or between a system and its surroundings, is essentially a drive toward a state of uniform distribution of the microscopic components.
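
The statistical reading can be illustrated with the Gibbs formula S = −k Σ p ln p (with Boltzmann’s constant k set to 1 for simplicity): a distribution concentrated in one region has lower entropy than an evened-out one. The sample distributions below are mine, chosen for illustration:

```python
import math

def gibbs_entropy(probs, k=1.0):
    """Statistical (Gibbs) entropy S = -k * sum(p ln p) of a distribution
    over microstates; k is Boltzmann's constant, set to 1 here."""
    return -k * sum(p * math.log(p) for p in probs if p > 0)

peaked  = [0.7, 0.1, 0.1, 0.1]       # energy concentrated in one region
uniform = [0.25, 0.25, 0.25, 0.25]   # evened-out distribution

print(gibbs_entropy(peaked))   # lower entropy
print(gibbs_entropy(uniform))  # higher: the uniform distribution maximizes entropy
```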

When the statistical distribution of the microscopic components of a closed system evens out with that of the surroundings, the system reaches maximum entropy, a point at which energy transfer across the system’s boundaries has no definite direction, and as such no further work can be done by the system on the surroundings, or vice versa. Although a system’s microscopic components are essentially matter particles, the existence of a physical space fabric of discrete elements demands that those elements form part of that distribution, as I shall explain.

A typical example of a process that explains the concept of entropy of a closed system is that of the cylinder-piston arrangement shown in fig. 2.1. The three different positions of the piston in the figure describe a complete cycle. In position one, conditions in the system and the surroundings are the same—i.e., the distribution of the microscopic components of the system and surroundings, namely, air inside and outside the cylinder is the same. That reflects thermal equilibrium between the system and the surroundings. Therefore, no work can be done by the system or the surroundings. The entropy of the system is at maximum.


In position two, work is done on the system by placing a weight on the piston. This results in a small displacement of the piston and a transfer of energy to the system in the form of work, which lowers its entropy. The air in the system is now slightly denser than that of the surroundings, so its microscopic components have higher kinetic energy, giving the system the potential to do work on the surroundings. Compressing the air prior to the final stage increases the energy difference between the system in its initial state and the surroundings, i.e. the useful energy.

In position three, the fuel ignites and energy is suddenly released, resulting in greater change in conditions inside the cylinder—i.e., change in the statistical distribution of the microscopic components of the system in relation to those in the surroundings. In the process pressure on the boundaries causes the piston to do work on the surroundings.

If the system undergoes one cycle only, after which the weight is removed, heat would continue to escape through the cylinder walls until the system and the surroundings are in thermal equilibrium again. The system then returns to position one, a state of maximum entropy. Alternatively, the piston could be forced to position one or position two through work done on it by a second system, and the cycle repeated. However, thermal energy loss through the boundaries is essential for the system to remain active, because if heat were retained after each cycle, it would cause the system to melt down. Therefore, as mentioned in the definition of a closed system, fluctuation in energy levels, in the form of heat transfer and work done in both directions across a closed system’s boundaries, is essential for the system to remain active. In effect, energy input is balanced by work done and heat losses.
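
The entropy bookkeeping of the compression step can be sketched for the idealized case of a slow, isothermal compression of an ideal gas. In this reversible limit the entropy lost by the system exactly reappears in the surroundings; any real compression would hand the surroundings more, so the total never falls below zero. The function and figures are illustrative assumptions:

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def isothermal_entropy(n, v1, v2):
    """Entropy change of n moles of ideal gas compressed isothermally from
    v1 to v2; the temperature term vanishes when T is held constant."""
    return n * R * math.log(v2 / v1)

# The weight compresses one mole of gas to half its volume at T = 300 K.
ds_system = isothermal_entropy(1.0, 1.0, 0.5)  # negative: system entropy lowered
q_out = -ds_system * 300.0                     # heat rejected to the surroundings (J)
ds_surroundings = q_out / 300.0                # surroundings gain entropy

print(ds_system, ds_surroundings, ds_system + ds_surroundings)
```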

Work done and energy loss are indicative of a system’s drive toward reaching a state of rest. Were it not for the continual energy input, a system would exhaust its energy and come to rest. In other words, if energy input into a system were terminated, the system would eventually reach maximum entropy. In fact, all systems work toward that state. This is evident from the behaviour of all systems in nature, from the simple pendulum to the complex weather system: the former tries to reach a state of static equilibrium, and the latter continually works to even out the temperature and pressure distribution. Both fail to reach equilibrium, and thus they remain active. Their failure stems from their inherent instability, which produces uncontrollable fluctuation in the energy crossing their boundaries. If the energy across an entire system evened out, the system would cease to function and would no longer be distinguishable as a system.

In the case of the cylinder-piston system, if the flow of energy in one direction were interrupted, the state of the system would continue to change until eventually the system breaks down. It would then remain a collection of metal parts, but not a system. The reason it remains a structure and does not vanish is that each of its constituent particles is itself a system interacting with the surroundings. If the distribution of those microscopic constituents, right down to the subatomic level, were the same as that of the surroundings, the system would be indistinguishable from the surroundings, and as such it would not exist.

In fact, we can go a step further and say that if the distribution of the elements of the fabric of space forming subatomic particles were the same as that in their surroundings, the particles would not exist, because they would be indistinguishable from their surroundings. As such, the entropy of individual matter particles, which we shall refer to as quantum-level entropy, reflects the distribution of the elements of the fabric of space forming those particles. That distribution, as will become apparent, is a measure of heat, i.e. it represents temperature.

To delve deeper into our exploration of matter as a closed system, I shall investigate the entropy of matter at the level of individual particles, i.e. the quantum level, and subsequently at the level of the universe. Before I do that in the next post, it would be helpful to answer the question of why the elements of the fabric of space are undetectable.

As a principle, an observation takes place if, and only if, the observed object or phenomenon produces an effect in the surroundings, because any observation must involve some form of interaction between the observed and the observer. The interaction takes the form of signals emitted by, or reflected off, the observed that are then detected by the observer. The signals are transmitted through surroundings common to both; in the absence of continuous common surroundings, signals would not reach the observer. Since an individual element of space in isolation does not, and cannot, produce an effect to distinguish it in the surroundings, it cannot be detected.

Considered from a systems perspective, no individual element constitutes a system, because a system is a collection of components. Therefore, an individual element, as the only elementary (indivisible) object in existence, does not constitute a system; as such, it is indistinguishable in the surroundings and consequently cannot be detected. A subatomic particle, as an elementary structure forming from those elements, represents the most basic of all systems in existence, and if destabilized, it would decay to its undetectable components.


[1] Surroundings in this context refer to a thermodynamic system’s surroundings, which include everything external to the system’s boundary, regardless of whether the boundary is physical or conceptual.

[2] Zero-point energy is the lowest possible energy level that a matter system can reach at absolute-zero temperature.

[3] A reversible process is one in which the changes experienced by a system in transition from one state to another, say A to B, are experienced by the system in reverse order in infinitesimal steps. In effect, a reversible process is cyclical and path-dependent, so that a system undergoing such a process can return through each and every state it experienced at the micro level, but in exact reverse order. As such, a reversible process requires infinite time to complete.

[4] The term usable energy refers to the difference in energy levels between a system and its surroundings, which enables work to be done by a system on the surroundings or vice versa.

Why does E/m = c^2?

Einstein’s famous equation E = mc^2 encapsulates the relationship between mass and energy. However, explaining the appearance of the square of the speed of light as a constant of proportionality proved to be an insurmountable challenge for physicists. They could only point out the colossal energy contained in a small amount of mass, which is immediately obvious from the magnitude of the speed of light. But why the speed of light? What is its significance in the relationship? And why squared?

These and many other questions in physics remain unanswered, because the source of physical reality and the effect it has on all physical phenomena have been totally discarded. Therefore, phenomena such as matter, mass, energy, electric charge, light, gravity and time are treated as independent quantities that arise out of nothing!

In this post, I shall reveal the reason behind the appearance of the speed of light [c] in Einstein’s equation and explain why it has to be squared. This I shall do with reference to the fabric of space and the nature of matter and energy as quantities derived from that fabric. To save the reader going over previous posts for the relevant information, I shall begin by defining those parameters. However, for the sake of brevity, I shall not explain why those definitions hold true.

Modelling Supernovae & Black Holes!

Given the existence of a space fabric as a fluid medium with which matter interacts, it is possible to physically model many phenomena at both quantum and galactic levels. Discarding the existence and effect of such a medium is the main reason behind the irreconcilability of some theories and the inexplicable behaviour of objects at both levels.

To develop a conceptual model of a system, the conscious mind begins by linking simple concepts to form mathematical relations. For example, by realizing that the flow rate from a water tap depends on the number of tap turns, a mathematical model is developed. It is then possible to relate the volume of water collected to the duration it takes to collect it, for a given number of tap turns. Relating a system’s variables to each other correctly is all that is needed to develop a mathematical model. This simple water-tap example could be extended to predict the flow rate of water through any pipe. To do that, the model must include all the relevant parameters that affect the flow: pressure head, pipe diameter, length, and surface roughness.
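
A minimal sketch of such a pipe-flow model, using the standard Darcy–Weisbach relation h = f (L/D) v²/(2g) and assuming a constant friction factor f (in reality f depends on the Reynolds number and the pipe’s surface roughness); the function name and figures are my own illustrations:

```python
import math

def flow_rate(head, length, diameter, friction_factor, g=9.81):
    """Volumetric flow rate (m^3/s) through a pipe driven by a pressure
    head (m), from the Darcy-Weisbach relation h = f*(L/D)*v^2/(2g),
    solved for the mean velocity v and multiplied by the pipe area."""
    v = math.sqrt(2.0 * g * head * diameter / (friction_factor * length))
    area = math.pi * (diameter / 2.0) ** 2
    return area * v

# 10 m head driving flow through 50 m of 0.05 m pipe, with f = 0.02 assumed:
print(flow_rate(10.0, 50.0, 0.05, 0.02))
```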

π in the sky!

In this post I shall discuss the nature of π as a mathematical constant and reveal its relationship with the fabric of space. As an irrational number, π represents the ratio of a circle’s circumference to its diameter. An irrational number is a real number that cannot be expressed as a ratio (a/b), where (a) and (b) are integers and (b ≠ 0).

Returning briefly to the cubical universe, which we considered in a previous post: if the observer there begins to probe his world at the level of the individual cubes defining his space and decides to form different geometries at that level, he could do so only by using those cubes; he would have no other means. Using cubes to define circles, he would soon discover that the geometric properties of his circles vary according to the orientation of the cubes. For example, the number of elements defining the diameter of the same circle could vary depending upon the orientation of the cubes in the circumference. Therefore, in a universe defined by cubical elements, π, as the ratio of the units of length of a circle’s circumference to those of its diameter, cannot be constant.
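
The observer’s predicament can be imitated by swapping the usual straight-line measure of distance for a grid-aligned (taxicab) one, under which the ratio of circumference to diameter comes out as 4 rather than 3.14159…. The sketch below, an analogy of my own rather than the post’s cubical construction, samples points on the taxicab unit circle |x| + |y| = 1 and sums the taxicab lengths of the segments between them:

```python
import math

def taxicab_pi(n=100000):
    """Ratio of circumference to diameter of a unit 'circle' when distances
    are measured on a square grid (taxicab metric d = |dx| + |dy|)."""
    pts = []
    for i in range(n):
        theta = 2.0 * math.pi * i / n
        x, y = math.cos(theta), math.sin(theta)
        s = abs(x) + abs(y)
        pts.append((x / s, y / s))  # project radially onto |x| + |y| = 1
    # Sum the taxicab lengths of the segments between successive points,
    # including the closing segment from the last point back to the first.
    circumference = sum(
        abs(pts[i][0] - pts[i - 1][0]) + abs(pts[i][1] - pts[i - 1][1])
        for i in range(n)
    )
    return circumference / 2.0  # the diameter is 2 in the taxicab metric

print(taxicab_pi())  # 4.0 (up to floating-point error), not 3.14159...
```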

Complex Numbers Unravelled

In mathematics and in physics, complex numbers are considered mysterious. Although they are essential to solving fundamental problems in science and engineering, their true relationship with the physical world remains ambiguous. For example, why do they have two components, referred to as the real and imaginary parts? Why is the imaginary part closely linked to the square root of minus one? And most importantly, what is their relationship with physical reality? This post answers those questions.
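
One concrete way to see why the two components belong together: multiplying by the imaginary unit rotates a point a quarter-turn in the plane, so two such multiplications reverse it, which is exactly the statement i² = −1. A minimal sketch using Python’s built-in complex type:

```python
# Multiplying by 1j (the imaginary unit) rotates a point 90 degrees
# anticlockwise in the plane; two quarter-turns reverse the vector,
# which is the geometric content of i*i = -1.
z = 1 + 0j          # the point (1, 0)
print(z * 1j)       # (0, 1): one quarter-turn
print(z * 1j * 1j)  # (-1, 0): two quarter-turns, i.e. multiplication by -1
```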

In this post, I shall unravel one of the mysteries of mathematics: the mystery of complex numbers. In future posts, I shall unravel other mysteries with reference to the basic elements of physical reality, including infinity and irrational numbers, of which π is considered the most mysterious.