This post is based on a hypothesis that describes physical reality in terms of the behaviour of the fabric of space and the interactions of its constituents. It is the subject of a book titled ‘Physical Reality – the fabric of space’. The hypothesis defines the fabric of space as a physical medium of discrete spherical elements permeating all space and oscillating at an invariable period of Planck time. The diameter of an element is the Planck length. As such, the elements’ frequency is constant, while their amplitude of oscillation is independent of frequency and reflects temperature. The hypothesis defines energy as the motion of the elements of the fabric of space, be it oscillatory or curvilinear, and it defines matter particles as dynamic structures that form from those elements. Thus, quantum fields reflect the behaviour of the elements of the fabric of space in the immediate surroundings of the particles, a behaviour that results from the particles’ interaction with the fabric of space. The hypothesis defines all other properties of matter particles, including electric charge and quantum spin number, in terms of the mechanics of the elements of space forming the particles. However, it defines mass as the exposed background vacuum.
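For reference, the Planck length and Planck time mentioned above are standard quantities computable from the fundamental constants (this is textbook physics, independent of the hypothesis); a quick numerical sketch:

```python
import math

# Fundamental constants (SI units, CODATA values)
hbar = 1.054_571_817e-34  # reduced Planck constant, J·s
G = 6.674_30e-11          # gravitational constant, m^3·kg^-1·s^-2
c = 2.997_924_58e8        # speed of light, m/s

# Planck length and Planck time from dimensional combinations of hbar, G, c
planck_length = math.sqrt(hbar * G / c**3)  # ≈ 1.616e-35 m
planck_time = math.sqrt(hbar * G / c**5)    # ≈ 5.391e-44 s

print(f"Planck length: {planck_length:.3e} m")
print(f"Planck time:   {planck_time:.3e} s")
```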
Considering individual matter particles as thermodynamic systems may seem a far-fetched idea. The main reason is that the structure of subatomic particles has remained ambiguous and detached from the fabric of space with which it interacts. Based on the proposed hypothesis, it will become clear that subatomic particles are essentially systems whose inner workings are governed by the laws of thermodynamics. However, before I appeal to the laws of thermodynamics to define matter particles as thermodynamic systems, it is appropriate to define what is meant by a system and outline the different types of thermodynamic systems.
A system is a set of interactive components within common boundaries performing a task in surroundings that none of the components can perform independently.
In thermodynamics, a system is classified as one of three types:
- Open, if both matter and energy can cross the boundaries of the system.
- Closed, if only energy can cross the boundaries.
- Isolated, if matter and energy cannot cross the boundaries.
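The three classes above differ only in which quantities may cross the boundary. As a compact illustration (a sketch with hypothetical names, not part of the hypothesis):

```python
from enum import Enum

class SystemType(Enum):
    OPEN = "open"          # matter and energy cross the boundary
    CLOSED = "closed"      # only energy crosses the boundary
    ISOLATED = "isolated"  # neither matter nor energy crosses

def classify(matter_crosses: bool, energy_crosses: bool) -> SystemType:
    """Classify a thermodynamic system by what its boundary admits.

    Note: matter crossing without energy is not physically meaningful,
    since matter carries energy; that combination is not modelled here.
    """
    if matter_crosses and energy_crosses:
        return SystemType.OPEN
    if energy_crosses:
        return SystemType.CLOSED
    return SystemType.ISOLATED

# Example: a system that exchanges energy but not matter is closed
print(classify(matter_crosses=False, energy_crosses=True))  # SystemType.CLOSED
```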
The behaviour of a system depends as much on its inner workings as it does on conditions in its surroundings. As conditions change, a system may reach a state whereby it can no longer function. Therefore, considering matter particles as systems raises two questions. First, what type of system are they? Second, assuming matter particles are affected by conditions in the surroundings, what are the limiting conditions that can cause them to break down and cease to function as matter particles?
At the level of individual subatomic particles, matter cannot be considered an open system: according to Pauli’s exclusion principle in quantum mechanics, matter particles cannot occupy the same quantum state, and hence the same space, simultaneously, and therefore they do not cross one another’s boundaries.
To determine which type they are, we need to investigate their relationship with energy. If energy crosses their boundaries, then in line with the above definitions of thermodynamic systems, they must be closed systems.
Considering matter’s interaction with light, we need not dig further in our investigation: matter particles clearly emit and absorb photons, which are quantized forms of energy. On the merit of this evidence alone, it appears that matter particles are closed systems. To confirm this, let us consider an object in space as a collection of matter particles (e.g., our planet). As a system, we can define Earth’s boundaries as the outermost layer of the atmosphere. If we ignore the slight increase in its mass due to the fall of matter particles from space, then the planet constitutes a closed system in which the energy entering the system from the sun equals the energy dissipated into space. Of course, the greenhouse effect, which is hotly debated, is concerned with the entrapment of energy in the system, causing a net increase in the internal energy. The objective here is not to contribute to that debate but to emphasize that matter represents a closed system, and as such, its interaction with the surroundings is limited to a range of conditions beyond which its state begins to change.
It is important to note that unless there is equilibrium between energy input and output, the state of matter will continue to alter. At the classical level, that change is reflected in phase change. However, at the quantum level, which is the level of subatomic particles, the change is reflected in the level of particle excitation, leading to instability and ultimate decay. Therefore, if energy input into molecular matter continues to exceed the energy output, change will continue to take place until all particles break up into their elementary constituents because of increased excitation. Any further increase in energy input destabilizes the elementary particles, causing them to decay to energy, as is evident from high-energy particle-accelerator collisions.
However, if energy output from matter continues to exceed input, a stage will be reached at which matter is at its lowest possible energy level. At that level its interaction with the surroundings will be at an absolute minimum, and no amount of work could extract the remaining energy, which is referred to as zero-point energy.
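For reference, standard quantum mechanics puts the zero-point energy of a harmonic oscillator of angular frequency ω at E₀ = ħω/2 (a textbook result, independent of the hypothesis); a quick numerical sketch with an illustrative frequency:

```python
hbar = 1.054_571_817e-34  # reduced Planck constant, J·s

def zero_point_energy(omega: float) -> float:
    """Standard QM result: ground-state energy E0 = hbar * omega / 2."""
    return hbar * omega / 2.0

# Illustrative value: a molecular vibration with omega ~ 1e14 rad/s
print(f"E0 ≈ {zero_point_energy(1e14):.3e} J")  # ≈ 5.27e-21 J
```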
Since the definition of matter as a closed system appears to apply to subatomic particles as well as molecular matter, we can generalize it to include matter in all its phases. We can now refine our earlier definition of a system to apply specifically to a closed system, thus:
A closed system is a set of interactive components within common boundaries performing a task in surroundings that none of them can perform independently. In the process, energy and work continually cross the system boundaries in opposite directions.
Based on this definition, a closed system is clearly characterized by fluctuating energy levels. Since energy and work continually cross the boundaries of a closed system, energy fluctuations in the system must be mirrored in the surroundings. The extent of energy fluctuation a system can cope with reflects the range of stable states within which it can function, so that the wider the range, the greater the adaptability of the system to changes.
System stability and adaptability lead us to the concepts of entropy, irreversibility, and chaos, which are fundamental to understanding systemic behaviour in thermodynamics. I shall now turn to explain entropy and then relate it to the space fabric described above. In a future post, I shall consider the entropy of individual subatomic particles and in a subsequent one, I shall explain the reason behind the low entropy of the universe.
In thermodynamics, entropy is a measure of the irreversibility of processes. It reflects the extent of utilization of useful or usable energy in a system. In a closed system, it is a function of temperature difference between the system and its surroundings. However, in an isolated system, it is a function of temperature difference between different parts of the system. Entropy is therefore associated with thermal equilibrium.
In statistical mechanics, entropy reflects the statistical distribution of a system’s microscopic components. The common ground between the two definitions of entropy is that both are indicative of the level of kinetic energy of the microscopic components of a system. The distribution of the micro components reflects their level of excitation, which in turn reflects their kinetic energy level. Therefore, the drive towards uniform energy across a system or between a system and its surroundings is essentially a drive toward a state of uniform distribution of the microscopic components.
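The statistical definition can be made concrete with Boltzmann’s relation S = k_B ln W, where W is the number of equally likely microstates (again a standard result, not specific to the hypothesis); a minimal sketch:

```python
import math

k_B = 1.380_649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(microstates: int) -> float:
    """S = k_B * ln(W): entropy of a system with W equally likely microstates."""
    return k_B * math.log(microstates)

# Doubling the accessible microstates raises entropy by exactly k_B * ln(2)
s1 = boltzmann_entropy(10**6)
s2 = boltzmann_entropy(2 * 10**6)
print(f"ΔS on doubling W: {s2 - s1:.3e} J/K")  # ≈ 9.57e-24 J/K
```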
When the statistical distribution of the microscopic components of a closed system evens out with that of the surroundings, the system reaches maximum entropy, a point at which energy transfer across the system’s boundaries has no definite direction, and as such no further work can be done by the system on the surroundings, or vice versa. Although a system’s microscopic components are essentially matter particles, the existence of a physical space fabric of discrete elements demands that those elements form part of that distribution, as I shall explain.
A typical example of a process that explains the concept of entropy of a closed system is that of the cylinder-piston arrangement shown in fig. 2.1. The three different positions of the piston in the figure describe a complete cycle. In position one, conditions in the system and the surroundings are the same, i.e., the distribution of the microscopic components of the system and surroundings, namely the air inside and outside the cylinder, is the same. That reflects thermal equilibrium between the system and the surroundings. Therefore, no work can be done by the system or the surroundings. The entropy of the system is at maximum.
In position two, work is done on the system by placing weight on the piston. This results in a small displacement of the piston and energy transfer to the system in the form of work, which lowers its entropy. The distribution of the microscopic components of the air in the system is slightly denser than that of the surroundings. As such, they have higher kinetic energy, giving the system the potential to do work on the surroundings. Compressing the air prior to the final stage of the cycle increases the energy difference between the system in its initial state and the surroundings, i.e., the useful energy.
In position three, the fuel ignites and energy is suddenly released, resulting in a greater change in conditions inside the cylinder, i.e., a change in the statistical distribution of the microscopic components of the system in relation to those in the surroundings. In the process, pressure on the boundaries causes the piston to do work on the surroundings.
If the system undergoes one cycle only, after which the weight is removed, heat would continue to escape through the cylinder walls until the system and the surroundings are in thermal equilibrium again. The system then returns to position one, which is a state of maximum entropy. Alternatively, the piston could be forced to position one or position two through work done on it by a second system, and the cycle repeated. However, thermal energy loss through the boundaries is essential for the system to remain active, because if heat were retained after each cycle, it would cause a system meltdown. Therefore, as we mentioned in the definition of a closed system, fluctuation in energy levels in the form of heat transfer and work done in both directions across a closed system’s boundaries is essential for the system to remain active. In effect, energy input is balanced by work done and heat losses.
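The return to thermal equilibrium can be quantified with the Clausius relation ΔS = Q/T for heat transferred at (approximately) constant temperature, a standard textbook formula. A sketch with hypothetical numbers, showing that heat flowing from a hotter system to cooler surroundings produces a net entropy increase:

```python
def entropy_change(heat_joules: float, temperature_kelvin: float) -> float:
    """Clausius relation: dS = Q / T for heat Q transferred at temperature T."""
    return heat_joules / temperature_kelvin

# Hypothetical figures: the cylinder rejects 500 J to surroundings at 300 K
# while the system itself sits at 320 K
dS_surroundings = entropy_change(+500.0, 300.0)  # surroundings gain entropy
dS_system = entropy_change(-500.0, 320.0)        # hotter system loses less
total = dS_system + dS_surroundings
print(f"Net entropy change: {total:.3f} J/K")  # positive, hence irreversible
```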
Work done and energy losses are indicative of a system’s drive toward reaching a state of rest. If not for the continual energy input, a system would exhaust its energy and come to rest. In other words, if energy input into a system were terminated, the system would eventually reach maximum entropy. In fact, all systems work toward that state. This is evident from the behaviour of systems in nature, from the simple pendulum to the complex weather system: the former tries to reach a state of static equilibrium, and the latter continually works to even out the temperature and pressure distribution to reach a state of equilibrium. Both systems fail to reach equilibrium, and thus they remain active. Their failure stems from their inherent instability, which produces uncontrollable fluctuation in the energy crossing their boundaries. If the energy across an entire system evened out, the system would cease to function and would no longer be distinguishable as a system.
In the case of the cylinder-piston system, if the flow of energy in one direction were interrupted, the state of the system would continue to change until eventually the system broke down. It would then remain a collection of metal structures, but not a system. The reason it remains a structure and does not vanish is that each of the constituent particles of its structure is itself a system interacting with the surroundings. If the distribution of those microscopic constituents, right down to the subatomic level, were the same as that of the surroundings, the system would be indistinguishable from the surroundings, and as such it would not exist.
In fact, we can go a step further and say that if the distribution of the elements of the fabric of space forming subatomic particles were the same as that of the elements in their surroundings, the particles would not exist, because they would be indistinguishable from their surroundings. As such, the entropy of individual matter particles, which we shall refer to as quantum-level entropy, reflects the distribution of the elements of the fabric of space forming those particles. That distribution, as will become apparent, is a measure of heat, i.e., it represents temperature, because it reflects the elements’ amplitude of oscillation.
To delve deeper into our exploration of matter as a closed system, I shall investigate the entropy of matter at the level of individual particles, i.e., the quantum level, and subsequently at the level of the universe. Before I do that in the next post, it would be helpful to answer the question of why the elements of the fabric of space are undetectable.
As a principle, an observation takes place if, and only if, the observed object or phenomenon produces an effect in the surroundings, because any observation must involve some form of interaction between the observed and the observer. The interaction takes the form of signals emitted by, or reflected off, the observed that are then detected by the observer. The signals are transmitted through surroundings common to both. In the absence of continuous common surroundings, signals would not reach the observer. Since an individual element of space in isolation does not, and cannot, produce an effect to distinguish it in the surroundings, it cannot be detected, because it does not constitute a system.
Considered from a systems perspective, no individual element constitutes a system, because a system is a collection of components. Therefore, an individual element, as the only elementary (indivisible) object in existence, does not constitute a system, and as such it is indistinguishable in the surroundings. Consequently, it cannot be detected. A subatomic particle, as an elementary structure forming from those elements, therefore represents the most basic of all systems in existence, and if destabilized, it would decay to its undetectable components.
 Surroundings in this context refer to a thermodynamic system’s surroundings, which includes everything external to the system’s boundary regardless of whether the boundary is physical or conceptual.
 Zero-point energy is the lowest possible energy level that a matter system can reach at absolute-zero temperature.
 A reversible process is one in which the changes experienced by a system in transition from one state to another, say A to B, are experienced by the system in reverse order in infinitesimal steps. In effect, a reversible process is cyclical and path dependent, so that when a system undergoes such a process it can return to each and every state it experienced at the micro level, but in exact reverse order. As such, a reversible process requires infinite time to complete.
 The term usable energy refers to the difference in energy levels between a system and its surroundings, which enables work to be done by a system on the surroundings or vice versa.