Deterministic World

Part 1. Physics

6. Information Couplings

Applying the concepts of information theory to real world experiments shows a seeming contradiction between the law of information exchange and the intuitive feeling that one-way information transfers are all around us. I will focus on the case of a macroscopic measurement (like reading the temperature of a furnace) as such measurements seem to be undeniably one-way. Practically when a measurement is carried out we have an idea of a system (with a spherical boundary) 'on which' the measurement is performed. The goal of a measurement is to tell us about the system, thus the measurement has an explicit dependence on the notion of a spatially bounded system. Measurement, then, is a coupling of information between the system and something outside its boundary, permitting information exchange across the boundary. Surprisingly, it is preferable for measurements to be minimally coupling so as to not perturb the system, as any information coupling leads to exchange with the measurement device/outside and thus disturbs the system. Such a disturbance amounts to an increase in entropy - since partial information exchange (as outlined earlier) will tend to equalize the system and its outside environment (including measuring device), across the system boundaries. So entropy is an essential part of measurement - for net classical information to be manifested at the measurement device, there must be some difference in entropy that the measurement's coupling allows to equalize (the entropy difference then must be maintained for continued measurement). Entropy-free "measurements" form the realm of quantum computing and bring their own challenges. 
Places where entropy changes occur give the illusion of a one-way coupling (despite elementary exchange being two-way), in the same sense that information is radiated outwards coherently, which is easy to do, while causing coherent inward (space to point) radiation is probabilistically unlikely - partial information exchange still being the culprit here. Because of the irreversibility of entropy rise, any such processes must be regarded as very high information coupling (and high system perturbation), as well as high 'isolation' ie low impact of observer's actions on measurement device. Amplifiers, light bulbs, speakers, even light reflecting off a mechanical dial, are all examples of such an 'isolating'/high information coupling, with entropy change. All measurements that we deal with must eventually couple information into human-sensible forms like visual/auditory/tactile signals. In practical measurements, at least this last stage will be an 'isolating' coupling - since this transfers highly coupled information (big perturbation), to be in accord with the low-perturbation spirit of measurement, our initial/most basic information probe into the system must actually be designed to be low-coupling (ironically) to serve as a limit on the system perturbation, while subsequent stages of the measurement will couple this small information to increasing numbers of particles (entropy-raising) for ease of observation/low risk of perturbation. For example, a thermocouple is used to measure the temperature of a furnace. The furnace boundaries are designed to keep heat in, and disallow heat transfer. To know the temperature we must let that information be coupled to outside the furnace walls, which the thermocouple does by allowing exchange between temperature and electric current, the latter of which is allowed to easily cross the furnace boundary. 
At this point the measurement is designed to be minimally coupling - so that there is little heat loss from the furnace (through driving a current - same mechanism as Peltier/thermoelectric heating and cooling) and also so any perturbation of the current by external means (ie the measurement device itself) has little effect on the furnace (since at this point the current can just as well be converted to temperature instead - the coupling must be 2-way). Once the low-perturbation information has been transferred out of the furnace though, we can start to make high-perturbation couplings so as to maximally 'transmit' (appearing unidirectionally) the information. The limiting factor on furnace perturbation is effectively what happens between the thermocouple tip and its connection at an amplifier - which creates a tiny heat transfer if there is a thermal difference between the two. At the amplifier, the whole of this (small) perturbation is coupled to thousands of times more electrons in a coherent way, such that any possible reverse coupling would require coherent feeding of electrons back - which is statistically implausible and is the nature of entropy rise (amplifiers must by their nature cause an entropy rise). Amplifiers must be designed to do this without disturbing the primary thermocouple coupling, ie amplifier temperature/cold-junction compensation. The amplified signal, being 'isolated' at least by the ratio of output current to input current (in that a perturbation to the latter can only have that fraction of an effect on the former), can be handled much more loosely, such as to drive a high-current LED for visual output. The LED itself is a measurement that couples the current in a thermocouple meter's boundaries to light which can escape these boundaries easily. The LED is a big perturbation on the current, and thus entropy-rising and effectively isolating - there is not much an observer can do to convert the light emitted back into a similar magnitude current. 
An experimenter (human) can directly observe the light and then the measurement can be considered complete here. A high-coupling and high-perturbation measurement such as the LED can be considered as "system evolution", thus blurring the line between measurement and 'regular' system behavior. Inasmuch as such couplings require an entropy difference, this suggests at least two approaches to measurements - having the system act as either entropy sink or source (for instance, by having the thermocouple junction either hotter or colder than the system being measured).
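The scale of these successive couplings can be put in rough numbers. A minimal sketch, assuming a type-K thermocouple (the ~41 uV/K Seebeck sensitivity is a standard figure) with an illustrative temperature difference and amplifier gain - the specific values are assumptions for illustration, not measurements:

```python
# Rough numbers for the thermocouple measurement chain described above.
# The type-K Seebeck sensitivity (~41 uV/K) is a standard figure; the
# temperature difference and amplifier gain are assumed for illustration.
S = 41e-6                 # V/K, type-K thermocouple sensitivity
dT = 500.0                # K, furnace hot junction minus cold junction
v_probe = S * dT          # ~20.5 mV: the deliberately low-coupling probe signal
gain = 500.0              # amplifier couples this signal to many more electrons
v_out = v_probe * gain    # ~10 V, enough to drive an LED indicator stage
# A disturbance applied at the output is diluted by the gain on its way
# back toward the probe, illustrating the 'isolating' character of the
# amplifier stage:
disturbance = 0.1                      # V, applied at the output
effect_at_probe = disturbance / gain   # 0.2 mV equivalent at the input
print(v_probe, v_out, effect_at_probe)
```

The point of the numbers is the asymmetry: a volt-scale disturbance downstream maps to a sub-millivolt equivalent at the probe, while the probe signal is deliberately coupled upward to volt scale.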

Propagation of forces - when touching/pushing something, how does it "know" whether to push back or accelerate and how much? This happens by a force propagation throughout the object, which can be interpreted as waves 'feeling' the object boundaries and eventually (with dissipative processes) establishing the equilibrium we observe (such as F=kx or F=ma). The force propagation is a finite-speed and local process (ie each atom knows and responds only to its surroundings, finite-difference type), paralleling that of transmission line theory but usually among multiple dimensions. Like I and V in transmission lines, force propagation conserves F and v (force and relative velocity). The integrals of I*V and F*v will both give energy, so both theories conserve energy. The two variables establish an equilibrium in a controlled way since they are coupled in an oscillator (there is a difference between an atom at rest in its potential vs an atom traveling at velocity v in its potential - this matters for the propagation mechanism while for the equilibrium state only E (v*F) is sufficient - note how this explanation justifies the variational energy/virtual work approach used in statics). So, one object impacting another (1D mass/spring) looks like this:

.~~~~.~~~~.~~~~.	.~~~~.~~~~.~~~~.			.~~~~.~~~~.~/\./\./\~.~~~~.~~~~.			.~~~~.~/\./\~.~~.~/\./\~.~~~~.
      ---->										    compress, F=1							    compression wave propagates				
	v=1, F=0			v=0, F=0		
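The impact above can be simulated directly with the finite-difference picture just described. A minimal sketch, assuming unit masses and spring constants (a toy chain, not a calibrated material model):

```python
# 1D chain of unit masses joined by unit-stiffness springs; the leftmost
# mass arrives with v=1 and a compression wave carries F and v rightward
# at finite speed. Semi-implicit Euler (leapfrog-style) time stepping.
N, k, m, dt = 40, 1.0, 1.0, 0.05
x = [float(i) for i in range(N)]   # rest positions, unit spacing
v = [0.0] * N
v[0] = 1.0                         # impacting mass

def forces(x):
    # each atom responds only to its immediate neighbours
    f = [0.0] * N
    for i in range(N - 1):
        s = k * ((x[i + 1] - x[i]) - 1.0)   # spring extension (negative = compressed)
        f[i] += s                           # compressed spring pushes neighbours apart
        f[i + 1] -= s
    return f

for _ in range(400):                        # t = 20 in natural units
    f = forces(x)
    v = [vi + (fi / m) * dt for vi, fi in zip(v, f)]
    x = [xi + vi * dt for xi, vi in zip(x, v)]

total_p = sum(m * vi for vi in v)           # internal forces cancel pairwise
print(round(total_p, 6))
```

The wave speed here is sqrt(k/m) spacings per unit time, so at t = 20 the far end of the chain has not yet learned of the impact - the propagation really is local and finite-speed.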

If the impacting object is faster than the force propagation speed, it can pierce otherwise solid materials with minimal force expenditure - only that required to break the immediately adjacent bonds. This is the basis of penetrator missiles/arrows/railgun projectiles - they can easily pierce inch-thick steel, rendering armor impractical. Doing this *requires* speed. The propagation properties of both objects matter: consider doing a karate maneuver to break a board held by a helper, either held rigidly or held loosely, with an identical punch used to break the board. If held loosely, the board accelerates and hits the helper in the face, and for further insult doesn't even break. If held rigidly, the energy from the punch instead goes into breaking the board and the helper remains safe. The same idea applies when holding a gun with recoil - a rigid hold gives much more control and less kick-back than a loose hold. The same applies with car seatbelts - they attach the passenger to the seat relatively rigidly, which gives a gentler deceleration because the passenger stays coupled to the large mass/inertia of the car.
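The "faster than the force propagation speed" condition can be checked with round numbers. A sketch assuming textbook values for steel (Young's modulus ~200 GPa, density ~7850 kg/m^3 - both assumed figures), compared against rough, illustrative projectile speeds:

```python
import math

# Longitudinal ('thin rod') wave speed c = sqrt(E / rho) in steel,
# using textbook values (assumed: E ~ 200 GPa, rho ~ 7850 kg/m^3).
E = 200e9        # Pa
rho = 7850.0     # kg/m^3
c = math.sqrt(E / rho)   # ~5000 m/s

# Rough impact speeds for comparison (illustrative figures):
arrow = 60.0        # m/s
rifle = 900.0       # m/s
railgun = 2500.0    # m/s - the only one approaching c
print(round(c), arrow / c, rifle / c, railgun / c)
```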

It would be useful here to describe the origin of moment - the possibility of gearing or leverage, on a more elementary level. Of course, conservation of energy means that a longer r must mean a lower F, because the path length 2*pi*r (and thus v) scales with r. But this skips over some interesting mechanics - specifically, just how does a rod "know" to increase force transmitted as a function of r (without actually doing virtual work)? Consider a lever that we can interpret in a static situation as a case of beam loading:

 o----.-----------.			 .----.-----------.			 .-----------.----.----.-----------.		
      v           ^ F		 ^    v           ^ F		 v F         ^         v           ^ F	
 Lever, fulcrum at o		 Static beam loading		    Symmetric static beam loading	
The goal is to understand what allows the beam to hold its shape, because this is crucial to using it as a lever/fulcrum. Strings can be used for leverage:
______	______
 |  |    \  / 
 |  | F->/  \<-F
 |  |    ----
 ----    |..|
 |..|    ----

The force F used to squeeze the strings together is less than the weight W of the box, but nonetheless the box can be lifted. This is because the rest of W is handled by string internal stretching, and is similar to what happens inside a beam with a moment load.

Consider a string on a pulley: the force is constant all along the string
| o |                           /                                   |
 \-/--------------------> String pulled with force F ---------> Still force F
  - Tangent line to pulley    /                                     |

F is constant along the string, but the angle of F vs tangent line of pulley at that point starts out at 0 degrees at the pulley and approaches 90 degrees at large r, meaning F cannot oppose circular motion at large r. If r is forced to be constant, pulling the string in a circular motion at r means winding the string around the pulley by a constant pulley circumference over a much longer circumference at r, so already some gearing must occur. The force per unit distance at r thus must be less, to conserve energy. This is achieved by the near-90deg angle between F and tangent at r (the remainder of F, directed towards the rotation origin, is a force component that exists but does no work since r is forced constant - this contributes to rigid beam bending when a moment is applied).
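This angle bookkeeping can be checked numerically. In the sketch below (pulley radius and string tension are assumed values), the string's line of action always passes at distance a from the axis, so the torque F*a is the same everywhere; the useful tangential component at radius r falls off as a/r, and the work done per revolution comes out identical at every r:

```python
import math

# A string leaves a pulley of radius a tangentially, so its line of
# action passes at perpendicular distance a from the axis. At a point
# held at radius r, only the component along the local tangent does
# work; geometry gives that component as torque/r = F*a/r.
a = 0.1      # m, pulley radius (assumed)
F = 50.0     # N, string tension (assumed)
torque = F * a           # constant, independent of where we look
for r in (0.1, 0.5, 2.0):
    F_tan = torque / r                       # useful component shrinks as a/r
    work_per_rev = F_tan * 2 * math.pi * r   # force times distance moved at r
    # work per revolution equals F times the wound string length 2*pi*a,
    # independent of r: the gearing is purely an angle effect
    assert abs(work_per_rev - F * 2 * math.pi * a) < 1e-9
print(torque)
```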

How can we force r to be constant? Perhaps a set of grooves surrounding the pulley. But also by pushing the pulley: consider a 'rigid column', ie a string that pushes (but not quite a beam, ie no bending):
 /-\---------------< Column pushed with force F <---------- Still force F
| o |
Same as the string, the angle between F and tangent line at r causes a gearing effect. Once again we must keep r constant, and we can do this by combining the string and the column:
 /-\---Push-<---\\\\\\\\\\\\\\\\              ^
| o |                           ==============^ Fnet
 \-/---Pull->---////////////////  < Fbend >   ^ 

This creates an effective 'rigid beam' or lever, such that the point at the end can be moved at constant r to achieve a mechanical gearing/force amplification. As r->inf, the whole of F must be handled by bending forces, so beam deflection increases with r even at constant F. So, the gearing of a lever is essentially an angle effect, such that the applied force is only working against a small angular component of the amplified force, the rest of it going into beam bending and doing no work because r is kept constant by the rigidity of the beam. A similar angle gearing is used with a wedge or screw threads, so the force applied is at an angle to the force that 'does work':

     |\ ^ Fup
F -> | \		F < Fup, remainder of Fup goes into Fdown
        v Fdown
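The same trade can be made explicit for a frictionless wedge. In this sketch (wedge angle and push force are assumed values), pushing the wedge a distance d sideways lifts the load by only d*tan(theta), so the force is amplified by 1/tan(theta) at the cost of travel, with energy conserved:

```python
import math

# Frictionless wedge of slope angle theta: a horizontal push F yields a
# vertical lift force F / tan(theta). All values below are assumed.
theta = math.radians(10.0)   # shallow wedge angle
F = 20.0                     # N, horizontal push applied by the operator
F_up = F / math.tan(theta)   # ~113 N of lift on the load
d = 0.05                     # m of horizontal wedge travel
lift = d * math.tan(theta)   # the load rises much less than the wedge moves
# Energy check: work in equals work out (no friction assumed)
assert abs(F * d - F_up * lift) < 1e-9
print(round(F_up, 1))
```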

So, this angular gearing underlies pulleys, squeezed strings, and wedges. It seems that the underlying physics require that F=F1*sin(theta) + F2*cos(theta) be instantaneously and reversibly true, ie this is a property of force itself (and perhaps space) that any such angular split is equivalent and specifying either F1 or F2 with known theta requires the other one to be a specific value. This can be interpreted in terms of potential-energy wells as projections along different dimensions (a "microscopic wedge"):

    /\    //
   /  -__-/
  /      /   Particle o in a 2D potential that is slanted
 /      /    <--- W    Force W moves the potential towards the left
/    o /          F    Force F moves the potential towards the bottom
\     /           |
 -___-            v
Now project the particle's energy vs displacement along direction of W vs along direction of F:
E|    /				E|          -
 |   -               |        --
 |  -                |      --
 | -                 |   ---
 |-___________       |---_________
       x_W                 x_F

Since energy is conserved, E(x_W) = E(x_F) thus x_W < x_F and dE/dx_W > dE/dx_F and thus W>F. The angular leverage is thus allowed by wedge-like shifting of energy potentials along 'soft' projections while the load sees 'stiff' projections. In other words, looking along x_W, which is the way the object to which an amplified force is applied sees the wedge/lever/mechanism, any small deviation from a desired location requires very large changes in energy. At the same time, looking along x_F, which is the way the operator of the mechanism sees the situation, any small deviation from a desired location requires only small changes in energy - because, as shown above, the mechanism is designed such that it generates internal (structural) forces that are determined by the operator but that interact only with the load.
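The slope argument can be written out in a few lines, parametrizing the slanted potential by a wedge angle theta (an assumed parametrization, using the same symbols as above):

```latex
% The same energy change dE is seen along both projections:
dE = W\,dx_W = F\,dx_F
% Wedge geometry links the two displacements:
dx_W = dx_F \tan\theta
% Combining the two:
\frac{W}{F} = \frac{dx_F}{dx_W} = \frac{1}{\tan\theta} > 1
\quad (\theta < 45^\circ)
```

So the steep projection (dE/dx_W large) carries the large force W seen by the load, while the shallow projection (dE/dx_F small) carries the small force F supplied by the operator.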

I mentioned a while earlier the notion of upconversion processes (amplifiers, valves, transistors) which seem to hold a special role in their essence for operation of any machines. Here I will take a closer look at just what these processes accomplish, and what gives the illusion of their one-way (isolated) nature even when underlying physics are always 2-way exchanges. We may more appropriately call such processes "directed" as opposed to one-way or controlled, as will be made clearer below. All measurements have at some point a coupling that permits the quantity of interest to exit the system boundary, and all such couplings are inherently 2-way and thus are designed to be physically small compared to system size (so outside influences can have at most a tiny impact on system energetics - and on *system evolution*, contributing to the one-way notion of measurement which is strictly not true). In order for us to make use of this information coupling without affecting the initial system (as that would mess up the very measurement we are trying to make), an entropy-generating amplifier type (upconversion) device is used, which may be as easy as looking at a dial gauge illuminated by a light source (where the entropy is generated). This coupling seems to be quite different from the first type, as here there is an entropy change and it is decidedly not a conventional 2-way exchange: the amount of light falling on a gauge really has no effect on the measurement even in a theoretical sense (if light is assumed to be isotropic - usually it won't be in which case there *is* a coupling but it is very small indeed [f1]). What is the difference and is it a foundational one?

Consider a pressure-actuated valve:
|----|    --
| P |====/o/  <--->
|----|   --

The pressure P in the tank moves a plate with a hole o in it left and right, and this hole cuts/allows flow through a pipe (like the early valve example). This is a general amplifier model that can be applied to other upconversion processes. At first glance this is one-way: the position of the valve affects the flow, and the flow doesn't affect the position of the valve; the valve could be said to control the flow. In reality there will be a finite coupling between flow and valve, at least from Venturi effect causing a lower pressure at the small-area aperture {Venturi effect can be interpreted as a directionality of pressure in a moving particle} thus a faster/slower flow will exert some small force on the valve position. But with more clever valve and flow design it seems we can take the ratio of output force to valve force to be effectively infinite, coupling information from one valve to many particles. In a real sense the valve doesn't care about the flow at all: the portions where it blocks flow exhibit 100% information exchange with the valve material (elastic collisions) and the portions where it opens the flow exhibit 0% information exchange with the valve (valve doesn't know whether there even is any flow, it simply doesn't participate in exchange). In both cases no energy is transferred by the valve, so it may be more logical to argue that entropy rise is a function of the flow, not of the valve. In this way, paradoxically, the flow "doesn't care" about the valve either! {*we* care, because we can tell the difference between *potential alternatives*, what is and what could have otherwise been. 
But the flow just evolves according to the system it sees, valve or no valve.} The valve is not an effective measurement device because it doesn't actually interact with the flow, going from 100% to 0% exchange both of which are energy-preserving (and entropy-preserving, and information-preserving ultimately) - this is eerily like a quantum "black box" measurement, where carrying out a "measurement" but then immediately carrying out an operation which "destroys" the classical information of the measurement makes it impossible to get information out and restores the quantum particle to its initial state, negating simplistic "wave collapse" interpretations {if the valve *did* care about the flow, it would no longer be an effective amplifier, sapping information/entropy from the output and making it affect the input - resulting in a partial information exchange.}. But then we must conclude the flow doesn't care about the valve - indeed if we equalize pressure on both sides of the valve there would be no flow and no influence of the valve. What the flow cares about is the presence of a pressure gradient in the system, which the valve can 'show or hide' depending on valve status; further evolution in the flow (turbulence, throttling), which is energy-dissipative, depends on the pressure difference and flow media properties, not on the valve. So, what the valve does is "set a course" for time evolution of the connected system. Two systems starting in nearby states and made to evolve along slightly different courses will over time become arbitrarily far apart (like chaos theory), and this is what gives an amplifier its resolving power [f2]. 
The one-way appearance comes from the irreversibility of energy dissipation in some part of the system: the amplifier/valve, determining the path of dissipation, can do so with only minimal energy expenditure, and we interpret the evolution of the system as "information" by comparing it to *potential alternatives* which don't actually exist but could exist according to our understanding of reality. In the same way, a rudder on a boat that is locked in place at a certain angle sets the boat's travel, yet the boat's travel doesn't affect the rudder. The rudder is not energy-dissipative as it is locked in place; what it does is set the boat on a given trajectory that is also affected by boat motion (engine)/wind/currents. Over time the effects of different rudder angles become evident, given that there is energy dissipation (ie boat not standing still). Forcing a boat along a specific course does not change the rudder position (such forcing is, rather, made easier or harder based on rudder position), while letting the boat travel with a given rudder position does change the course that the boat takes. What is achieved is not any energy nor entropy difference but a difference in what, of the countless ways for a system to evolve, actually takes place. It is like a rotating arrow road post, where travelers come along and follow the arrow direction they see. Rotating the arrow takes minimal effort, its impact is only meaningful if travelers heed it, and its influence in due time can be immense as it determines how travelers use/apply/direct their energy on their journey. Thus even if two travelers expend the same energy and travel the same distance, they may find themselves very far apart unless they were initially given a similar *direction* by the arrow. {Thus, "directed" processes is an appropriate term.} This is indeed the essence of evolution [f3] - systems interacting and changing in time. 
It seems that for evolution to take place, there must be energy flow (ie entropy difference), thus in a sense entropy along with temperature *define* the notion of time for complex systems, and equilibrium systems are timeless/frozen.
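The "resolving power" claim can be illustrated with any nonlinear evolution rule. A sketch using the logistic map as a stand-in (the map and its parameter are assumed illustrations, not a model of any particular amplifier): two systems set on slightly different courses and evolved under identical rules end up far apart relative to their initial separation.

```python
# Two copies of the same nonlinear system, started a hair apart and
# evolved under identical rules (logistic map, r = 3.9 - an assumed
# chaotic stand-in). Their separation grows until limited only by the
# size of the state space: a tiny 'rudder' difference in the initial
# course becomes a macroscopic difference in the outcome.
r = 3.9
x, y = 0.4, 0.400001             # differ in the sixth decimal place
sep0 = abs(x - y)
max_sep = 0.0
for _ in range(60):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    max_sep = max(max_sep, abs(x - y))
print(sep0, max_sep)
```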

I will try to formulate this concept but it's a complex one that I don't fully have a handle on. Consider the situation of being locked out: I take the car key when leaving a car, and upon returning I can use the key to open the door. But if I don't have the key to the car, I have no way to get inside, but I need to be inside to get the key. So the progression Inside->Outside->Inside is allowed, but Outside->Inside->Outside is *not*. It is a strict physical disallowance, an infinite barrier, there is no way around it, and this made me think it could be a type of infinite coupling. So far I've used the example of a valve, where I believe the orthogonal dimensions are key, and the above example shows also the relevance of time evolution. Can the types of physical couplings be enumerated? I will first try to better understand the valve in a general sense. Looking at it as force loops, we have two:

 Flow ->  o

One loop handles the force exerted by the flow on the valve: this force enters the valve structure and the piping it connects to, eventually closing back on the source of the flow (pump?). The other loop handles the forces required to adjust the valve or keep it in place: this force also enters the valve structure and the loop is closed through any external controller of the valve. The two loops intersect in the valve at orthogonal angles so they do not couple or exchange information. The intersection is allowed because the valve atom has a well-defined (x,y,z) location but the forces on the atom can be independently balanced among 3 orthogonal dimensions: as long as F_x+=F_x- and F_y+=F_y- and F_z+=F_z- all at (x,y,z) there is no requirement for F_x=F_y=F_z, all these are independent. Despite all this there is an inherent asymmetry: dz=>dF_x but dF_x=/=>dz, where F_x is the force on the flow and z is the valve position and => means affects. In other words, changes in the control loop lead to a different path of evolution for the controlled loop, but changes in the controlled loop do *not* lead to a different path of evolution in the control loop (unless back-coupled). Why is that? What causes the asymmetry? The first requirement is that the controlled loop has a path of evolution that is affected by the valve. The second is that it cannot affect the valve's influence on its own evolution. This is the "infinite" of infinite coupling. How is it achieved? We are dealing with *potential* differences, at the particle level there is just information exchange and no notion of a larger system. The controlled loop can be altered between many potential paths of evolution by the valve. The controlling loop on the other hand has only one path of evolution which then cannot be altered by the valve. 
It may be important that the controlled side transfers energy through the valve while the controlling side does not, so a one-time change by controlling leads to a long-time influence on controlled, which could be considered a temporal coupling in its own right. Yet I can think of "one-shot" valves where the influence is also temporary, and still the asymmetry remains that one side affects another while not vice versa. What distinguishes controller from controlled, or to decrease confusion, primary from secondary sides of the valve? The primary side affects the secondary's evolution, but the secondary does not affect the primary's. Why? Consider the degrees of freedom: the ability to change a basic parameter. The parameters are defined by fields: we have 3D position and force, where the force drives the position; then we have 3D e-field and charge, and similar for magnetism, and similar for strong/weak fields as such. All of these are orthogonal and thus any pairing could be used as the "plunger" portion of the valve. A degree of freedom is the ability to change a parameter by the associated force: this is a direct coupling and inherently transfers energy between spatial extents. So the valve is set up such that the secondary flow does not have freedom in z - it cannot achieve necessary F_z to move z of the valve, so to it the valve is just another stationary/static feature just like the pipe walls which also constrain the y degree of freedom - F_y cannot affect y due to the strong pipe. The remaining degree of freedom is x, where indeed F_x can affect x if the valve is open. Note that no degree of freedom does not mean that (say) F_y=0, the particle-particle interactions in the pipe lead to F_x=F_y=F_z and a notion of pressure, but there is no freedom in y because it is bounded by a strong potential, preventing energy flow/dissipation in that direction. 
This is more general than just saying x and z are orthogonal - in a real valve the secondary flow will have a real F_z, but the valve is designed so its effect is negligible in terms of actually moving z - consider pneumatic valves in power plants/pressure amplifiers, which use very large diaphragms to generate high forces so the secondary F_z cannot compete - this still functions as a valve though not physically orthogonal. From the primary side, the same z parameter is easily varied by a direct application of F_z, so there is a z degree of freedom. In the primary, changing z does not change any degrees of freedom (it could, with a "locking" valve), while in the secondary, changing z changes the x degree of freedom, and the secondary does not have z freedom to change z by itself. As before, projection of a potential along primary and secondary directions will show a steep slope for the secondary and a shallow slope for the primary, and this is the cause of the asymmetry. In an ideal amplifier the slope becomes infinitely steep on the secondary and completely flat on the primary, and this gives rise to the term "infinite coupling".

Consider an electromagnetic relay: the m-field pulls the contacts together and when they are together the e-field can couple. Once again an asymmetry: the m-field influences the e-field coupling but the e-field coupling does not influence the m-field. The secondary (e-field) flow here does not have the ability to create a F_z to move the contacts together but it is affected by z whether it is coupled or not. If I write e-field = f(z) this is an artificial equality: the equal sign is really a time-based influence and is *not* bidirectional here because it does not represent information exchange as the two sides are not information-compatible. If I think of a potential barrier as a function of z:

 z=0   z=1   z=2   z=3    z=4..
					/\     ||
_____ __-__ _/-\_ _/  \_ _/  \_

So eventually there is an "infinite" steepness potential from the e-field point of view (though at z=1 the potential is not infinite thus there is a bidirectional coupling, however slight, between e and m fields at that moment. Maybe this fleeting coupling is behind conscious qualia?). E-field cannot exert an F_z because these are independent quantities - e-field controls charge transfer not location, to couple the two you could use an electron/charged particle - by charging the contacts indeed the e-field will lead to F_z but it will be much below what is necessary to move z so once again the z degree of freedom is limited. Whereas the m-field is set up to exert a large F_z so it has that degree of freedom, and thus constitutes a primary flow. The root of the asymmetry is that you can be affected by parameters that you don't have the ability to change, that are "constants" and "unalterable" from your view but variable and alterable from another view (that of the primary flow). Degrees of freedom will be indicative of the basic structure of information and are useful to fully enumerate it in that regard.

With the relay, I can connect the primary and secondary flows. With the primary, I know current <-> m-field -> z. With the secondary, z -> e-field <-> current. So coupling the two I have current <-> m-field -> z -> e-field <-> current...
Now it is a macro-scale evolving system, a loop coupled to itself, which thus creates quantized states and the ability to use them as memory. How fast will this system evolve? That depends on its size and the speed of light; this system is an elementary computer unit/oscillator. Consider a valve that is operated by air pressure:
----|  2 |
|1 |====| <--->
----|    |

Here the valve's F_z degree of freedom is fully open to the system itself. When the pressure in chamber 1 is higher than chamber 2, the plunger will be pushed to the right so that chamber 2 is sealed. When the pressure in chamber 1 is lower than chamber 2, the plunger will be pushed to the left so that chamber 2 is emptied. In equilibrium the system is not capable of exerting F_z due to balancing forces, though a statistical fluke could in theory lead this to self-actuate. When P1=/=P2, the system is able to exert F_z and thus exemplifies both primary and secondary characteristics: it is pushing the plunger on one hand, and on the other hand the motion of the plunger changes the pressure that is pushing it, by opening/closing flow paths. I don't know if this should even be considered a valve, but mechanisms like this (and say toilet flush/refill valves) should be handled by the theory. Needle valves are another example showing that dimensional orthogonality as such is not necessary but degree-of-freedom differences are necessary.
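A toy model makes the self-actuating behavior concrete. The sketch below reduces the device to one pressure variable with assumed fill/leak rates and switching thresholds (the hysteresis band is my addition, standing in for plunger travel - this is not a physical valve model); because the plunger's F_z degree of freedom is open to the flow itself, the loop oscillates rather than settling:

```python
# Toy self-actuating valve: chamber pressure P rises while the plunger
# is open (fill) and falls while it is closed (leak); the plunger
# toggles when P crosses assumed thresholds. All rates and thresholds
# are illustrative, not derived from any physical valve.
P = 0.0
valve_open = True
P_high, P_low = 1.0, 0.2       # plunger switching thresholds (assumed)
fill, leak, dt = 0.5, 0.4, 0.01
toggles = 0
for _ in range(10_000):        # 100 s of simulated time
    P += (fill if valve_open else -leak) * dt
    if valve_open and P >= P_high:
        valve_open, toggles = False, toggles + 1
    elif not valve_open and P <= P_low:
        valve_open, toggles = True, toggles + 1
print(toggles)
```

With these numbers the pressure never settles: the state that the plunger creates removes the very pressure difference that created it, a primary and secondary side rolled into one.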

When walking by the road outside, I realized the importance of rules of the road - I was walking on the sidewalk, but people driving would stop at red lights and crosswalks, and not cross the lines dividing the road into lanes. It is the latter I will consider - some lines on the road physically don't mean anything. They cannot stop a powerful vehicle from crossing them. But now set up a system with a human in a high amplification/control position of the vehicle evolution, and the human rules of the road, and now the lines can effectively keep the vehicle contained. In a long drive there are many turns and corrections required to safely reach a destination, and with the lines on the road the vehicle will stay in the correct lanes, because of the human (or AI) driver. It is in fact tempting to claim that this act requires consciousness - an attachment of a system's evolution to an arbitrary set of symbols. The very notion of symbols and ideas/thoughts/instructions is a defining characteristic of consciousness. In the absence of humans, the letters in a book are just arbitrary arrangements of atoms, that don't mean or do anything beyond elementary near-neighbor physical interactions. But with a human that has been taught to read, these letters take on the greater meaning of symbols - they are now coupled to potentially abstract and definitely arbitrary concepts, and can lead to changes in real physical actions. This coupling may well be the role of consciousness. For example I can read a recipe, and then take real physical steps, altering real system evolution, with an impetus being nothing but some atoms on a piece of paper arranged to make a specific visual pattern. So, through my conscious thoughts that pattern is physically coupled/entangled to real system evolution. 
Seeing a map of a new place, I make a decision to drive to some interesting location, a significant change incurred by only a few bits of 'information' on a computer screen, coupled with all my previous life experiences/memories in subconscious processing. Then what to make of physics, because such entanglement is a real physical phenomenon, since we are physical beings? High school physics sets up a space that has a single time variable, and where the focus is on objects acting and interacting (much in line with our inherent actor-action constructions - see chapters on self). But what do we observe? We can only observe stable systems - by definition, after a long enough time only the stable states remain. The universe then is exploring all possible systems and eventually the most probable (entropic) will remain. But because there are so many possible universe states, the evolution is slow enough that we have a sense of time progression. Of course, I will claim again that time progression is relative to the system experiencing it (speed of consciousness). Everything is dynamic, constantly changing and exploring new possibilities, searching for the most locally stable state, which is what we observe as it exists for the longest fraction of time duration and all our senses and machines perform at some point a time-averaging measurement. This is readily seen in action with humidity and water: water that appears as steady-state drops eventually migrates to vapor or condenses on cold surfaces, migrating towards lower vapor pressure regions. Invisible to the eye, the water always forms a vapor that 'explores' the environment nearby and migrates to a more entropically or energetically preferred state - but ultimately both must be "two sides of the same coin" as they determine what state is reached later on. 
For whatever reason, the universe makes it more probable for particles to explore nearby regions than far-away regions, and perhaps this very requirement is what slows down universal evolution so we don't reach the global minimum too quickly/efficiently. So some things - changes of the system - can be allowed or not allowed to happen, by blocking the ability of a system to explore low-energy states readily, always done by isolating/surrounding the mechanism of low energy transfer (with a spherical 4*pi boundary in 3D space). This goes right down to the quantum level - isolating is done by making energy exchange quantum-prohibited at some level. So, when I have a motor connected to a battery, there is nothing to stop the energy transfer from battery chemical energy to motor rotational energy, until the motor back-EMF matches that of the battery. As it were, energy wants to be shared equally between motor and battery (and the vast universal heat bath), just like the example of molecules on one half of a gas container eventually spreading to exist in the whole container entropically once a barrier to spreading is removed, and more fundamentally expressed as the equipartition theorem. If I put a transistor between the battery and motor, I isolate the ability of energy exchange/stable state search, because a few atom layers in a transistor inhibit/disallow the necessary ways in which electrons would explore the big-scale system. The more energetic (high-voltage) the electrons, the more difficult it is to block such exploration. Turning on the transistor, such exchanges are now allowed and the system can perform a wider search for a low-energy state, which starts the motor turning, and eventually transfers all energy into the most stable version - heat - such 'inefficiencies' being best avoided by disallowing heat-generating processes such as friction and ohmic heating - using good bearings and low resistance wires for instance. 
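The battery-motor exchange can be sketched numerically: once the transistor gate is opened, current flows and spins up the rotor until the back-EMF nearly matches the battery voltage, at which point the exchange stalls. This is a minimal sketch with made-up parameter values (battery voltage, motor constant, inertia), not data for any real motor:

```python
# Toy DC motor spin-up: energy flows from the battery until back-EMF ~ battery voltage.
V_batt = 12.0   # battery voltage (V) - assumed value
R = 1.0         # circuit resistance (ohm)
k = 0.1         # motor constant (V*s/rad, also N*m/A)
J = 0.01        # rotor inertia (kg*m^2)
transistor_on = True  # the transistor modeled as a simple on/off gate

w = 0.0         # angular velocity (rad/s)
dt = 1e-3
for _ in range(5000):                    # simulate 5 seconds
    back_emf = k * w
    current = (V_batt - back_emf) / R if transistor_on else 0.0
    w += (k * current / J) * dt          # torque = k*I accelerates the rotor

print(round(k * w, 2))  # back-EMF has nearly equalized with the battery voltage
```

With `transistor_on = False` the current is blocked, `w` stays at zero, and no search for the lower-energy state takes place - the few atom layers of the gate isolate the exchange.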
These components in turn play the role of conductor/isolator, keeping the energy of the battery/motor from exploring the wider space which includes the possibility of heat. Applying a mechanical brake to the motor now explicitly allows such exploration, and energy will readily convert to the high-probability heat states at the brake pad interface, slowing down the motor, since there are tremendously more (entropic) heat/momentum distributions at the atomic level than there can be distributions in a macroscopic battery+motor (effectively 2-element vs infinite-element) system. But the brake pad can only get so hot - what if I externally heat it to a very high temperature and then apply the brakes? Perhaps what will happen is a transfer of energy into the motor - a laser-like phenomenon in which a large number of high-energy states, exposed to the possibility of transfer into a lower-energy state, do so in a coherent manner - the coherence in a laser being in-phase light, while in the brake pad coherence would be accelerating the motor in the same direction it is already spinning - a negative stiffness spring as it were. Because of the huge number of heat states, though, perhaps such a "coherent motor" could only be feasible on a micro-scale and with carefully controlled heat interactions. Still, the laser and magnetron show that it is possible to transfer incoherent (DC) energy into coherent (in-phase) energy - and to do so efficiently, which sounds like an entropy decrease but is allowed because at some level the DC energy will be drained to the point where reverse conversion is impossible.

What of fusion? While energetically the states very much prefer the He arrangement over 2H, which lets fusion take place, entropically there is a joining instead of the fission splitting, so this is more difficult to achieve as only a few states are low-energy as opposed to many possibilities in fission. In fission, the single atom splits into energetic (fast moving/vibrations) components which can go anywhere in 4*pi space and then through collisions with other atoms transfer their concentrated energy into well-spread heat energy; all along this trail the energy spreads out and out in space, reaching increasingly numerous and high-probability states, meaning the energy "prefers" to be turned into heat as there are many more ways for it to be heat than for it to be concentrated into bits of atomic fragment motion, which is still more than for it to be concentrated inside an individual nucleus. In fusion, an energetic neutron is emitted whose kinetic energy could be captured, but nominally there is no clear way for the unified nucleus to give up its energy - and energy does not flow up a concentration gradient. Beyond the problem of extracting energy after reaction, there is a need to reach the reaction state in the first place. There are natural barriers that keep this reaction from happening - the electric repulsion of the outer shells of atoms keeps the nuclei from exploring that low-energy state. And even once a plasma is used to force the atoms to strike each other and react, two nuclei combining into one implies energy concentration. This is entropically forbidden so such a combined nucleus will readily split back into its individual nuclei, yielding no net energy unless a device is designed to extract energy out of the combined nucleus immediately, minimizing the time that energy needs to be concentrated. What fusion creates is not just an energetic nucleus - but a nucleus energetic enough to split right back into its constituents. 
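The energetic preference for the fused arrangement can be quantified by the mass defect. As a numeric aside, here is the standard D-T reaction (I use D-T rather than the two-hydrogen case above because its numbers are the textbook example; the atomic masses are assumed from standard tables):

```python
# Mass defect of D + T -> He-4 + n, in atomic mass units (amu).
AMU_TO_MEV = 931.494             # energy equivalent of 1 amu in MeV
m_D, m_T = 2.014102, 3.016049    # deuterium, tritium
m_He4, m_n = 4.002602, 1.008665  # helium-4, neutron

dm = (m_D + m_T) - (m_He4 + m_n)  # mass lost to tighter nuclear binding
Q = dm * AMU_TO_MEV               # released energy (MeV)
print(round(Q, 1))  # ~17.6 MeV, most of it carried off by the energetic neutron
```

The released ~17.6 MeV appears almost entirely as neutron and alpha kinetic energy, which illustrates the extraction problem described above: the energy leaves as a few concentrated fast particles, not as ready-made heat.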
All steps along a chemical reaction take place by finding most probable states - one is no "better" than another, so a combined product could well revert to its original source - after all, it has exactly the right energy and momentum to do so, unless that energy is quickly converted into a less-concentrated form which "locks in" the reaction products and sets the direction of the reaction. This is what gives the illusion of one-way information transfer, and why amplification to achieve this requires energy dissipation. A reaction could theoretically be run in either direction, with proper control of energy flows and of which state is made more probable. Conversion of energy into radiated forms happens in both fusion and fission - the lack of a pathway for an energetic nucleus to convert its energy into heat forces the nucleus to split its energy (as it now becomes energetic enough to break out of its own nuclear-force barriers) into radioactive bits and pieces, and only those are then allowed to turn into the heat that we desire. By not capturing the energy at the point of nuclear splitting (very high T) we lose significant efficiency compared to what we could have gotten, and we also get radiation damage, but this difference guarantees that any reactions that do happen will be indistinguishably one-way. The atom doesn't know that we want heat and not messy radiation - we just dump energy into it until it breaks and then hope that eventually friction will play its role - it works but is not elegant. It is like over-charging a battery until it catches fire, then using the fire as heat energy - it works, but we could just discharge the battery directly if we knew any better.

As I argued earlier, energy tends to readily take on states that are spread out - not due to some "want" but simply by probability, since if state changes are allowed the most numerous ones will be most likely to be occupied (the premise of entropy). But how do we make use of this in practice - in real power plants? Consider the case of steam passed through a turbine, accelerating it and losing pressure and temperature in the process. On a big level, the heat energy is allowed to spread - from a little bit of steam to a lot of cooling water and then eventually into the atmosphere and space as a photon field (ultimate sink). But somewhere along the line, we are able to limit/control the spread, and thus force the energy to also take on rotational+electronic forms, which themselves have a much longer path to their ultimate thermal spreading. Why can't all the energy be made into electricity? Why can any of it be made into electricity? In the absence of a turbine, the energy can spread by direct thermal contact, and the spreading will thus be more effective - the turbine poses a barrier to spreading and thus reduces the rate of energy transfer from hot to cold. A closed valve would block the spreading completely - so we have the case of no turbine=unlimited spreading=no electricity generated; turbine=limited spreading=electricity generated; closed valve=no spreading=no electricity generated.

---	---	---
 _	 \	 |	as in variable-pitch turbine blades, from left to right - fully open, turbine active, turbine closed
---	---	---
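The three cases can be captured in a toy model of my own devising (not a real turbine equation): let the turbine present a back-pressure p_t against the total pressure drop dP, let the flow fall linearly as back-pressure rises, and take extracted power as flow times back-pressure:

```python
# Toy model: extracted power vs turbine back-pressure.
dP = 100.0   # total pressure drop from high to low side (arbitrary units)
Q0 = 1.0     # free-flow rate when the turbine offers no resistance

def power(p_t):
    """Flow drops linearly as back-pressure rises; power = flow * back-pressure."""
    flow = Q0 * (1.0 - p_t / dP)
    return flow * p_t

print(power(0.0))      # fully open, no turbine: free spreading, zero power
print(power(dP))       # closed valve: no spreading, zero power
print(power(dP / 2))   # partial blocking: maximum extraction
```

Both extremes yield zero, and the extraction peaks between them - electricity is generated only by *limiting* the spreading, not by allowing or forbidding it entirely.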
Within the turbine itself there is an asymmetry - looking along a line from high to low pressure flow, the turbine blade collision points are constantly moving away from the gas molecule so a collision will tend to slow down the molecule towards zero in the lab frame. Looking from low to high pressure, the points are moving towards the gas molecule, so collision will tend to accelerate the molecule towards infinity going into the low-pressure area.
 ----^---^---^----     -----------------
H ...\...\...\... L = H    .-> .-> .->  L
 ----^---^---^----     -----------------
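The asymmetry sketched above follows from elastic reflection off a moving wall: in the wall's frame the molecule simply reverses, so in the lab frame its velocity becomes v' = 2u - v for a heavy wall moving at u. A minimal 1D sketch (the speeds are arbitrary illustration values):

```python
def reflect(v, u):
    """Elastic 1D bounce of a light molecule (velocity v) off a heavy wall moving at u."""
    return 2.0 * u - v

v = 500.0  # molecule moving toward the blade (arbitrary units)

# Looking from the high-pressure side, the blade surface recedes (+u):
print(abs(reflect(v, +100.0)))   # 300.0 -- the molecule is slowed toward zero

# Looking from the low-pressure side, the surface approaches (-u):
print(abs(reflect(v, -100.0)))   # 700.0 -- the molecule is sped up
```

Repeated bounces off receding surfaces drive the molecule's lab-frame speed toward zero, while bounces off approaching surfaces drive it upward without bound, matching the two views through the blade row.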

This is curious: the molecules prefer to go towards the area where they move slowly, so there is a time effect. Time stability is the only definite criterion: things that stay constant for a long time are more stable, and more numerous in time, just like spatially spread states are numerous in space. Thus these slow-moving states are temporally more likely to be encountered, and we can say that particles will automatically 'slow down' to a common speed but will not spontaneously speed up relative to each other. I argued that this mechanism is also at work in general-relativistic gravity: looking at an absolute clock, particles outside a gravitational potential have a faster clock rate, thus for 1 second of the particle there are x absolute clock ticks outside the potential and y inside the potential such that y>x, or the deep-potential state is more numerous in time, ie exists for a longer absolute time. This is in accord with the idea of "forward time travel" - an astronaut next to a black hole can watch the external universe be accelerated in evolution with regard to him (as dramatized in the film Interstellar). This makes gravity attractive - objects oscillate closer and farther, but the time warp biases this so the appearance is that more time is spent closer (and closer and closer, so independent of velocity). This makes ensemble states also attractive - molecules will tend to enter a distribution wherein they can be just as slow as the other ones (while conserving momentum). So then what about a fan - it speeds up air, creating a high-energy state that is not temporally stable? The fan can only do this because it is driven by energy - electrical energy in this case - which reaches a temporally stable (slow) state in the process of driving the fan (the electron oscillations in the AC motor are slowed down by the loaded fan). 
This process works out such that the slow-down of the electric field exceeds the speed-up of the air "field", so the net effect is with time forward evolution the air molecules speed up while a larger quantity of e-field slows down, so overall energy still spreads temporally and spatially. Making the motor weaker, such that it doesn't slow down the e-field as much, also makes the fan slower so it can't speed up the air as much. A perfectly conserving coupling, that has net spread of zero, is temporally indifferent - it does not have a preference to operate one way or another in time-forward evolution. To get a desired motion, a coupling must allow a net nonzero spread, and the nature of this spread will determine the direction in which the machine/device operates. An efficient machine manages to operate with minimal spread overhead, yet we see that larger spreading means more "motive force" to drive the reaction/machine - to the limit of getting the machine's highest possible output power when 50% of power is used in transmission/delivery of the energy states (consider the most power I can get out of an electrical plug: this will be achieved with a resistor R = measured resistance of the plug terminal-to-terminal, perfect impedance matching).
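The plug example is the maximum power transfer theorem: load power P(R) = V^2 R/(R+r)^2 peaks at R = r, where exactly half the power is dissipated in the source/delivery resistance. A minimal numeric check with assumed values for the plug voltage and wiring resistance:

```python
# Maximum power transfer: sweep load resistance against a fixed source.
V = 120.0    # source voltage (V) - assumed
r = 0.5      # source/wiring resistance (ohm) - assumed

def load_power(R):
    """Power delivered to a load R across a source with internal resistance r."""
    I = V / (r + R)
    return I * I * R

loads = [0.1 * k for k in range(1, 101)]   # candidate loads, 0.1 .. 10.0 ohm
best = max(loads, key=load_power)
efficiency = best / (r + best)             # fraction of total power reaching the load
print(best)         # matches the source resistance r
print(efficiency)   # half the power is spent in transmission/delivery
```

Raising R above r improves efficiency but lowers delivered power; lowering it below r wastes power in the wiring - the 50% point is precisely the "maximum motive force" limit described above.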

Information is an ultimate conserved quantity in space. Mass can change into energy, but information cannot change into anything; it is unalterable and defines the content of space. Space follows the law of information in - information out = information stored locally/bounded. Space and fields are continuous, and information in space must spread. Particles are self-binding information, continuously circulating around themselves and thus staying spatially localized. Given some information in a volume, over time what leaks out is called unstable and what stays inside is called stable. Thus field oscillations are unstable and particles are stable, in space and time. The conditions for a stable bound information state are strict+quantized, so particle states are quantized. Any aspect of a particle that does not satisfy the quantization is by definition unstable and in time will spread out from the particle, leaving behind only the stable "quantized" states to observe experimentally. Particles are unstable when transitioning between quantized states, and in this time emit information (or absorb it) until a stable (quantized) state is reached - not because they seek or favor a stable state, but because the definition of stable is what's left behind after everything unstable has been radiated/spread out. Given enough time, the only remaining information has to be stable thus quantized (or zero and spread out). We are experiencing the slow conversion of mass to energy, or stable to unstable information, which happens because probabilistically all changes into an unstable state are irreversible as the information spreads into the cold cosmos - unbound/unstable information can explore more and more spatial configurations, so its return to the original bound configuration becomes highly unlikely to the point that we mostly ignore it. The spreading of information is the life force that drives all things. 
Consider the earlier example of a power plant turbine: it generates electricity by expanding steam in a controlled way. Why should it be possible to get the spreading of steam energy to allow a spreading of electric energy? Information is always conserved so I argue the total spreading is always the same. The turbine converts potential spreading of steam energy into spreading of electric energy. If I remove the turbine, the steam will expand freely, creating great turbulence and very complex sets of molecular interactions, the resulting swirls and structures being a sign of the life force. With a turbine, some of this life force is siphoned off from the steam turbulence world, the expansion becomes more orderly, the complex swirls and turbulent structures disappear, but in turn reappear in the electrical grid. My computer chip has nanoscale features which precisely control what sort of interactions are allowed and not allowed, so instead of the free-space turbulent swirls of steam molecules, I control the swirls of the electrons at a nanoscale level to do something useful for me. But the overall complexity and swirly nature, overall life, is conserved. Either I get steam swirls, or a CPU computation, but not both. The CPU computation being more "useful" is the result of me knowing at the nanoscale what paths are available for the swirls (and so, Maxwell's Demon has no problem with this), while the free-space steam swirls mean little to me. But who's to say they're not useful, or not "alive" - since they dissipate energy like all living beings? In fact, there is an unimaginable amount of information dissipated all around me in "not useful" forms like heat, so easily invisible and not considered as information at all. 
But it is there - when I switch on an electric heater, the atoms in the heating element experience very complex interaction timelines, each phonon and electron interaction having its own specific impact just at that time and place - information that I don't know or use but that is still there and still tremendous. If I designed the heater at an atomic scale to be special like a CPU, I could perform impressive calculations with all this information spreading, but I only care about temperature and not the precise way it is reached, so I ignore/never consider just how much processing that heater is doing - because the processing is on a problem I don't know and towards a solution I won't understand. And the processing taking place in the heater represents processing which was prevented from taking place in the steam as it passed through the moving turbine - which pre-emptively slowed down molecules and sped up electrons instead. Perhaps information spreading is what creates qualia; then this is a world of feelings and experiences, not abstract physics. Then my consciousness is created by very carefully controlled information spreading from chemical/molecular-scale stable information stores along molecular/cell-scale pathways controlling the nature of the "swirls" to solve for an optimal way for me to continue existing/surviving. The consciousness is spatially bounded by infinite couplings and temporally bounded by infinite couplings in time (ie emergence and decay moment-to-moment controlled by an external infinite-coupled system, like a crystal oscillator for a CPU clock), which has allowed evolution to fine-tune the qualia of the system (qualia = paths of energy-spreading) such that the most entropically probable state that will be reached by the spreading of information leads to the survival of the organism. The spreading of energy along these paths leads to my real qualia and experiences moment to moment. 
We are surrounded by other systems whose consciousness and qualia may be tremendously more complex, but which are not necessarily self-aware or acting in concern for their survival: turbulent flows, storms, sunspots, heaters and boilers. As a kid I had the intuition that moving things are alive: trucks/cars, electronics, machines. The adults convinced me I was wrong, that only humans can feel pain, and only animal-like things are alive. Now I am led back to my more honest childhood appraisal and I feel it to be more and more correct, though with technical details now more defensible and specific (spreading of energy is required for life and qualia; stable objects do not experience qualia or time).

So can large systems be conscious? Is it legitimate to call organizations/governments "living things"? What do these living things experience as qualia, if they are living at all? Infinite couplings are key, as they allow *influence* without direct *interaction*. Schematically:
World ~~(photons, sounds)~~> {Sensory Organs (eyes, ears)} --> |infinite coupling| --> [My conscious experience+qualia, computation of optimal solutions, conscious actions/choices]
World <~~(outputs)~~ {Muscles, body regulation} <-- |infinite coupling| <---------------|   ^--[Thought experiments, talking to myself]--|  ^--[Memories so I know I am me]
Now consider an organization with two people affecting each other:
[ Person 1   ] --- || ------ [ Rest of world ] ----- || --- [ Person 2   ]
[ Conscious  ] --> || --> [    World that      ] --> || --> [ Conscious  ]
[ Experience ] <-- || <-- [ affects each other ] <-- || <-- [ Experience ]
These 2 people form an *interacting* system/loop that has infinite couplings to the rest of the world. So a consciousness could potentially be emergent in this bigger system: [f4]
[ Rest of world ] ---- || ---- { [Person 1] <--> [Person 2] }......... emergent conscious experience?

I still don't feel any of these emergent qualia because what I am is defined by my memories which are infinite-coupling bound to my experience only, but it would not surprise me if the emergent consciousness has unique qualia of its own and acts as a living being. After all I have the same experience with my brain cells: they are pretty complicated entities that interact with each other and as a result I experience a vivid set of qualia, yet I have no idea what it's like to be a brain cell. Could it be that I take on the role of a brain cell in a large organization's qualia experiences? The organization would also have vivid experiences, but no idea what it's like to be me. Then what I see and how I interpret the world is only a mental representation, which happens to provide useful (to me) clues about my surroundings but does not actually represent the surroundings. I see myself as sitting on a chair in 3D space surrounded by various objects, but the reality is I am surrounded by gas molecules and the photons I see are inside my eyes not "out in 3D space". My experience of colors is more dependent on evolution than on underlying physics - the white wall next to me is actually a very complicated spectrum arrangement of atoms, with no inherent qualia attached to it - it simply exists and responds to given experiments in a given way, the qualia of "white" or "hard" are all due to my brain structure and are not properties of the external world. It is very tempting to just see the world as exactly what my qualia tell me, but I know that it is tremendously more complex and rich, all of which goes mostly unnoticed and even actively ignored if I rely solely on my qualia experience. The sky is not "blue", it is our brains which are capable of generating "blue" qualia, and which evolution has somehow made associated with sensing a given wavelength by the eye. 
The description of physical reality must be in terms of numbers and physical relations, whereas the richness of qualia and our observations of reality rely on the complexity and nature of our senses - our brain as well as our scientific instruments (different instruments will "see" different results even on the "same" experiment).

Particles inside a given volume can emit a field that other particles outside the volume will feel. The typical view is of the field as a stable entity, but I define it as unstable and tending to zero, so maintaining a non-zero field over time requires continuous emission by the bound particle. This seems like it should not conserve energy, but it does - as we can only extract energy from field differences in space or in time, a non-changing field value is fully conservative. One could argue that since the 1/r^2 field is stable around a particle, it should also be considered a stable feature; indeed, perhaps this field is a necessary prerequisite to even have the stable bound particle, as it somehow helps with binding it. Still, to reconcile this field with the energy spreading/dissipating field we usually experience, it is logical to interpret the 1/r^2 potential as a dynamic entity, just like particles themselves are dynamic even though they are stable. Information conservation is trickier to apply here, as the 1/r^2 potential is information - and if it is continuously spread and emitted, this would imply an "information source" at the particle, which I have claimed to be impossible by definition. Perhaps information is not conserved in this way, but that makes me wonder why we can't just make free energy - or perhaps I need to better define what exactly constitutes information, so that whatever maintains the 1/r^2 potential is not it.

[f1] or maybe if light were fully isotropic there would be no way to read the gauge so no information is coupled. This may be a deeper indication that the minimal extent of the 2-way exchange even in upconversion processes is based on the information transferred, giving a physical unit of information.

[f2] as before, there may well be a lower limit on resolution based on minimized couplings of information units, but this is small by all engineering measures - still cannot be neglected in this theory but I don't have good evidence to elaborate on this yet

[f3] computation, for one, is done by transistors connected to each other, every step a directed process.

[f4] it is fitting here to consider how the leaders of a group (ie CEO, general) also have an infinite coupling with their workers/subordinates: they can send out mass messages and orders, but workers cannot readily affect them in any way: just like the conscious brain is shielded from the rest of the organs by infinite couplings. This in fact defines the distinction of leader vs worker, conscious vs not.

«« 5. Information Theory    6. Information Couplings    7. Conscious Systems »»