Deterministic World

Part 1. Physics

2. Observers, Objects, and Existence

Scientific experiments - what is meant by experiments? I am not discussing word meanings/ontological concerns (what is art?) but rather a physical basis for what we can definitely call an experiment when seeing it as an independent observer. An experiment is designed to measure something, thus the concepts of experiment and measurement are closely linked, yet an experiment is not just a measurement. It is a series of specially chosen measurements designed to uncover an underlying pattern or rule, or to match/refute theoretical predictions. Individual measurements can enumerate different properties, which is useful and necessary, but an experiment is designed to find some underlying rule or pattern by which the properties can be compactly explained. In an experiment it is never adequate to have a single measurement - because this makes it impossible to assign an uncertainty to the determined quantity. In an experiment it is also necessary to be able to control the outcome, to prove that changes are due to a proposed mechanism and not random. This interactive nature distinguishes experiment from observation. The goal of the experiment is to provide information that fits within the experimenter's framework of knowledge, so the measured variable must be clearly defined. An experiment is a set of measurements, designed to evaluate a specific aspect of physical processes in a way that can explain why these processes occur. In this set, the influences (variables believed to cause temporal changes) proposed by theory are varied over some range of interest in all their degrees of freedom while the outcomes (variables of interest to the examiner, at a time after influences have taken effect) are accurately measured as a function of the influences. Generally, we don't know what the influences for a certain outcome are, or even whether the outcome is a logical aspect for which to be evaluating influences. 
The only way to answer the latter is by appeal to theory and mental frameworks of how we explain the world, so experiments cannot be seen as independent of theory. Theory guides experiment in pointing out the things we can call "outcomes" and suggesting that we can control them via an earlier (temporal) influence. Once we have an idea of an outcome, we can choose among infinite potential influences on said outcome - how are we to continue? Even in misguided efforts, theory determines what influences we test, as the potential scale is (again) infinite. The basic underpinning of this whole framework is causality: if a system is set up in a given way at time 0, it will evolve by time 1 to another given state that can be fully predicted from adequate measurements at time 0. If this is not true, we can never claim to observe any cause/effect relations in our experiments, only potential relations - but this becomes tedious, and science has been remarkably successful while ignoring non-causal influences.

So, once we acknowledge causality and have an idea of an outcome and its influences, the experiment can take place. In an ideal experiment there is a 1:1 mapping of influence and outcome. The measurement process is set up as a low entropy system, which "naturally" increases in entropy and carries the measurement result to the "entropy sink" of the measuring device. So, the measurement itself is an 'automatic' physical process. The art of the experimental measurement is in ensuring the measurement gets only the desired information by consistently setting up the measurement process in a specific way. Thus the 1:1 mapping is specifically between measured variables and varied variables - one must not exceed the other. If there are too many measured variables, some may be affected by the non-varied variables and thus outside the predictive ability. If there are too many varied variables, it will be impossible to separate their effects, or in a better case, there will be unnecessary repetition. Practically, we don't always know enough about either the influence or the outcome to get a 1:1 mapping by design (in fact all of our "new" discoveries come from finding unexpected influences or outcomes and setting about making them into experimental variables - this is how we can now take it for granted that (say) radio waves exist). It took us thousands of years to accept that influences are of a limited and local nature - the temperature of water in a pot depends strongly on the temperature of the pot itself, less so on temperature of air above the water, less so on weather outside, and effectively independent of the moon phase or star alignment. So in the absence of knowledge of possible influences this is a good starting point - what is spatially and temporally 'close' to the outcome? Those things are potential influences. The rest of the world is not a potential influence. 
Drawing a sphere around an outcome and tracking what goes in/out will (along with full knowledge of what's inside the sphere) allow us to predict the outcome at time 1 as desired - because a sphere blocks off any spatial influences while the full knowledge of internal contents blocks off any temporal influences. Whether this sphere is explicitly recognized or not, it exists in all experiments beyond the boundary of the influences we can control reasonably, and ideally we would track all influences passing through the boundary but practically we assume that such influences do not exist (or find/set up a situation in which this is true). We must make this assumption because the only way to be sure of no external influences is to measure all possible (infinite) external influences - easier to do if we assume a boundary surface but still a heroic undertaking - and even if we do measure these influences, we have to apply the same strict standards to *those* measurements, and so on, requiring eventually knowledge of the whole world. In a more pragmatic sense, the relation between local influences and outcomes is so clearly demonstrated by experiment and so elegantly explained by theory that we effectively take it as a given that with adequate effort all "external influences" can be eliminated from affecting the outcome. [f2]

With knowledge of the outcome to measure and of all potential influencing variables that affect it, experimental design can proceed. This knowledge is never absolute, imparting an element of luck - experimental success depends on finding the correct influences and outcomes, but such success vindicates the theories that led to the initial determination of the influences and outcomes - whether a hunch or a full-fledged academic work. The goal of the experiment is to 'prove' (in an ideal sense) that the influences control the outcome in a given way - without such a proof the experiment is just a series of measurements without a cogent framework to interpret them (what was kept constant/varied and why? this is already evidence of *some* theory at play). First it must prove that the influences tested are the only cause of the outcome (realistically 'only' changes to 'largest'). Rather than exploring the full space of all influence variables, real experiments place the vast majority of influences as 'control variables' which stay unchanging in all measurements. The distinction between control variables and non-influencing variables is that if control variables change, they will affect the outcome, so they must be kept explicitly constant, not just ignored. Then the small remaining set of influences is taken as 'independent variables' that are varied over a practical range as the outcome is measured. An effective experiment requires that it be possible to predict the outcome from the independent variable with certainty. Thus the independent variable must be varied and the outcome measured multiple times. Practically though, we don't know/can't control all possible influences, and to convince ourselves that the measured results are worthy of a theory the experiment should repeat the same independent variable multiple times and ideally show the same outcome - thus "proving" the effectiveness of the causal relation. 
Second, it must prove that the influences cause a given outcome in a specific manner and consistently. This requires the measurements to have enough descriptive power (taking the above notion of repeatability into account) to be able to distinguish between different influences. So, at the least, an experiment must have a notion of reproducibility, and the spread in 'same-initial-conditions' outcomes must be smaller than the difference in outcome due to a (known/measured) variation of an influencing variable. This is a notion of *resolution*. To do better, we should take into account real factors - time, money, and desired knowledge. A good experiment will uncover the most knowledge per (time*money), or some such. Naively taking knowledge to be positively related to the number of influencing variables varied (as opposed to kept constant), we want an experiment that can control many variables and make quick measurements, for cheap, with sufficient resolution to make us confident we know what's going on. An advertisement for a hotel claims "it's good to not be home", playing on the usual saying "it's good to be home". Yet if it's good to not be home and it's good to be home, then it's always good, and the whole concept of good loses its meaning; the word is stripped of its descriptive and discriminatory power. The point of a measurement is to be specifically applicable and descriptive, thus it must be precisely defined and only have limited applicability (describing what it is, but also what it is *not*).
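The resolution criterion above - the spread of same-conditions outcomes must be smaller than the change caused by the independent variable - can be sketched numerically. The following is a minimal illustration, not a real experiment: the 'apparatus' (outcome = 2x plus noise) and all numbers are invented for the sketch.

```python
import random

random.seed(0)

def run_trial(x, noise=0.05):
    """Hypothetical apparatus: outcome = 2*x plus uncontrolled noise.
    All other influences are treated as fixed control variables."""
    return 2.0 * x + random.gauss(0.0, noise)

def experiment(levels, repeats=10):
    """Vary one independent variable; repeat each level to estimate spread."""
    results = {}
    for x in levels:
        outcomes = [run_trial(x) for _ in range(repeats)]
        mean = sum(outcomes) / repeats
        spread = (sum((o - mean) ** 2 for o in outcomes) / repeats) ** 0.5
        results[x] = (mean, spread)
    return results

res = experiment([1.0, 1.1])
effect = abs(res[1.1][0] - res[1.0][0])   # change caused by the influence
spread = max(res[x][1] for x in res)      # same-conditions repeatability
print(effect > spread)                    # resolution criterion: effect must exceed spread
```

If the printed comparison fails, the experiment cannot distinguish the influence from its own irreproducibility, no matter how many trials are run.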

The information content of an experiment, its physical reality, is independent of the words/symbols we use to describe it; it just is. Using models helps us explain and find patterns, but models are always open to questioning and reinterpretation as long as they satisfy experimental information. The information we get also depends on what the experiment is designed (consciously or not) to measure. We make progress by finding common/singular points - for instance, "voltage" does not depend on the type of battery or power supply; the effects are the same in either case.

Through the earlier examples of reversible mathematical operations and inability to change one real number to another without requiring a second one, I was led to a law of information exchange. Usually statements would not be called laws without significant prior verification, and in that sense my approach is reversed: I will call it a law due to its intellectual elegance and see whether it leads to reasonable physical predictions matching real experiments. As you might see by the end of the book, I think the name ends up being well justified. For now, I will apply the law to scientific experiments - at first glance an obvious example of unidirectional transfer of information from experiment to experimenter.

Applying the law of information exchange, I reason that all experiments that determine the information in a system must also perturb the system, and furthermore that the more information it is desired to collect, the more the system must be perturbed. Or, any experiment in which an observer gains information about a system (observes the system) will also give the system information about the observer. It is a common physics blunder to "imagine a universe with just two particles", because by just imagining this universe (as a human observer/creator) we have brought in our own biases: a notion of time, distance, discrete particles, and energy for instance. So we have already supplied significant information (including math itself) to this 'simple' model, and we will further supply information when requiring the particles to be at a particular separation to see how they evolve. So, in simulating this toy universe we learn how the particles move, but we had to supply the rules and initial conditions for those particles to move in the first place, so the toy universe also 'knows' about us and what we seek to calculate. We are not as much of a unidirectional observer as it seems at first, in fact becoming an integral part of the toy universe by solving its equations in our own universe and feeding the results back into the toy universe to cause its time evolution.
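A sketch makes the point concrete. The toy 'two-particle universe' below is hypothetical in every detail - the time step, the force law, and the initial separation are all information we must supply from outside before the toy universe can 'do' anything:

```python
# Everything below is supplied by us, the external observer/creator:
# a notion of time (dt), of space (pos), discrete particles, initial
# conditions, and even the attractive force law itself.
dt = 0.01                      # our chosen unit of time
pos = [0.0, 1.0]               # our chosen initial separation
vel = [0.0, 0.0]               # our chosen initial velocities

for _ in range(50):            # our universe's clock drives the toy one
    r = pos[1] - pos[0]
    f = 1.0 / (r * r)          # our chosen (inverse-square) attraction
    vel[0] += f * dt
    vel[1] -= f * dt
    pos[0] += vel[0] * dt
    pos[1] += vel[1] * dt

sep = pos[1] - pos[0]
print(sep)                     # the 'result' is inseparable from all we put in
```

The particles duly fall toward each other, but only because the simulation's rules and initial state - the information the toy universe 'knows' about us - made them do so.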

We can apply the same thoughts to the big picture of our universe: if there is a creator, can we observe it? Can the creator observe us? Consider a creator that knows everything about the universe, being able to observe it at will, unidirectionally. We get back no information about the creator, like standing in front of a 'one-way mirror' (the mirror is of course not truly one-way but just seems like it to our eyes - I will maintain that all physical information transfer involves exchanges; the creator on the other hand may have a 'true' one-way channel). The creator may use this one-way observation to make alterations to our universe, to modify it at will. But as soon as it decides to modify our universe, a bidirectional exchange will have occurred: we will be able to observe the modification, and systematically treat it as a law of physics (or a violation thereof). In acting on our universe based on unidirectional observation, information is also transmitted to us about the creator and perhaps its universe, thus we end up with an information exchange.

What if this creator only observes, and takes no measures to modify/alter our universe? Then there is no physical effect of the observation, no reason for us to even think that observation took place. Crucially, even here information exchange ends up being bidirectional. Just as we found that in the system of equations A*x=y we supply the same information in y as we eventually get out in x, the creator which does not in any way alter the universe cannot get any new information out of the universe other than what it had originally put in. Only the format/the perspective/the interpretation of the underlying information changes as the universe evolves. So we still get information about the creator and its universe in the form of how our universe works and was created. The unidirectional transfer of information has been an illusion, as the creator has already supplied all the information that it may ever expect to observe if it avoids modifying our universe. As soon as it modifies our universe though, another bidirectional exchange of information will occur and we will learn more about the creator, just as it learns more about our universe. If the total amount of information in our universe is postulated to be conserved, then the law of information exchange must also apply to our interaction with a creator.
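The A*x=y argument can be restated in a few lines of code: 'solving' the system only reformats information that was supplied up front. The 2x2 system below is an arbitrary illustration:

```python
# A 2x2 system A*x = y: the 'answer' x is a reformatting of what we put in.
A = [[2.0, 1.0],
     [1.0, 3.0]]
x_supplied = [1.0, 2.0]

# 'Evolve' the system: compute y from x (information goes in).
y = [A[0][0]*x_supplied[0] + A[0][1]*x_supplied[1],
     A[1][0]*x_supplied[0] + A[1][1]*x_supplied[1]]

# 'Observe' the system: solve for x by Cramer's rule (information comes back out).
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
x_observed = [(y[0]*A[1][1] - A[0][1]*y[1]) / det,
              (A[0][0]*y[1] - y[0]*A[1][0]) / det]

print(x_observed)  # identical to x_supplied: nothing new was learned
```

The observer who supplied A and y can extract exactly x and nothing more; the format changes, the information does not.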

What does this mean for our scientific experiments? I work in a laboratory testing materials, observing them under a microscope, modifying them through mechanical work and irradiation. I am in a sense the 'creator' of the experiments that these materials undergo, I am all powerful in choosing a method of measurement and examining every feature of the sample, the sample is fully under my control. Or is it? For in the very act of picking a system/controlling variables/adjusting parameters between trials/carrying out multiple trials, I am providing information input (about myself and my goals) to the experiment. The experimenter does not observe a sample, but interacts with it. In setting up my experiments, I am 'asking' my samples for specific information, and the samples provide this information if 'asked' properly. If the samples had a brain they could try to think about why I am asking the things I am and how to reply appropriately, so there has been a bidirectional exchange. But if the exchange is truly bidirectional/symmetric, is there any way to make a distinction between experimenter and experiment? Who is 'in control' here - surely the human experimenter? The obvious view is that the experimenter is in charge, starting/ending the experiments, collecting and publishing the data. But information theory says that my experiment should affect me just as much as I may affect it. In a very real sense, it is the experiment/sample that is in charge of the experimenter, for the data it provides is what the experimenter will publish, and what will determine the experimenter's future course of action. So exchange of information makes it naive to think of any one object as being 'in charge' (even if that object is ourselves), the only absolute controlling certainty is the laws of physics.

At this point we must not go awry by ignoring the fact that all experiments that we can carry out will be carried out within the context of the larger "universe" system. Thus if we want to describe a system like the universe, or even the results of our real experiments, our model must be complex enough to include a universe in which experiments can be carried out. Clearly the models we have are not nearly that complex. We must supply *significant* external information, including math itself, into the models about our universe to get relevant results. And we can only solve time-based problems by using real universe time (a formula on its own won't do anything). However we can learn from the fact that these *very simple* models work:

Nature has significant symmetries that are a property of the world (and that coincidentally show up in experiments done in this world, but are present everywhere, even when not recognized as an experiment).

In essence the symmetries in a system can be observed uniquely by inhabitants of said system! (suggested but not proven) This is similar to asking if "space is Euclidean or parabolic" - we live in it so our view is warped, but through some means we may be able to tell, such as math relations of measured quantities.

A tool widely used in classroom instruction (and thus passed on to students - who may not critically evaluate it) is that of the thought experiment. Of course this tool had its place in a number of important theoretical discoveries, such as Newton's and Einstein's purported thought processes leading eventually to their theories. However, we must be very careful in its application, since it inevitably introduces human biases and "intuitive" aspects which are questionable or even completely false. In the units of physics we can see this aspect of models and intuitive reasoning. There are immensely more atoms in the universe than the number of seconds since its supposed inception. This might imply that the universe is very big, but also that a second is a particularly large thing. Indeed, we see that atomic phenomena happen on the scale of femto- or picoseconds, so if we talk about atoms we ought to compare to that time scale. But this is so far away from a human's range of perception that it is doubtful we can make any reasonable statement without using mathematics. The mathematics works because underlying symmetries were kept while most of the "intuitive" aspects were, hopefully, eliminated. And yet, when we talk about atoms, we imagine them as concrete entities like apples, existing in time like objects we observe. All of this, which is how we interpret our theories (i.e. which of the two slits does the photon go through?), involves a carrying over of our experience of the world (in m, s, kg, and N) into other, vastly differing scales. This must not be assumed to be accurate.
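The scale comparison can be made explicit with rough, commonly quoted orders of magnitude (the figures below are assumed estimates, not derived here):

```python
# Rough orders of magnitude (commonly quoted estimates, not measured here):
atoms_in_universe = 1e80          # assumed estimate for the observable universe
age_of_universe_s = 4.4e17        # ~13.8 billion years expressed in seconds
femtoseconds_per_s = 1e15

# In seconds, the universe looks 'young' next to its atom count...
print(atoms_in_universe / age_of_universe_s)       # ~2.3e62

# ...and one second holds as many femtoseconds as ~32 million years hold seconds.
seconds_per_year = 365.25 * 24 * 3600
print(femtoseconds_per_s / seconds_per_year)       # ~3.2e7
```

On these assumed figures, a second is indeed an enormous unit by atomic standards, which is the sense in which intuition calibrated to seconds cannot be trusted at the femtosecond scale.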

Another flaw, or introduced external assumption, in theories is the notion of an observer (a human one, at that). A typical statement in a physics classroom might be "imagine a universe with just these two particles..." and we like to think, intuitively, about what the two particles will do. But this is very wrong - for by looking into this universe and tracking the particles we have brought in our own biases (perhaps those of our universe) - a notion of time, distance, energy, temperature, even the discrete particles themselves. The mental model is not of two particles in space, but that of two particles being embedded within our universe and observed. I have no idea what a real two-particle universe would do, and if I had a way to make one, I could not observe it to find out, because the observation would change it.[f3]

As humans, we have natural access to "human-sized" objects, both in time and space. In the present day, we are able to conquer larger distances, speeds, short and long time scales, and see stars and microorganisms. We have become familiar with many scales outside our natural experience. However this was not always the case. At some point man did not have sharp blades, flat surfaces, or impermeable containers. Man did not have glass nor lenses nor batteries nor vacuum tubes. However, man was able to gradually conquer the different scales of experience required for these exotic objects. How is this possible? How can a sharp blade be made if only dull objects are available? There must exist physical processes, which might be called scale-converting, that have enabled us to access these scales. While we have created a large number with modern technology (like the SEM), there must have existed naturally a sufficient number of these processes to allow us to advance to the modern tech level. Such processes are of great importance and should be explored more thoroughly, as they act as a "gateway" or "impedance match" between the scales we experience and the scales we would like to control/understand. In this sense the process provides a physical symmetry that happens to straddle a range of scales. Note that the application of such processes is uniquely suited for individual beings (or perhaps vice versa), explaining why human intellect can act as a real weapon.

What does an atom "feel"? Inspired by Lehar we seek to interpret the atom's actions with some semblance of consciousness given to the atom (since our own consciousness stems from an atomic system). While in a gas we see, for instance, molecules bouncing around, the atoms are very much less aware of this situation. An individual atom might feel its electrons shifting into stable patterns - something we can see is a consequence of atomic bonding. Ignoring this effect for now and assuming a monatomic gas, the atom might feel somewhat like a blindfolded man in the middle of a hockey field - pushed around at random. As it encounters other atoms, all it feels is some increase in force on one of its sides. It remains unaware of what it's interacting with, or what velocity it's traveling at - in our "entropy" discussion, the initial "fast" gas atom would not know that it is fast, nor would any of the atoms it interacts with. Furthermore the atom does not know when or from where a future interaction will take place, nor does it "consciously" exchange velocity information with others (since it does not "know" this) - that exchange happens only insofar as we, external observers, can assign velocity information to each atom. To the atoms themselves, such information transfer happens implicitly. In other words, in an expanding gas the atoms do not "seek out" a lower energy state - it just happens from their viewpoint that for a while they are hit more from one side than the other. The emergent effect of "entropy rise" is only from our, external, interpretation. In effect, the atom interacts like a "heavy photon". We note that in a particle-centered relative reference frame, the particle behaves as if it has infinite mass/impossible to accelerate, with other particles bouncing off of it. This is consistent with the particle not knowing its velocity, but on a basic level is just a consequence of forcing the particle to always be at the origin of the reference frame.
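This picture can be sketched with a toy model. In one dimension, an elastic collision between equal masses simply swaps the two velocities, so 'being the fast atom' is a label only an external bookkeeper can track - the atoms themselves carry no record of it. The bath below is entirely hypothetical:

```python
import random

random.seed(1)

# One 'fast' atom in a bath of slow atoms: 1D, equal masses.
# In an equal-mass 1D elastic collision the two velocities simply swap,
# so neither partner 'knows' which of them was the fast one.
velocities = [10.0] + [random.uniform(-1.0, 1.0) for _ in range(99)]

for _ in range(10000):
    i, j = random.sample(range(100), 2)          # a random encounter
    velocities[i], velocities[j] = velocities[j], velocities[i]

# Only our external bookkeeping can say where the 'fast' velocity
# currently resides; each atom just felt a push.
print(max(velocities))   # the value 10.0 still exists somewhere in the bath
```

No atom ever held more information than "I was pushed"; the statement "the fast velocity migrated through the gas" is entirely our external description.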

What is a bug's view of the world? My first thought would be that its world is messy - it cannot know the larger trends, events outside its control must happen randomly from its viewpoint, and it must find some way to respond to this constantly changing world. I might claim that the bug finds its world a complicated place - few clues about its operation and a wide variety of configurations. Yet this view is illogical - a bug is a simple organism and must find itself in a simpler, more symmetric world. If it used some machines to measure its world, it would just learn about the machine's world - making the conclusion that the two are actually the same requires quite a logical leap (yet it is experimentally supported (such as by the effectiveness of medical procedures and predictions of machines about living things) and removes the questionable distinction between machine and creature). The bug sees clear, logical patterns (produced by motion, flowers, scents) and responds in a clear, logical way. Its world is much simpler than a human's, even if we may conclude that the bug has no concept of the world and is completely lost; it is our ability to handle increased levels of imperfection and asymmetry that makes our brains more capable than a bug's. Even simpler must be the world of the electron - even though we require supercomputers and high-level math to describe it - the electron can only sense and respond to the world in limited ways, all of which are prescribed fully (as evidenced by the strict physical laws that we observe). Its world is one of the clear symmetries seen in TEM diffraction patterns. We make similar flaws of assumption even when interpreting other people's views of the world. When looking at operators in a control room, or aircraft pilots, or musicians, I project my view of the difficulty of finding a proper action and assume that their work must be difficult and thought-intensive. 
Yet once the initial barrier is overcome and trends in the controls are learned, in a real way the operators are in a much simpler and more logical situation - they know what switch to use and when - than what I portray them to be [f4]. This is not to detract from the difficulty of the associated work, but simply to emphasize that a rudimentary interpretation of complexity (and use of such in a physical theory) is likely to be flawed. Ultimately, complexity and the closely associated "degrees of freedom" [f5] are human constructs, so until a better physical definition can be found I will focus on the more readily defined notions of information (results of experiments/information exchange) and information content (how many experiments are needed to uniquely define the system experimented on).

What is an isolated system? The notion of isolation in a physical sense is circular, since a system can only be claimed to be isolated when it is seen to not interact in a certain way with other systems. This is a result of information exchange being blocked or limited, and as a consequence the information content of the system is not changed and its information is maintained over time (that part of information which cannot be exchanged). For instance, an ideal bearing isolates the rotational motion of two bodies, and thus allows the motion of a rotating body to be maintained. Other exchange mechanisms like air drag eventually result in an equilibrated system, but their ability to exchange information is slower than the solid bearing's. A similar example of isolation is the use of roads and railways. A difficult trail with terrain and obstacles is physically reduced to a more standardized system, effectively isolating the vehicle's tires from exchanging vertical height information with rough or uneven surfaces. On a microscopic scale an atom can be isolated, such as one held in a Penning trap, allowing the isolated one to be cooled down or heated up far from its neighboring atoms' temperature. I claim that a properly isolated atom in an excited state will not decay as usual (just like the bearing example - it still may decay through inadequate isolation eventually, but much slower than normally) because when it is isolated it has no means of 'transferring' its excited state elsewhere. Such a claim would apply to all excited states - radiative, kinetic/potential energy, or radioactivity. The KE/PE examples are readily seen at all scales. An associated question is when an adequate degree of isolation would be attained - we can only define an isolation method's effects by experimental observation, and the above claim does not guarantee that it should be possible to isolate a system sufficiently from all interactions. 
For instance, at present we have no known way of isolating a system from gravity (does rotational/orbital motion suffice?). An isolated system may still evolve in time (this is another thing which we can't seem to isolate - perhaps at low temperature?). Consider a battery-operated computer that runs a program that replaces existing data with external inputs. In isolation from external interactions, we have claimed that the information content does not change - yet the battery voltage drops, and the computer's "memory" is lost. Maintaining the hypothesis as logically plausible means this lost information is transferred elsewhere in the system. In the case of the battery voltage, if it is not reversibly transferred to capacitors/memory bank of the computer, it will result in heat. And if the data is irreversibly overwritten, more heat will result [Landauer's principle, building on Shannon's information theory]. In a closed isolated system the heat will eventually, however unlikely it is, find its way back to the original starting points of voltage and data. In the meantime all similar energy states will be explored. In fact we have only been able to set the whole experiment in motion by having initial control over the distribution of information within the system, which determines the subsequent transfer of information (internal) through control processes. The concept of isolation can be of use even with large systems like organisms, and is a powerful proof of determinism. If, as some claim, organisms are particularly capable of independent or 'random' action, an isolated organism will not have a predictable response. The consistency of animal (and human) responses to environmental isolation is evidence to the contrary. Another type of isolation is removing a pre-existing limiting or constraining routine and observing the organism's actions. 
This is in a sense an "inverse" relation: where previously the organism could not fully exercise "autonomy", it now can. A truly random organism might take any appropriate action, while a deterministic organism will tend to behave only in its characteristic ways, which should be to a large degree predictable and explainable.
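Returning to the battery-operated computer above: the claim that irreversibly overwriting data must produce heat can be given a lower bound. Landauer's principle states that erasing one bit must dissipate at least kT ln 2 of heat. A quick calculation at room temperature:

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # room temperature, K

# Landauer's bound: irreversibly overwriting (erasing) one bit must
# dissipate at least k_B * T * ln(2) of heat into the surroundings.
e_bit = k_B * T * math.log(2)
print(e_bit)              # ~2.87e-21 J per bit

# Overwriting one gigabyte (8e9 bits):
print(e_bit * 8e9)        # ~2.3e-11 J -- tiny, but strictly nonzero
```

The numbers are minuscule compared to what real hardware dissipates, but the bound is what matters here: information cannot be destroyed without a physical trace.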

Symmetric Experimental Approaches

If symmetry theory is of relevance to the world's functioning, it is logical to reason that there exist experiments that take advantage of symmetry by designing probing methods that can measure otherwise inaccessible or completely unknown parameters while cancelling out all external effects not of interest to the measured parameters. This is an extremely powerful method that I believe defines most of our scientific knowledge and understanding (one of the first steps in an experimental design is to separate the studied system and the external world on some basis); however, at this point my knowledge of the theory is only sufficient to list the most concise and obvious examples of this approach - which I call the "symmetric approach". What all of them have in common is that they provide deep insights on what variables are important at "only" the price of accepting the premises of symmetry theory.

  1. Differential: if two systems are known to be precisely equivalent, their behavior is also expected to be equivalent. Thus any differences in behavior between systems (that evolve in truly equivalent conditions) will reveal non-similarities. These effects will grow exponentially (see Information Theory/information exchange in gases) so this is a very powerful method for revealing exceedingly small differences between systems, even though most of the time we can only measure a vanishingly small part of the difference (we don't control all system boundaries) leading to the illusion of information or signal "loss".
  2. Equivalence and reversibility: if two systems are equivalent, whatever is done to one must be done to the other to bring them back to equivalent (or alternatively must be undone on the original). This is a way to quickly reveal physically relevant variables (they must be fully conserved if an action is done then undone [f1]) and to study systems where processes are too fast/inaccessible. An example of the former is thermodynamic heat engines: at time t0 and time t1 the engine is in the same state, so anything done between t0 and t1 must have been undone in the same time - this tells us that physically relevant quantities describing this problem must have no net "flow" (and that entropy is not a fundamental measure but an emergent one, since it is not conserved). An example of the latter is a bomb calorimeter - we can't directly measure the chemical energy of a sample, but we can easily measure how much heat is removed from a sample to return it to its original temperature (and if we wish, how much heat+info to put in to get it to its original *state*) - using equivalence and conservation we can equate the chemical and thermal+info quantities. (Differential calorimetry uses both this and the above differential concept.)
  3. Comparison and relative quantities: when the concept being studied is poorly defined, we can make significant progress by measuring relative differences. The existence of this approach stems from the lack of non-conservative fields in the world (if those existed, all relative comparisons would become 'circular'). For instance, when trying to understand what constitutes "information" I considered how one might tell, of two systems, which one had more "information". By iterating on this, I might end up with a definition of information although I didn't start with one. The power of this approach again lies in being able to "remove" the effects of the rest of all other variables when two systems are compared against one another directly.
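The third, comparative approach can be sketched in code: given only a pairwise "which has more information?" verdict and no absolute definition, a consistent ordering of systems still emerges. The comparator below (counting distinct bytes) is a deliberately crude stand-in of my own, not a real information measure:

```python
from functools import cmp_to_key

def more_information(a: bytes, b: bytes) -> int:
    """Purely relative verdict: -1, 0, or +1. The 'measure' here
    (number of distinct bytes) is an illustrative placeholder."""
    ra, rb = len(set(a)), len(set(b))
    return (ra > rb) - (ra < rb)

systems = [b"abcabc", b"aaaaaaa", b"abcdefg"]
ranked = sorted(systems, key=cmp_to_key(more_information))
# relative comparisons alone yield an ordering: aaaaaaa, abcabc, abcdefg
```

Iterating on why the comparator feels right or wrong is, in the spirit of the text, how a definition of "information" might eventually be extracted without starting from one.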

In a real world experiment, we seek to measure a particular quantity. By measure we mean compare two quantities, and by compare we mean using a minimal-disturbance method to tell whether one is larger than the other. What is it that we actually measure, though? Our experiment interacts with the whole universe, so how do we get any cogent data? We do so by changing the experiment in a known way and carrying it out again (the 'independent variables' of high school science class). What this does in an information theory sense is perform an XOR: result = (universe + experiment_1) XOR (universe + experiment_2) = change from 1 to 2. The XOR cancels out all identical effects that happened in both experiments. This is not just data processing - rather it is how the actual physics of the experiment functions, to give us any usable value. The lesson is that we will measure all effects of a given change, so we must be sure that in the measurement,

  1. The state of the universe doesn't change as far as it affects the experiment
  2. The change we imposed is adequate and sufficient to show the effect we wish to measure but not anything extraneous

We can try to 'prove' 2 with theory (though the theory depends on its own set of experiments to be accepted and interpreted/applied 'properly'), but for 1 we must use averaging, repeatability, and a general hunch that most of the universe in fact has little effect on our experiments - at least on our timescales! This is quite surprising considering the size and energy of the universe, nonetheless universe-centric measurements (which seek instead to eliminate all terrestrial change, result = universe_1 XOR universe_2) are some of the most difficult we can carry out (neutrino detectors, Hubble telescope), and the general consistency of terrestrial experiments is further proof of this point.
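The XOR picture can be sketched as a toy calculation (Python; all numbers illustrative). Each run reports background plus imposed change, and differencing two runs cancels everything common to both:

```python
def run_experiment(universe_background, imposed_change):
    """What the detector reports: the experiment interacts with
    the whole universe, plus whatever change we imposed."""
    return universe_background + imposed_change

background = 9.81                          # shared, unknown to the experimenter
run1 = run_experiment(background, 0.00)    # control run
run2 = run_experiment(background, 0.25)    # independent variable applied
effect = run2 - run1                       # background cancels; 0.25 survives
```

The subtraction plays the role of the XOR: it works only if condition 1 holds, i.e. `background` really was the same in both runs.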

Consider many common measurements - voltage/current, temperature, pressure. These give just one number, despite the uncountable complexity of the real measured system - all the different electric and atomic motions that always take place. This is because the measurement devices are specially designed to have low sensitivity to every aspect of the system except what is being measured. Although millions of electrons pass through a current transformer, all along unique paths, only the total effect is seen at the output - giving a measure of current. Consider 3 electron paths:

  A B C
  | | |
  | | |
  | | |
  A B C

A, B, and C start and end at the same points, but go through different sections of a current transformer. Consider A->B a perturbation: its effect on the measurement will be tiny as it stays wholly within the transformer core. If B->C were a perturbation, its effect would be large (exiting the transformer), so the system is designed to disallow this (with the wire insulation keeping an electron from traveling outside the desired path). Consider on the other hand perturbations to electron speed along either A or B: the effect would be large and consistent, as desired. This bears similarity to Feynman's all-paths view, but applies also to 'big' systems and describes why 'complexity-reducing' measurements are possible. Non-measured information here sees either 0% or 100% exchange (this is further elaborated under Information Theory).
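A toy model of this complexity-reducing measurement (Python; representing an electron as a (speed, threads_core) pair is my own simplification): the output depends only on whether each path threads the core and on the electron's speed, never on the route taken inside the winding:

```python
def transformer_output(electrons):
    """Sum the contributions of electrons whose path threads the core;
    route details inside the winding never enter the result."""
    return sum(speed for speed, threads_core in electrons if threads_core)

baseline     = [(1.0, True), (1.0, True), (1.0, True)]   # paths A, B, C
reroute_a_b  = [(1.0, True), (1.0, True), (1.0, True)]   # A->B: internal reroute, 0% effect
exit_core    = [(1.0, True), (1.0, True), (1.0, False)]  # B->C: path leaves the core, 100% effect
speed_change = [(1.2, True), (1.0, True), (1.0, True)]   # speed perturbation: measured, as desired
```

The internal reroute leaves the output untouched while the speed perturbation shows up in full, mirroring the 0%-or-100% exchange described above.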

In modern physical theories, the approach inevitably leads to "divide and conquer" - the behavior is explained by splitting up a system into moments, parts, atoms, and particles. Even virtual entities like holes and dislocations play a role. But for a unified theory we should step back and consider the bigger picture. Dislocations are not "real"; they don't arise because of some imaginary entropy - that is simply how a crystal structure *is*. We like to think of perfect crystals in simulations (DFT - periodic boundaries), but in reality, when all the quantum interactions between all atoms in a real crystal take place to establish a huge quantum wave representing the whole crystal, the wave will include dislocations. This is how the crystal behaves - instead of seeing a defect as its own feature, we see defects as inextricably linked to crystal structure. In other words there are no real BCC or FCC lattices, there are no real infinite boundary conditions, and we have no intuition for how "perfect" things actually appear - all real things exist as imperfect things because that is how they are in purest form. It is our folly to assume that mathematically perfect things are the underlying blocks. Everything becomes interconnected, and cannot simply be split as in a "divide and conquer" theory - the atom is not independent of the experiment, which is not independent of the experimenter. We only have the direct experience of what it "feels like" to *be* the electrical activity in a human brain [Lehar], but if we had more consideration for what it is like to be other systems, our theories would become more complete.

Based on similarities between models in physics and elsewhere, I propose that a system, in a most general sense, is defined by three components: symmetry, asymmetry, and information. These three components fully specify a system, but such a system may be interpreted in many different ways - the underlying symmetries and information must be conserved. For example:

  1. A resistor system, V=IR. In this system we start with a 3-fold asymmetry, which makes 3 variables: V, I, R. Then we add a 2-fold symmetry, V=IR, which reduces the degrees of freedom/information content from 3 variables to 2. Thus the amount of information required to specify the system is 2 variables; the third variable is obtained from the symmetry relation.
  2. A constant-velocity particle, x=vt. By the system construction, we see this is equivalent to the resistor case above; despite the different interpretations, the results of both systems are interchangeable.
  3. A 3-phase motor connector (a,b,c) with motor direction d. There is a 4-fold asymmetry for the variables. Then, we can determine (e.g.) c=f(a,b) and d=f(a,b,c)=f(a,b), thus a 2-fold symmetry reduces the information content to 2 variables. These 2 variables however carry less information (only base 3) compared to the above. The direction d is only base 2. This makes us ask what fraction of a degree of freedom is carried by each variable. Knowing d reduces the possibilities by 50%, so maybe I(d)=0.5. Knowing a or b or c makes a full degree, so maybe I(a)=I(b)=I(c)=0.5. Normalizing to 2 variables we have I(d)=I(a)=I(b)=I(c)=1, so any 2 are required to define the system.
  4. A balance with two masses (m1,m2) that outputs a reading r that is (-1,1) depending on whether m1>=m2 or m1<m2.
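Example 1 can be written out directly: the 2-fold symmetry V=IR means any two of the three variables determine the third. A minimal Python sketch (function name and interface are mine; no input validation):

```python
def solve_resistor(V=None, I=None, R=None):
    """Given any 2 of the 3 variables, recover the third via V = I*R."""
    if V is None:
        return (I * R, I, R)
    if I is None:
        return (V, V / R, R)
    return (V, I, V / I)

state = solve_resistor(I=2.0, R=5.0)   # 2 variables of information supplied
# the symmetry relation supplies the third: state is (10.0, 2.0, 5.0)
```

The constant-velocity particle of example 2 is the same function with the variables relabeled (x, v, t), which is what "interchangeable" means here.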

In conclusion, we have assigned 1 degree of freedom to a "real" number, something that will need to be examined later. The information content of a "real" number is a cause for concern, unlike the information in a binary digit, which is straightforwardly 1/2. The above descriptions operate on the assumption that information within a system is always conserved. It is plausible to think of a system where this is not the case, such as a hash function - the output depends on all parts of the input but carries much less information. This will be considered in later chapters.
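The hash-function case can be demonstrated concretely (Python; truncating SHA-256 to 1 byte is my own device to make the collapse obvious):

```python
import hashlib

def tiny_digest(data: bytes) -> int:
    """Keep only the first byte of SHA-256: a 256-value output space."""
    return hashlib.sha256(data).digest()[0]

inputs = [f"message-{i}".encode() for i in range(1000)]
digests = {tiny_digest(m) for m in inputs}
# 1000 distinct inputs, at most 256 distinct outputs: the output depends
# on every input bit, yet information is not conserved - collisions are forced
```

The pigeonhole argument in the comment is the whole point: such a system cannot be run backwards, unlike the conserved systems in the examples above.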

[f1] I used this in the attempt to define the information content of a function - namely when repeated differentiation leads to a set of numbers. This of course assumes that the information content of a number is a more easily understood quantity.

[f2] By external influences I don't mean the things we must keep constant - those are influencing variables we choose to keep constant - but rather those happenings outside the experiment sphere that come to interact with it during the experiment beyond our control - neutrinos, gravity, air molecules, god's hand...

[f3] this will be explored in more detail under Information Theory

[f4] to me, there are all sorts of ways to use the switches on a control board; to a trained operator only a small subset is known to be valid/useful, so the choice of possible actions is relatively limited - thus making the system appear less complex

[f5] freedom implies some aspect outside the action of the universe so is not consistent with symmetry theory. To make sense of 'freedom' we must define system boundaries, and then a degree of freedom is something where a system can be affected by its surroundings.
