Deterministic World

Part 1. Physics

1. Models and Theories

Our understanding of the physical world is based on mental models. What exactly do I mean by a model? It is a mental construct, a lengthy set of intuitive rules that lets us look at a real-world situation using our senses (touch/sight/hearing) and create abstract symbols describing what is going on. With a model we can even make up symbols to describe what 'could be' going on in some imagined situation of interest. The model also prescribes what we are expected to do with these symbols, and how such actions correspond to the real-world situation the symbols represent. Finally, the model prescribes a route for conversion from the symbols to a real-world situation, allowing for predictions or verification. So in the sense of a physical model: I see the moon from where I stand on the earth (my sense experience of the real world), and Newton's model of gravitation says I should represent the earth as a point and the moon as another point (abstract symbols), then based on the distance between the points and a formula and conventional math rules I can alter the points (what to do with the symbols), and finally from these altered points I can predict where I will see the moon next or any other 'thought experiment' this model is capable of (back to sense experience of the world - accurate prediction is most useful/desirable).

What we experience on a human level, the basic feelings/qualia of "what it feels like to see", or "what it feels like to hear", or "what it feels like to move", are also symbolic interpretations of external reality made automatically by our brain - in fact the key constitutive elements of consciousness (this point will be considered later in the discussion on consciousness). Thus we are all born with a genetically programmed model of the world: what we might call an 'intuitive' understanding. Even with modern technology, making a bipedal robot that can walk is beyond 'state-of-the-art', yet we do this task with ease. How? This is because our brains automatically convert information from the senses into abstract symbols (which we experience as seeing/feeling motion/muscle load [f1]) and then use these symbols to make some sort of calculation/manipulation to tell what will happen next in the real world, and how the body can be adjusted so that what happens next is what we want to happen next (i.e. stay upright and move forward/avoid obstacles). Then the brain converts from the symbol-based result of this incredibly complex calculation to real-world actions with the help of the nervous system/muscles. We can (and do!) imagine walking on some strange new terrain and plan out our body movements ahead of time, working wholly in the symbolic/abstract realm of the intuitive model: its 'language' is feelings and qualia rather than numbers and letters on a piece of paper, so we cannot communicate or write down its rules in any spoken language, but it is nonetheless an extremely powerful model that would put all our textbook "models" to shame if anyone found a way to write out its underlying equations. The intuitive model of the world "just works" - we never need to learn any formulas to be able to walk, but only because these formulas are learned automatically and intrinsically by the brain due to the way it is programmed/wired. The brain is also designed to modify the intuitive model to be more accurate by learning new things, but it learns by direct sense experience - for example I can learn how to snowboard or live in zero-g on an intuitive level, but I cannot have an intuition for quantum mechanics or relativity, though I will gain an intuition for how and when to apply different written formulas (because I practice writing formulas but can't practice experiencing quantum/relativistic effects).

Our intuitive world model can do tremendous things, but it can also lead us into situations where its predictions do not match the external reality we observe.

A typical "event" in human life terms is about 1 ms = 1e-3 seconds. A typical "event" for an atom is about 1 ps = 1e-12 seconds. A human lifetime (100 y = 3e9 seconds) then corresponds to an atom's "life" of 3 seconds. The earth's radius is 6.3e6 meters, while the Bohr radius is 5.2e-11 meters. Suppose a "human radius" of 1 m; then if an atom were scaled up to human size, a human would be 2e10 m across - far bigger than the earth - and a grain of sand would appear the size of the earth to a human-sized atom. But atoms are much more mobile than humans: in a few seconds they can "explore" a region larger than a grain of sand.
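
As a quick check of these ratios, here is a minimal arithmetic sketch in Python (the 'typical event' durations, the 1 m 'human radius', and the 0.5 mm sand grain are rough assumptions chosen only to set the scales):

  # order-of-magnitude comparison of human and atomic scales (all inputs are rough assumptions)
  human_event = 1e-3            # s, a "typical" human-scale event
  atom_event  = 1e-12           # s, a "typical" atomic-scale event
  lifetime    = 100 * 3.15e7    # s, roughly 100 years
  print(lifetime * atom_event / human_event)   # ~3 s: a human lifetime on the atom's clock

  bohr_radius  = 5.2e-11        # m
  human_radius = 1.0            # m (assumed)
  scale = human_radius / bohr_radius           # ~1.9e10: magnification that makes an atom human-sized
  print(scale * human_radius)                  # ~2e10 m, far larger than the 6.3e6 m earth radius
  print(scale * 0.5e-3)                        # ~1e7 m: a 0.5 mm sand grain scaled up to earth size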

It is 'obvious' that the earth stands still and the stars surround it in a revolving celestial sphere. Yet some stars move in strange ways (planets), and the sun itself shows annual variations unexplained by intuition... careful tracking by means of non-intuitive symbols (letters, numbers) shows that our intuitive view isn't perfect. We like perfection, though, so we make up models that we apply [f2] and consciously tailor/modify so that they make accurate predictions even when our intuition is unclear [f3].

I have no intuitive idea of what a rocket in space will do, and even if I had an idea I wouldn't be able to clearly or precisely communicate it. By applying laws of physics and mathematics I can use symbols to predict what will happen to the rocket, and by relating causes to effects I can even make a control system to get the rocket to do what I want. I can also communicate this to other people and machines. This is the role of physics models - they have limited applicability (still relying on the huge knowledge base of our intuition to apply to the real world) and low computational power (until we invented machines that have these models intrinsically as their modus operandi/consciousness - computers), but they are exact, specific, and communicable - commonly agreed upon, saying the same thing to all participants. These latter features have given physics models their prominent position: despite their great lack of flexibility and power compared to the intuitive model, they allow shared and specific knowledge as opposed to the private and non-communicable experience of qualia/feelings. By this definition our earliest models were basic spoken language - coordinating group actions, the first beginnings of religion [f4] (communal beliefs in how to explain things that intuition cannot, such as when it will rain). [f5]

Our abstract models then can be seen as containing both mathematical truth and a knowledge/interpretation component. The mathematical truth, to the extent it is verified by experiment, is independent of the way it is interpreted or of the intuitive picture built. The latter is open to interpretation and is what I will focus on in most of this section, while seeking agreement with existing experimental results and connection to existing mathematical statements.

Power in models comes from general applicability, built up in a fractal manner: basic structures are defined once and can then be rapidly reused. Consider a video game engine: once its code is written, it becomes possible to make individual games much more rapidly and consistently, by using all the knowledge (and time) that has been put into the engine and re-casting it in a more abstract and impactful manner. Similarly with assemblers and compilers and GUI-design programs: increasingly more power is gained as abstraction is increased and the code base is re-used in a logical, rigorous scheme. This is what we have in a physical model: a way to describe many real-world phenomena in a reasonably understandable and human-comprehensible manner, by using a single underlying base of knowledge and casting it in different situations. Just as the video game engine places limitations on what type of game can be made which don't exist if programming fully from scratch, having a model places limitations on the sorts of things we can describe (or predict) which don't exist if starting with "no axioms". In any case, if the functionality of the available model is insufficient to do what one wants, one must make alterations at the axiom level, which may mean tremendous practical effort to create a new model. I try this questioning of axioms from time to time in this text, though the creation of a new model is left for later - this is why there will be very few formulas and mostly metaphysical/philosophical considerations.

We see in the present way of life that the work of most people is far separated from nature by various human inventions. People who interact directly with nature (miners, farmers) now rely heavily on mechanical aids, while most of the population can work on matters entirely unrelated to nature (administrative work, art, sports). We are capable of making this progress because we use increasingly refined materials (metals, plastics, semiconductors); the extraction of such materials from the natural disordered state enables the construction of new classes of devices and machines. The essence of this consideration is that purifying and rebuilding *materials* (chemicals, molecules, elements, particles), extracted from the messy/disordered natural state, enables the creation of new physical structures previously unattainable. Could the same idea apply to knowledge? For instance, from the overall environment of random ideas we have extracted the concept of musical harmony/progression and artistic elements. In math we have extracted the natural numbers, enabling number theory to be constructed. Later, we extracted real numbers, extending the mathematical theory (with accompanying advances in *physical* and engineering abilities!). Are there other 'elementary' structures that need to be extracted? Are numbers really elementary, or just the most obvious 'natural' material? Is it possible to purify [f6] numbers into underlying constituents that can then be used to solve new classes of problems previously unassailable? What is the role of a structure like set theory, complex/negative numbers, and quantum wave-vector notation? By analyzing the world in terms of specific knowledge-representations/memory structures, can we find more insights?

Since numbers are 'obvious' and have been around for the whole history of math, I would argue that they are a 'natural' or messy/unorganized material, like rocks or wood. My claim is *the universe doesn't care about numbers*. Purifying numbers into something more essential, like purifying sand into Si and O2 or water into H2 and O2 would get us closer to a universe-relevant theory and give much greater power to theories (along with a possibility of unleashing more energetic physical structures). The more essential elements might readily explain quantum theory, fractals, iterative processes/'games' (like game of life), and relativity. The nature of physics itself would be encoded in the structure of these elements.

Something that humans often overlook, which comes up as a 'revelation' in physical laws like Newton's mechanics, is the perfect symmetry of the outside world. We see that for the world to operate stably there are strict conservation requirements - every little 'insignificant' thing matters, even when it is most tempting and easy for humans to ignore the small effects. Unless this is true, the universe would not be stable (energetically). Another related concept is that of similitude. Here we need to recognize that in human experience we are essentially never exposed to truly equivalent objects (for we would not distinguish them otherwise). Saying that two objects are equivalent (at least in some aspect), like a=b, is only a mathematical statement and does not create a true equivalence, for the objects a and b can still be distinguished by their different subscripts or locations on a page of the proof. Being able to create a true equivalency is a very powerful concept, for then any operation carried out on the equivalent object is equally applied to all possible interpretations of the object - this is the basis of quantum computing, and also of mathematical truths and definitions of a system. In algebra the letter 'a' equivalently represents any number, so an equation like a=b applies to all numbers, as do any subsequent operations on a.

What, then, is a number? How much information does one number represent? We might claim that a number represents a single value out of an infinite set of possibilities, and thus reduces ∞ to 1. A single binary digit would in turn reduce ∞ to ∞/2. It is a bit troubling that this represents an infinite amount of information, if the number is real. Another problem we encounter is similar to Zeno's paradox - how can we get, in the real number system, from x to x+dx? The only way we can interpret this is by quantizing the problem (in the problem itself, or in our method of solving it in our real-time physical world), thus removing the essential feature of real numbers: they are disconnected! So for instance in x+1 = x+lim{dx->0}{(1/dx)*dx} the '1' and the limit work as long as we choose to quantize by some finite dx (and '1' is already quantized). We can only approach a real number, never reach it - for doing so would require the divergent dx=0! We see that real numbers are in fact a 'beast' - they store ∞^2 information if an integer stores ∞ - each real can be viewed as an initial integer plus an infinite string of digits "after the decimal point" (an integer reversed in digit order), with the +- signs canceling out by degeneracy, just as complex numbers extend this to ∞^4. In fact we could interpret real numbers as ways to deal with multiple scales - the decimal point marks the scale we are most concerned with, then digits to the left of the decimal point specify steps on larger scales while digits to the right specify steps on smaller scales. Then the relevance of real (or real-like) numbers in physics makes at least practical sense in terms of writing down experimental measurements with a given scale-setting unit. We arrive at the myth of the real number "line" as taught in grade-school math classes but pervading thinking even in college. On this line, it seems straightforward to move "a bit over". But we see that to arrive at the real number case, we take lim{dx->0}{x+dx} = x, thus it is impossible to "move" to the next real number! There is no connectivity in the real line - it would be more appropriate to consider it an infinitely fine dust. Consider increasingly fine quantization visually:

0     1     2     3
___________________     "Line"
.     .     .     .     Integers
.  .  .  .  .  .  .     Half-integer
. . . . . . . . . .     Third-integer
...................     Sixth-integer
Fig. 1-1

Taking a limit of the above procedure to infinite quantization produces the real number "dust". We note the formation of a binary tree in the representation, which suggests a scheme for covering ∞ data - each binary digit will reduce the data by 1/2, so we have the sequence ∞, 1/2 ∞, 3/4 ∞, 7/8 ∞, 15/16 ∞, ... with increasing number of bits. Perhaps then it is not surprising that 32-bit values are sufficient for all sorts of physics calculations, even across scales. From the binary tree 'dust' diagram, we might suggest that a number is represented by an infinite sequence of (say) binary digits, having a beginning but no end. This sequence is unique for each number but is always infinite, even for easy-to-write numbers like '1' or '2'. Furthermore, it is not possible to change from one sequence to another continuously - this can only be done by interacting with another infinite sequence in a discrete/discontinuous manner. This interaction can be a standard +- type operation or a special operation that is equivalent to an infinite sequence (this shows that series can be evaluated to a number). It is nevertheless possible to compare two such sequences in a relative sense - and real-world experience shows that usually only a finite length of the sequence is sufficient for such comparisons. This interpretation has some interesting parallels to the nature of information in a physical system - namely persisting in time (since continuous deformation is disallowed) and changing only by interaction with other information from external systems.
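
A minimal sketch of this picture in Python (the interval [0,1) and the helper name binary_prefix are my choices): each additional binary digit halves the remaining interval, so a finite prefix of the infinite sequence already pins a real number down very finely.

  # Approximate a real number in [0,1) by a binary sequence cut off after n digits;
  # each digit halves the remaining uncertainty interval.
  def binary_prefix(x, n):
      bits, lo, hi = [], 0.0, 1.0
      for _ in range(n):
          mid = (lo + hi) / 2
          if x >= mid:
              bits.append(1); lo = mid
          else:
              bits.append(0); hi = mid
      return bits, hi - lo          # remaining interval width is 2**-n

  print(binary_prefix(1/3, 8))      # ([0, 1, 0, 1, 0, 1, 0, 1], 0.00390625)
  print(binary_prefix(1/3, 32)[1])  # ~2.3e-10: 32 digits already suffice for most comparisons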

One of the "tell-tale signs" of underlying order in the universe, giving hope for an overarching theory, is observing similar phenomena across a multitude of length and time scales. We intuit that such phenomena cannot be an artifact of any specific system and must be properties of the universe as a whole. Consider the notion of momentum. What concerns us at present is not a precise mathematical formulation, but the fact that this notion can yield insights from motions of galaxies (or at least planets) to the smallest sub-atomic particle (or at least electrons). Maybe 'momentum' is not a physical reality but a man-made mathematical object, but nonetheless it proves so useful and ubiquitous that we claim it represents some feature of the universe. Other notions that pervade all scales of measurement are energy, mass, and size. And all objects tend to persist in time.

In the above observations we have largely made use of mathematical relations to establish the claims (from E=mc^2=hw to M=\Sigma{m}) and to make sense of experimental data. We must conclude that, since math is seen to apply so ubiquitously, it must reflect some fundamental property of the universe. Thus studying how and why math (in its present written form) functions will yield insight on actual physical matters. Further, taking into account the ideas of information theory and supplementing them with math-based observations is akin to drawing on the rich experience of nature (mathematicians) in developing some new idea. We proceed to this, beginning with the notion of the information content of a function (the information content of a number will be addressed later in this chapter). It is easy enough to write a function like x=y or y=x^2, but how much information does this represent? Looking at this as the outcomes of all experiments on the system (function), specifying a function must require an infinite number of points, one f(x) result for each x measurement. And yet we have written it down in only 3-4 characters, which is a similar paradox to that encountered with numbers. What is missing from the physical (finite) 3 characters on paper is the associated human understanding of math that translates y=x into a relation that carries ∞ information. However, for a physical model that is not human-centric, we must represent the idea of the system y=x without relying on the human constructs of math, and in a language that the universe "understands". Most directly this would be a physical system where indeed y=x (quantum computer, lever, gears, transformer), and yet we retain hope that a mathematical construct of sufficient richness may be developed. Since we see that even a simple function like y=x carries ∞ information (like the integers), the 'natural' representation or object for such a function must also be a specific kind of infinite sequence with a beginning but no end, which can usually be characterized only by the first few terms in the sequence (as claimed for numbers above). We go further and consider the function as a system, in the sense that it provides asymmetry and symmetry relations (implicit from the requisite human understanding of the mathematical construct) though normally no "information" content (piece-wise functions, in the limit of infinitely many pieces, are in a sense entirely information - this perhaps suggests a unity between information and symmetry/asymmetry). Thus "x=y" gives a 2-fold asymmetry (x, y) and a 1-fold symmetry (=). As before we are left with 1 degree of freedom, either x or y, to fully specify the system. Something like "x=y^2" (with real numbers) also has a 2-fold asymmetry, but the symmetry is not 1-fold since (+y)^2=(-y)^2. Thus the symmetry only specifies a subset of the system. It is left for the future to determine how the information content of such a system is to be represented, but we note that x>=0 reduces the possible states to 1/2 of the 'unconstrained' x=y, with another 1/2 required to determine the sign of +-y.

As a final point, for now, on using math to investigate reality, we note the ever-present equal sign (=). It is the concept of equality, or indistinguishability in a physical system, that makes usable formulas which work at all scales and with all known phenomena. We conclude that there must be a fundamental "equivalence" symmetry that will help us make sense of natural systems, i.e. that such a symmetry is a physically real concept.

What is the information content of a number? We seek to establish these things:
- Numbers, as commonly used, are meant as ratios of underlying physical quantities/phenomena
- Numbers, like any other alphabet as humanly used, carry information that is context- and system-dependent (information content can change based on type of number and type of application)
- Thus to establish the information content of a physical system we must use some strict definitions. One might be the maximum number [f7] of unique experimental results the system can provide, for any single experiment. This will later be seen to be the total rest-mass energy within the boundaries of the system (a tremendous amount!).

As mentioned earlier, "the universe doesn't care about numbers". Yet in making such a statement we have to justify why the use of numbers in models within the sciences and humanities has worked exceptionally well to improve human welfare. This is not just a side effect of their generality (1 can represent an apple, or a meter, or a force), since it is not sufficient to only have logical rules to predict or control a real system. And if we assume the universe is not necessarily simple, this implies that numbers are in fact complex entities that only appear simple to us because of our familiarity (both in learning (school, college) and in being a part of the universe in which numbers are useful). Numbers are commonly used in "models", so we will take the analysis of those as a starting point. We take as an axiom the uniqueness property - namely that a single, specific outcome is the result of a similarly specific cause, and that the two necessarily occur together. This suggests that in physical processes something is conserved - for our purposes the conserved quantity is "information". The concept of conservation applies not only to "data" but also to properties and rules and algorithms (in math, for instance, if a function is composed of sin and cos, its derivative is also made of sin and cos - the property of oscillation is conserved although the function itself changes when the derivative is taken; there is a "signature" of the sin and cos content under any other conservative operation applied to the function). Thus we reason that some semblance of underlying universal laws is "conserved" and defines the physical world of "big" objects that we can observe. Again by the uniqueness property we can reason that the physical laws of our models are based on a direct representation of universal laws, and with some thought it may be possible to find the latter from the former (since both also conserve underlying information).
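
The parenthetical sin/cos remark can be checked directly; a small sketch using the sympy library (the particular function 3 sin x + 2 cos x is an arbitrary choice):

  import sympy as sp

  x = sp.symbols('x')
  f = 3*sp.sin(x) + 2*sp.cos(x)
  for k in range(4):
      print(sp.diff(f, x, k))   # every derivative is again a combination of sin(x) and cos(x):
                                # the oscillatory "signature" survives even as the function changes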

At the same time the simpler definition of information - the results of experiments - is not conserved over time. As will be argued later, "god doesn't know the future", for otherwise there would be no need for the present. Time represents the evolution of potential experimental results according to the universal laws. The uniqueness property then means that in a closed system (one that does not exchange information across its boundaries) the observed experimental results will take on a cyclical pattern (like any operation x -> f(x) mod n). We postulate that all models conserve information, as a consequence of their development to describe physical processes and of the physical processes' operation under the uniqueness principle. Of course the human aspect of providing and analyzing the input/output of the models need not conserve this theoretical aspect of information (the underlying information is conserved always, though perhaps not in ways useful to us - for instance as heat when a bit is overwritten), but the model's internal operations to get from input to output will be found to be conservative. Before, I argued this point from the practical consequence that a non-conservative model would need to either "make up" or "ignore" input or output to operate. Now I make the same claim on more physical grounds, that the model describes (and operates in) an information-conserving world, and seek to provide experimental evidence for this claim.

Mathematics, and mathematical routes to get results in models, are information-conserving. This is seen in the requirement for n equations in n variables, n boundary conditions for an nth-order differential equation, and an n-component vector in an n-dimensional space (regardless of Cartesian, radial, or cylindrical representation). It is possible to supply inputs that are redundant or insufficient in information for the desired algorithm, which will result in undefined answers like 0=0 or ∞, but this only shows that the mathematical tools refuse to operate in a way that does not conserve information. To overcome results like ∞ we need to create a new information structure that will enable the information to be conserved (such as cardinal notation for sizes of ∞, imaginary numbers, and even rules like "when the sum of independent functions equals a constant, all the functions are individually constant" [f8]).
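
A small illustration using numpy (the specific matrices are arbitrary): a well-posed n-by-n system yields a unique answer, while a redundant (rank-deficient) one refuses to, signalling the missing information rather than inventing it.

  import numpy as np

  A = np.array([[1.0, 1.0], [1.0, -1.0]])
  b = np.array([3.0, 1.0])
  print(np.linalg.solve(A, b))                 # [2. 1.]: two independent equations fix two unknowns

  A_bad = np.array([[1.0, 1.0], [2.0, 2.0]])   # second equation repeats the first: redundant input
  try:
      np.linalg.solve(A_bad, np.array([3.0, 6.0]))
  except np.linalg.LinAlgError as err:
      print("singular system:", err)           # the tool will not conjure the missing information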

However, what happens in the case of most computational models - when an extremely large (GBs of data) simulation runs and results in maybe 1 or 2 numbers that are actually useful to the operator? This is similar in principle to a hash function and is an example of a non-conserving operation, suggesting that more efficient computation is possible. Consider Lifshitz theory, in which a spectrum is integrated to obtain a single number - the Hamaker coefficient. A similar example is when a particle's velocity as a function of time is integrated to find its ultimate position. In both cases a continuum is reduced to a number by an appropriate algorithm. This is not an information-conserving process (equivalently it does not follow the uniqueness principle) since multiple (infinitely many) continua map to the same number and given the number it is impossible to return to the continuum. However it is possible to invert this step if the analytical integral is carried out. Thus:

x_t = \int{0}{t}{v(t)*dt} -> irreversible
x(t) = \int{v(t)*dt} + x_0 -> reversible [f9]
Eq 0
c=a+b -> irreversible
[c]=[a+b]
[d]=[a-b] -> reversible, as is [c,a] or [c,b]
Eq 1
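
A minimal sketch of Eq 1 in Python (the helper names are mine): keeping only the sum a+b discards information, but keeping the sum and the difference together is reversible.

  def forget(a, b):
      return a + b                     # many (a, b) pairs give the same sum: irreversible

  def keep(a, b):
      return a + b, a - b              # sum and difference together conserve the information

  def undo(c, d):
      return (c + d) / 2, (c - d) / 2  # recover the original pair exactly

  c, d = keep(5, 2)
  print(undo(c, d))                    # (5.0, 2.0)
  print(forget(5, 2), forget(4, 3))    # 7 7: the two originals can no longer be told apart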

Now we note that the effect of differentiation is to reduce a function to a number: d/dx(x) = 1. While it is difficult to have an absolute measure of information content, we can take advantage of the equality to say that the information content of the two sides is equivalent. In the reversible configuration we have:

[f_0, f(x)] -> [\int{f(x)*dx + f_0}] or [f(x)] -> [d/dx f(x), f_0]
Combining the two gives
[f(x)] -> [d/dx f(x), f_0] -> [\int{(d/dx f(x))*dx} + f_0] = [f(x)]
Eq 2
Thus the information is conserved throughout this chain. Since d/dx(ax+b) = a, we can say differentiation acts as [ax+b] -> [a,b].

So we have found that the information content of a polynomial function is equivalent to that of a set of numbers determined by its order (its coefficients), along with the (considerable) information of the differentiation algorithm that reduces it to that set of numbers. As reasoned before, the information content of two algorithms that can reduce the same data to a (set of) numbers is also equivalent. Thus, for instance, the Taylor expansion is a direct example of the equivalence. Yet for sin and cos functions we find the expansions require infinite sequences of numbers. Are oscillatory functions then more information-dense than polynomials? The issue lies in the algorithm - the polynomial algorithm is not capable of representing sinusoids with finitely many numbers, but a Fourier transform is - it should be shown that the two algorithms will not reduce the same set of data to a number (unless it is already a number like f(x)=0), so their information content must be different. For now we are happy that it is possible to define the information content of a polynomial function in this way.
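
A sketch of the [ax+b] -> [a, b] -> [ax+b] chain using sympy (a linear function with a=3, b=7 is chosen for concreteness): differentiation plus the value at 0 extracts the pair of numbers, and integration plus that constant rebuilds the function.

  import sympy as sp

  x = sp.symbols('x')
  f = 3*x + 7                        # [ax+b] with a=3, b=7

  a  = sp.diff(f, x)                 # differentiation reduces the function to a number: 3
  f0 = f.subs(x, 0)                  # the "boundary condition" f_0 = 7
  print(a, f0)                       # [a, b] -> 3 7

  rebuilt = sp.integrate(a, x) + f0  # integrate the derivative and restore the constant
  print(sp.simplify(rebuilt - f))    # 0: the chain conserves the information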

Now consider what defines a number. We might start with integers. On a 2D grid we may define a point as [A,B] where A and B are integers. But similarly we may define an algorithm that gives us [A,B] from a single nonnegative integer C: The algorithm starts at the origin [0,0] and spirals around it C steps until it gets to the desired point [f10]. Then we have [C] -> [A,B]

  | <-  .
  |  0 ||
  ->  ->|
  Fig. 1-2
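
A minimal sketch of the [C] -> [A,B] enumeration in Python (the particular square, counter-clockwise spiral and the helper name spiral are my choices; any fixed rule that eventually visits every grid point would do):

  def spiral(c):
      # walk a square spiral outward from the origin for c steps and return the final point
      x = y = 0
      dx, dy = 1, 0                  # first step goes right
      run, done, turns = 1, 0, 0
      for _ in range(c):
          x, y = x + dx, y + dy
          done += 1
          if done == run:            # finished the current straight segment
              dx, dy = -dy, dx       # turn 90 degrees counter-clockwise
              done = 0
              turns += 1
              if turns % 2 == 0:     # every second turn the segments get one step longer
                  run += 1
      return x, y

  print([spiral(c) for c in range(7)])
  # [(0, 0), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1)]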

We can thus map 1 nonnegative integer to n integers in the discrete space. Does this mean we have gained n∞ information from an ∞/2 starting point? This of course comes at the price of the algorithm - but in the limit of either large n or a large number of n-sequences (as above, we might define 3 integer pairs as [A1,B1], [A2,B2], [A3,B3] or as [C1],[C2],[C3]) it is seen that the algorithm's information content is overshadowed by the reduction of n integers to "1/2". This is the essence of a number - a number can represent 1 unit of "∞" information. Thus a real number is equivalent to an infinitely long series of digits. Indeed we write numbers this way - as a sequence of the integers 0-9. One number specifies a single point in a continuum, and a continuum is a combination of infinitely many numbers, ∞^2. Thus a function like y=x, which lies on a plane, ought to be represented by 2 numbers, which is what we get with our earlier construction [ax+b] -> [a,b]; here a=1, b=0. So far we have:

Number <-> Point <-> Flat line
2 Numbers <-> Tilted line <-> Sine curve <-> Flat plane
3 Numbers <-> Tilted plane <-> Curving (quadratic) line <-> ?

In real systems, of course, the size (maximum capacity) of the number is of prime relevance - this was skipped over in defining the number above, and would be worthwhile to return to, as it will yield different equivalence relations, ones that arguably make more physical sense than the purely mathematical conclusions above. A final point concerns the powerful tool of "independent" systems: if things are the same before and after a certain operation, then in combination with the uniqueness principle we know that the operation was independent of those things. This makes it possible to make sense of otherwise very complex events, so where before we wrote:

[f(x), f_0] -> [\int{f(x)}+f_0]
We recognize the more complete version:
[f(x), f_0, paper, me, integration algorithm, rest of the universe] -> [\int{f(x)}+f_0, paper w/writing, my thoughts, integration algorithm, rest of universe]

Namely, in this operation the integration algorithm and the rest of the universe did not change as a result - therefore we are assured that the information contents of [f(x), f_0] and [\int{f(x)}+f_0] are actually the same, with the assumption that the effect of writing it on paper and thinking about it has not exchanged the 'abstract' information represented by f(x). Now, numerical integration bypasses the integration algorithm and instead repetitively applies an unchanging numerical algorithm - but in the process the *physical state* of the computer changes rather than the nature of f(x). Thus we map a mathematical variable t in f(t) to the *physical* time in which the computer executes the algorithm, and so find the answer. Schematically:

[f(x), f_0, computer state, algorithm, rest of universe] -> [f(x), f_0, changed computer state, algorithm, rest of universe]
Something that can take infinite time (or distance, if we map x to real length) to do in this way can be done finitely by use of algorithms and numbers - that we can readily handle such powerful entities must not go unappreciated [f11].
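
A minimal sketch of this in Python (simple Euler stepping; the step size and the example v(t) = 2t are assumptions): the stored algorithm never changes, while the loop's advancing state plays the role of the variable t.

  def euler_position(v, t_end, dt=1e-3, x0=0.0):
      # the fixed update rule never changes; only the stored state (x, t) does, step by step
      x, t = x0, 0.0
      while t < t_end:
          x += v(t) * dt
          t += dt
      return x

  print(euler_position(lambda t: 2*t, 1.0))   # ~1.0, approximating x(1) for v(t) = 2t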

In a physics curriculum, students are exposed to many conservation principles - conservation of energy, mass, particle number, momentum, angular momentum, entropy/enthalpy/heat, and equal and opposite force reactions. Specifically here I will concentrate on the conservation of energy, since "energy" as a concept has some mystique to it - one may ask "what is energy?" similar to "what is time?". It is always on my mind when I hear a statement like "the two chemicals react giving this product and energy". After studying the different manifestations of energy - thermal, gravitational, nuclear, potential/kinetic, electrical/magnetic, chemical - it is my conclusion that "energy" as such is a human-defined construct that does not have an elementary connection to reality. Energy is conserved because energy has been defined to be conserved - if it is found to not be conserved, we either look for other paths for energy flow, or re-work the definition of energy (this is how thermodynamics came into being - someone realized that a change in temperature required energy). However, the fact that *something* is conserved (and now we know enough about "energy" to know the mathematical form of that something) is indeed an elementary physical feature of the universe - this forms the premise and justification for the symmetry theory. But we might already have a hint as to what that something is - and I claim that it is the *momentum* of particles. This is experimentally verified for thermal energy (since thermal conductivity follows predictions of electron motion/phonon transfer, and thermal capacity follows QM rules for allowed vibrations/rotations/degrees of freedom of systems - the Bose-Einstein condensate being an extreme example). It is natural to extend this to moving particles for the concept of kinetic energy. Since gravity affects the speed of particles, a mechanism of action for gravity might be in line with this concept. The trickier part is handling the case of stationary particles storing energy - chemical and nuclear, as well as photon-excited electrons. Momentum is conserved on all known scales (even within QM), so building the conservation on it is reasonable. The difficulty lies in extending a finite momentum to a non-moving particle that has (e.g.) chemical energy. We should think of the particle as a "self-sustaining" or "warped" wave that can be localized in space despite perpetually moving, or as moving in a circle which represents this underlying momentum [f12]. I think it is no coincidence that the model of electrons as standing waves on a string surrounding the nucleus had as much success as it did - perhaps electrons really are vibrations that are resonant around the nucleus. Then momentum should include the width of peaks and how rapidly they travel around the nucleus - which will be related to the classical m and v. To escape the nucleus the electron wave must be "unwrapped", which will somehow cause the momentum to fall to the classically observed value (binding energy/work function).

Some examples of "Everyday Numbers" will hopefully help us understand the nature and purpose of numbers. From the preceding we may find some categories:

To make use of measurement and mathematical numbers, which are infinitely precise, we convert them into counting numbers - by applying boundaries or a minimum incremental unit (rounding). Thus we might say pi => 3.14, length => 1.5 km, speed => 30 miles/hr - the true mathematical and measurement numbers can only exist as symbols and physical measurements, while the things we write down are necessarily approximate versions. Of course counting numbers are also infinitely precise - but to store them in finite memory we have to "chop off" both the low end (decimal point) and the high end (overflow), and to use this conversion we have to ensure the measurement/mathematical numbers satisfy our assumptions. These assumptions always exist, whether conscious or not. Yet there may be some cases in which the mathematical/measurement numbers can be uniquely connected to a counting-number equivalent: exact relations of equivalence (unity, 1), discrete quantum states (spin up or down), number of oscillations/beats between harmonic waves (gives an integer). These give the most useful/powerful computational results and account for the popularity of digital technology. How did we even get started with measurement numbers? It will be necessary to refer to the earliest scientific writings to understand the origin, and there a difference in the use of numbers may be noted (for one, a lot more emphasis is placed on verbal rather than numerical descriptions).
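
A minimal sketch of this conversion in Python (the increments and the helper name quantize are arbitrary choices):

  import math

  def quantize(value, increment):
      # convert an "infinitely precise" value into a whole number of minimum increments
      return round(value / increment) * increment

  print(quantize(math.pi, 0.01))    # ~3.14
  print(quantize(1.4932, 0.5))      # 1.5  (km, say)
  print(quantize(29.7, 10))         # 30   (miles/hr, say)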

Theories are a keystone of science and its application to the real world. Yet theories are not all-powerful, and their limitations must be recognized when searching for a new theory or trying to define or explain an experiment. We can't help but think in terms of cause and effect, actions and actors, so theories, especially as they are taught, are often used in these ways. While all the theory says is F=ma, we are led to think of a body acted on by a force and then accelerating, and of heavy things accelerating more slowly. The theory made a mathematical prediction (predictive power) and we have assigned to it an explanatory prediction (explanatory power), something believed to describe "how the world works". Good science claims that theories with high predictive power are to be used widely and trusted. Any associated explanatory power is usually a convenience, not a defining factor (one high-profile example is the choice of heliocentric geometry instead of epicycles, because that explanation was more 'elegant'). But it would be flawed to say that the latter has no effect - we are human and like to understand systems intuitively, so we seek explanations from theory, and teach theory more convincingly by offering it as explanation (why is the sky blue? Rayleigh scattering theory...). There are limitations in doing this when the goal is a deep understanding, since much of theory is more-or-less arbitrary (driven by convenience or 'elegance'). For instance, my theory is V=IR. I could just as well write it as I=VQ where Q=1/R. The math outcome hasn't changed at all, but the interpretation is different (and perhaps the path to the solution is also different - easier or harder). So, the variables we hold dear in current theories, like mass/energy/momentum, might not be the 'fundamental' variables that nature cares about. They may be particularly useful for us and simplify the math, but should not be taken as defining nature's laws (there is no way to know what those might be, even). Next I could do an experiment that shows in some cases (a loop of wire) V=IR doesn't hold! What can I do at this point? I would like to keep the '=' sign, and V and I seem to be consistently measurable. Does V need to be measured differently when across a wire loop? Does the loop change R? Or does my theory not apply to loops? Typically we say the latter - call the loop an inductor and say that it follows different rules. But we could achieve the same math outcomes by saying that R has an L component, making the theory more general (as done for detailed circuit analysis), or by saying that V or I must be measured differently across loops of wire because the measurements were designed for straight wires. These options, equivalent in predictive power, offer quite different explanations of the world and interpretations of what is going on.

So, there is no particular reason to believe that an existing theory actually describes world mechanics, if this is even an achievable goal and not a tautological one. Another limitation of theory is its limited applicability to experiment [f13]. Really, before applying a theory for predictions we must satisfy all of its axioms, but this is rarely done - 'errors' are thrown out as noise. Some theories are so vague they can never be disproved: "you will get better soon". Others require too much information and are impractical to disprove: "your life is determined by the exact star alignment at birth". What of "there can be no perpetual motion machine"? This is also unprovable at present (while "there can be no traditional machine" is provably false - the availability of experimental evidence matters), and can only be disproved [f14]. So, using such a statement to say "this experiment will not work, stop wasting your time" is bad science just as much as the earlier two statements of fortune-tellers and horoscopes. *Theory cannot be used to disprove experiment*, only to suggest which experiment might be more effective to pursue with limited resources [f15]. Theory cannot tell the scientist in what ways it is lacking or how it can be improved - only experiment can - so using theory to deny experimental findings is to remain stagnant and unaware of a more complete operation of the world.

I propose a "close approach" hypothesis - a hypothesis that is widely used in science but not widely acknowledged. It may be possible to prove in some mathematical contexts, but in physical or philosophical problems this is usually not possible - it is instead assumed to apply. There is an objective fact that a particular theory may predict, for instance F=ma. This may or may not correspond to real experimental data, based on which we determine the predictive ability of the theory (commonly referred to as "correctness" or just "validity", as in Einstein's theory surpasses that of Newton). Beyond that, we have subjective feelings about what this theory means about the world and what steps to take to improve the theory (as well as how the author arrived at the theory). This is where the "close approach" hypothesis applies. I first encountered it in grade school with an algorithm to find a number's square root: n1=n0+(x-n0^2)/(2*n0), which iterates by n1->n0 until eventually both approach the true solution (which is where the name "close approach" comes from). The stability of this iteration can be proven mathematically, but I did not know this at the time and thus wondered a lot about whether such iteration is really guaranteed to find the solution. There were similar examples of such non-rigorous iterations later in high school as well, in solving systems of equations, which left me similarly baffled ("you can just do that?"). In the algorithm, there is some starting point n0, and a procedure is applied to get a "better" estimate n1. Let's apply this to theories: there is some starting point (ether theory) and some procedure is applied to find a "better" estimate (Maxwell's laws). It would be impractical to get to our current state of knowledge otherwise - we build on previous experience, "standing on the shoulders of giants", instead of completely starting over each time - a task probably beyond our simple minds. In our teaching, application, and description of theories we accept the "close approach" hypothesis, by saying that we have a better understanding of the world and by framing the world in terms of the theory. Having minds designed evolutionarily to keep track of actors and actions, we can't help but use the theory in our descriptions of the world, cementing its reality: knowing about electrons from atomic theory we now say electrons flow in wires to represent electricity; when I think about an electron microscope I think of electrons (as individually identifiable "balls") flowing from the filament onto the sample; when I think about gases I imagine billiard balls colliding; when I think of the nucleus I imagine more balls representing protons and neutrons. None of this is really justified or claimed by the theories - which were vetted only because their predictions were close to experimental results. But it permeates our thinking [f16] - even in scientific circles people will talk of electrons and neutrons as if they were "real" and probably ball-like - this is necessary for rapid communication of complex ideas, for intuitive reasoning, but it also implicitly locks us into a cognitive framework which may or may not be a good representation of the world and a path to the "real solution".
Coming up with a new theory or modification to existing theory is a highly imaginative and intuitive process (computer learning is a "rigorous" approach - but such "theories" are rather uninteresting to us as they are "meaningless" in what they say about the world), and in this process we must use the tools of intuitive thinking - including the unjustified worldview of existing physical theories (take Einstein's thought experiments as a real example). It is the "close approach" hypothesis that lets us continue on this path while believing that some day we may have a "theory of everything". This is why even now I question numbers, energy, and other deeply ingrained constructs as "artificial".
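
For concreteness, a minimal sketch of the square-root iteration mentioned above (this is the classical Newton/Babylonian update; the starting guess and step count are arbitrary):

  def close_approach_sqrt(x, n0=1.0, steps=6):
      # repeatedly replace the estimate with a "better" one: n1 = n0 + (x - n0^2)/(2*n0)
      for _ in range(steps):
          n0 = n0 + (x - n0**2) / (2 * n0)
      return n0

  print(close_approach_sqrt(2.0))   # ~1.41421356..., closing in on sqrt(2) without ever being handed it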


[f1] same reason we can look at expressionist or comic-book drawings and easily know what's going on while a computer program would be stumped: the symbols are what is actually represented and this is what we operate with, not the underlying photon receptors in our eyes

[f2] this application of mathematical models is on top of our 'intuitive' model, which is an interesting consideration for what types of mathematical structures we miss just because we don't intuitively think of them

[f3] this tailoring/modification, interestingly enough, is intuitive - even if we implement 'meta-models' (models that describe/create models in a rigorous manner) all we can do must point back to our intuition and our senses, a key aspect of "the human condition"

[f4] in this light, the physical conflict (excommunication, execution) between church and science is really a conflict between intuitive world models that people operate on, and an established intuitive model is highly resistant to change, causing rage-inducing feelings of frustration and cognitive dissonance, since a tiny change in axioms can mean tremendous mental processing load to create a new working model, especially for one as complex as the human intuitive model. When one feels wronged, one wants to fight back against the attacker, but with a change in the intuitive model it is as if one is wronged by the world as a whole, there is nothing to fight and yet one is attacked from every angle. This is seen on a small scale in bereavement, when the intuitive model must be updated to reflect the loss of a loved one.

[f5] the development of both the intuitive model in childhood and the language-based and symbol-based models in human history at large are all caused by the same mental drive, biologically hardwired and evolutionarily selected: the drive to learn, to keep asking why, to predict - the brain feels most content when it knows exactly what will happen next, it feels least content when it gets surprised (something happens which it did not expect). This element of surprise is inherently negative (painful), to the extent that the brain is forced to create models (consciously or unconsciously) that will make predictions such that there is no more surprise.

[f6] something like prime decomposition, or symmetry operators. Such a decomposition would *not* be evident, just like the chemical elements were not evident to the first proponents of atom theory. Finding it would require careful study of a wide range of experiments and theories.

[f7] as a ratio to 1 unique result as suggested

[f8] this rule uses the information present in the equation that is not otherwise accessible to standard mathematical algorithms

[f9] we see that in the info-conserving math we must explore all combinations, as in quantum computing

[f10] a similar method is used in proving that fractions are countable

[f11] even "simple" numbers like 0,1,2 or functions like x+y

[f12] later it will be argued that all movement must be at the speed of light, so the circle diameter is the determinant of stored energy - a smaller circle implies faster motion and faster time evolution. This is in line with the time/energy uncertainty principle as well as the huge energy content of massive particles (and that the more energetic, more massive ones are smaller in size than the less energetic ones: consider electron vs proton). In this sense that small scales equate to big energy, it may be fruitful to remember the relativistic symmetry of v to 1/v.

[f13] Energy, for instance, is what I call "that which is conserved". If we find energy is not conserved experimentally, the first conclusion is that it "went somewhere". But maybe our definition of energy is wrong - or incomplete. Notions like energy should not be taken too literally in regimes where they have not been experimentally proven, and should not be taken as fundamental variables arbitrarily.

[f14] I think a theory can be said to be proven if it works exactly as described with axioms it requires being satisfied in real experiments. F=ma has been proven for all objects humans have interacted with.

[f15] there is much benefit in experimenting in "the world lab" ie using the ocean/sky/planets/stars to test theories' effectiveness at recreating real natural effects with already known starting conditions and outcomes

[f16] I found this when thinking about electrons in an accelerator - I realized I was looking at balls! I tried to think about fields instead to align with my field theory but realized this has the exact same problem. It is something we cannot escape but must keep in mind as a limitation of our intuitive abilities.
