Space and time tend to scale together: small things move/change quickly and big things move/change slowly. This is particularly surprising if the big things are made up of small things - big things change as a time average of small things (ie atoms+electrons, macro-objects+animals). Might there be a possibility for fractal simulations of phenomena?

This also happens with temperature, ie cold things move slowly. Coincidentally, Einstein predicts that hot things, having more energy, will be heavier. I predict that hot things move faster in time and thus experience a higher gravitational pull, when gravity is viewed as a "time warp" or "time drag". This can also work with an Entropic Gravity viewpoint or wavefunction-collapse gravity viewpoint.

Some basic physical observations:
- Persistence: objects tend to exist for long times and reflect a past state (system's "memory")
- Equality: objects can be similar or equivalent to each other, and yet some exclusion principles (such as in space coordinates) also apply
- Information exchange: objects that can be observed (by definition all objects) *exchange* information (ie no information sources or sinks - none has been verified, at least) in a similar way to energy conservation, also following the arrow of time/causality. In other words all information exchanges are symmetric (as must be energy transfers) [f1].

Nature of space: effect of action decreases with distance; long-distance communication is possible by 'concentrating' action as by an insulator-covered wire which creates a "tunnel" effect for the action and it spreads but spreads in the limited wire area. In any case all the space between 2 points *must be traversed* - no "action at a distance". This is related to entropy: energy will spread wherever it can, and energy can be "converted" whenever it can come in contact with a "coupled" level [f2] - for instance where electric and motive potentials are coupled in one level - these devices are necessarily lossless/reversible, it is the spatial spreading out of energy which is irreversible.

Earlier in this study we have made liberal use of the concepts of big and small (scales, objects), and here we desire to more clearly understand this distinction. If we are to accept causality, then the universe should operate in a sequential manner, with small effects combining over time to result in big effects. This implies that big effects are not elementary but a human construct, a consequence of a number of small effects. Similarly, if we ask at what scale the universe operates, maintaining the notion of causality leads us to claim that the universe's laws must operate on the shortest time scale, which is associated with atomic motion/small effects. The laws thus operate at the "far end" of the ∞ information sequence, and over time the effects of these laws show up to us as observable, "bubbling up" the information chain. This means that generally, knowing the laws for a small system allows us to predict the behavior of a larger system, but not the other way around. The laws of the universe may be very complex at the lowest level, but what we observe is a cumulative (in time and space) effect. Thus only those aspects of the laws which are additive or otherwise "constructively interfering", and not "averaged out", can be observed in an experiment far removed from the elementary scale of the universal laws.

On the other hand, some effects (like gravity) are so small on the atomic scale that we must question how they fit into this model. Another question is how a small particle (an atom falling towards the earth in its gravitational well, or an electron in an SEM column) exchanges information with all the other particles that make up the gravitational/electric field affecting it (ie information must be exchanged, but the information content of a single particle is much less than that of the big system, so how does the particle know what part of the system to exchange with?) [f3].

The Pythagorean theorem and the circle equation (C=2*pi*r) say something about the space of our world. It is odd that going at right angles sequentially results in a longer path than doing so simultaneously:

 -----
      |              /
      |      vs     /
      |            /

Note that even in the limit of infinitely small spatial discretization, the diagonal line is shorter than the two right-angle lines. Taking any small section of the diagonal line and replacing it with the sides of a triangle necessarily makes the path longer. In other words, the diagonal line is an essentially different entity from two straight lines, which is a weighty argument against a quantized/discretized space (the importance of the spatial property that allows travel along any angle is also considered later under Couplings). Below, the first 2 images both have path length equal to the two lines, while the 3rd image is the shorter diagonal line.

 --
   |          ---
   --            |             /
     |    vs     |     vs     /
     --          |           /

It is odd that adding forces at right angles leads to a larger sum of forces than in a straight line:

 |
 |  F1         \  F2          /  F3
 |              \            /

F2+F3 >= F1 although both {F2,F3} <= F1
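Both oddities above can be checked numerically. A minimal sketch (Python; the endpoint (1,1) and the force magnitudes are illustrative values): the "staircase" path between two corners never approaches the diagonal's length no matter how fine the steps, and two perpendicular forces each smaller than a reference force can still sum to more than it.

```python
import math

def staircase_length(n):
    """Length of a path from (0,0) to (1,1) made of n right-steps and
    n up-steps, each of length 1/n (the 'staircase' approximation)."""
    step = 1.0 / n
    total = 0.0
    for _ in range(n):
        total += step  # horizontal segment
        total += step  # vertical segment
    return total

diagonal = math.sqrt(2.0)        # straight diagonal from (0,0) to (1,1)
print(staircase_length(10**5))   # stays at 2.0 - never approaches sqrt(2)
print(diagonal)                  # ~1.41421

# Perpendicular forces: each smaller than F1 = 1, yet their vector sum exceeds it.
F2 = F3 = 0.8
print(math.hypot(F2, F3))        # ~1.131 > 1, although 0.8 < 1
```

The staircase total is exactly 2 for every n, so the diagonal is not a limit of right-angle paths, matching the argument above.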

When things fall in gravity, they feel no acceleration despite being accelerated in our frame. When things are on the ground, they feel an acceleration despite not accelerating in our frame. Things will fall in parabolas with constant distance between them if dropped at the same time, but increasing distance if dropped at different times [f4], though they will land with the same dt as they were dropped. When an elevator moves up/down at constant velocity, the occupants inside still feel the same acceleration g as on the ground outside.
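The two drop scenarios can be sketched numerically (Python; g, the drop heights, and the 1 s offset are illustrative, with no air drag assumed):

```python
g = 9.8  # m/s^2

def height(t, t_drop, y0):
    """Height of a body released from rest at y0 at time t_drop (free fall)."""
    dt = max(0.0, t - t_drop)
    return y0 - 0.5 * g * dt**2

# Dropped at the same time from different heights: separation stays constant.
for t in (0.0, 1.0, 2.0):
    print(t, height(t, 0.0, 100.0) - height(t, 0.0, 80.0))   # always 20.0

# Dropped at different times from the same height: separation grows while
# both fall (linearly in t), yet the second lands the same dt after the first.
for t in (1.0, 2.0, 3.0):
    print(t, height(t, 0.0, 100.0) - height(t, 1.0, 100.0))  # ~-4.9, -14.7, -24.5
```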

How do fields propagate in space? What does it mean to move in space? We have to posit 3 dimensions to proceed, and then a notion of distance becomes useful. What is a distance? It is something we could measure in an inertial frame using a light pulse time or a "measuring stick" (ie bound light of characteristic wavelength * number of waves = length). The existence of measuring sticks and propagating light pulses is evident in our observations. What does this require of space? First, it means space cannot be infinite and continuous. If it were, there would be no reference point for any size scale - with same physics laws independent of an absolute scale (infinitely continuous space - no handles/slippery/arbitrary) we should see atoms the size of a galaxy or galaxies the size of an atom, but we do not. Atoms and the associated physics have a characteristic length scale. Even if we assume atoms and galaxies are essentially the same structures just with longer time/space distances and similarly adjusted evolution, there is no way to define anything other than uniform filling of all space (not even a fractal structure) without a notion of some origin. Even an external 'god' could not "get a handle" on an infinitely continuous space. In order to escape from the infinite symmetry of an infinite system [f5], we must introduce at least an origin along with a non-uniform matter distribution, both of which require referencing to the boundaries of a finite system (in drawing a Euclidean 'infinite plane' we tacitly make the system finite when choosing to place the origin in the center of our picture/plot). Further, assuming that atoms and galaxies operate under essentially different physics (as opposed to a fractal system) - supported in part by the discrete force regimes (rather than continuous transitions, say from e-field to g-field) - we must have the notion of an *absolute* length scale that determines in what ways physics applies to a given object. 
This reference is absolute, and may be based on the system's finite boundaries, or may be an explicit discretization of space. One such constant is the speed of light, which seems to hold in all our experiments. A discrete space seems difficult to reconcile with the observed accuracy and usefulness of analytic solutions, but a continuous space in which such solutions are natural has serious difficulties describing interacting/changing/evolving systems because how to continuously change from one solution to another is not clear (recall the notes on the "real number line/dust"). For a small enough discretization, the two ideas become indistinguishable, but even then there seems no natural guideline as to how to discretize space - in cubes/pyramids/circles? Radiation (light) propagates in spheres, which is resistant to discretization. And we seem to be able to send/receive light to/from any angle equally well, based on some of our most accurate interferometric experiments (Michelson and others). Of course I cannot resolve this now [f6], but at least I will claim an absolute length scale, and thus a finitely-bounded system to reference that to, and thus an absolute origin within this system. A non-uniform matter distribution will also require differences in dimensions, or an absolute orthogonal basis set, identifiable from the boundaries if not within the system.

With a space thus set up, we ask: how can motion be propagated from one instant to another? The physics of the space is defined by the length constant - speed of light - and thus all possible exchanges, evolution, or movements take place at precisely the speed of light. The basic physical process of propagation or exchange - how information gets from one point in space to a neighboring point (or, how a point learns about its neighbors - as required for a local theory (only dependent on neighbors - which are defined by the absolute length scale)) is defined in spacetime and takes place at the speed of light. A moving atom must be viewed as some number of light-like entities bound together for a net zero momentum (which leads to measured mass - note that while 1 photon is massless, 2 oppositely emitted photons are massive [see Reflections on Relativity] and can be interconverted into mass), which can either move in a bound state and evolve (perform computations/timekeeping) or move in a light-like state and not evolve (described as time dilation if we posit time as an actual dimension - which I do not, viewing it instead as a consequence of a self-overwriting process). Then, if we take an instantaneous "snapshot" of a system with moving parts - the only information that is preserved from one moment of time to the next - the moving parts look essentially different from stationary parts even though the snapshot is static. The difference can be visualized with a simple 2-photon particle:

 ~~>  *  <~~      stationary particle, 2-photon view

Now accelerate to the speed of light towards the right; this means the right-traveling photon is blue-shifted to E=mc^2 and the left-traveling photon is red-shifted to zero:

 /\/\/\>  *  <-----

Yet in the particle's inertial frame the two photons are still equal-energy (as in the first image) and the world appears to be moving towards it. So a moving particle will have unequal photon energy distributions, making it more light-like in a given direction and less time-like in our rest frame. This smoothly turns the particle into a photon when it is accelerated to the speed of light (this is later shown to not be accurate, as a fast particle remains essentially different from a photon).
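The two-photon bookkeeping can be checked with the relativistic Doppler factor. A minimal sketch (Python; E0 and beta are illustrative values): boosting blue-shifts one photon and red-shifts the other, and the pair's totals come out to the standard relativistic energy and momentum of a massive particle.

```python
import math

def boosted_photon_energies(E0, beta):
    """Photon energies of a 2-photon 'particle' (rest energy 2*E0) seen from
    a frame where the particle moves to the right at v = beta*c."""
    D = math.sqrt((1 + beta) / (1 - beta))   # relativistic Doppler factor
    return E0 * D, E0 / D                    # right-moving (blue), left-moving (red)

E0, beta = 1.0, 0.8
E_right, E_left = boosted_photon_energies(E0, beta)
gamma = 1.0 / math.sqrt(1 - beta**2)

print(E_right, E_left)                          # ~3.0 and ~0.333
print(E_right + E_left, 2 * E0 * gamma)         # total energy = gamma * rest energy
print(E_right - E_left, 2 * E0 * gamma * beta)  # net momentum (times c) of the pair
```

The sum D + 1/D equals 2*gamma and the difference D - 1/D equals 2*gamma*beta, so the bound pair behaves exactly like a particle of mass 2*E0/c^2.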

 ~~>      ~~>  *  <~~

An external photon accelerates the particle by adding energy to the right-moving photon and subtracting energy from the left-moving photon. In the particle reference frame, of course, both photons remain equal energy.

 E+ ~~/\/\>      <~~/\/\ E-

A field gradient accelerates the particle (blue-shifted right edge and red-shifted left edge - in the particle reference frame of equal wavelengths this translates to a continuous acceleration to the right).

How does the particle know to accelerate, ie that it is in an energy gradient - how does it explore its surroundings and how far does it look? This is the exchange process proposed above, since particle 'motion' is explained in terms of fields. A potential mechanism for this process is a field 'relaxation' - the field takes on the average-like values of its (characteristic length scale) neighbors (Huygens' principle and the wave equation) at the characteristic time, leading to the constant speed of light. The electric field in its essence must be related to Coulomb repulsion/attraction more so than to light. Light is the propagation of information in the field, but this only occurs because the change in repulsion must be propagated "to the rest of the universe" after motion of a given electron, and this propagation can only take place at the speed of light, leading to Maxwell's laws. We can try attaching clocks to two charged particles, moving them apart, and attaching both to heavy masses so they cannot repel each other. Then at t=1 on particle 2's clock, particle 2 is released while particle 1 continues to be attached to a heavy mass. Particle 1 will not know about the reduced repulsion from particle 2 until a light propagation time (say 1s) later, at t=2. Saying that the two instances must be simultaneous leads to the "simultaneity plane" notion of special relativity, but here no time axis is required. Forces are not conserved in a global sense:

t    0   1   2   3   4
     ---------\_______     F on particle 1
     -----\___________     F on particle 2
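The delayed-force table above can be sketched as a retarded-time lookup (Python; the 1 light-second separation is illustrative, and the force is simplified to just drop to zero on release, as in the table):

```python
SEP = 1.0        # separation between the particles, in light-seconds
T_RELEASE = 1.0  # particle 2 is released at t = 1

def force(particle, t):
    """Repulsive force felt at time t, using only *retarded* information:
    particle 1 learns of the release one light-travel time (SEP) later."""
    t_known = t - SEP if particle == 1 else t  # when each learns of the release
    return 0.0 if t_known >= T_RELEASE else 1.0

for t in range(5):
    print(t, force(1, t), force(2, t))
# At t=1 particle 2's force is already gone, but particle 1 still feels 1.0
# until t=2 - force is not conserved globally during the light-travel window.
```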

Note here, being attached to a large mass (leading to lower spatial, light-like motion) has led to an increased time delay (more evolution, time-like motion) in when particle 1, compared to particle 2, believes particle 2 was released - so in a sense particle 1's clock runs fast (but really it is an offset, not a rate change, so this does not contradict Einstein's gravitational time slow-down near large masses).

Finally, consider the exchange process (it takes place locally at every (or every discrete) point in the universe) and the speed of light as a limit on the computational speed (or evolution) of the universe - this is an absolute scale, so even to a 'god' the universe's future is not seen until it is actually attained. Namely, the fastest computation we can carry out must be in our rest frame - if we accelerate the computer we can only slow down its computation rate relative to our rest frame. It may be possible to compute faster by sending the problem to a computer in orbit whose clock is faster due to reduced gravity - but this is just saying that our rest frame isn't that inertial. It is possible either to do the computation fast, or to transmit it fast, but not both. This leads to the conclusion that computation is just a series of transmissions, or particularly directed information exchanges. Small computers will always outperform large computers although both are possible, and small-scale phenomena *must* evolve quickly while large-scale phenomena can have a permanence. [f7] This explains the otherwise mysterious relation between length and time scales seen in all fields of science.
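The "small computers outperform large ones" claim can be made concrete with a light-speed bound on synchronous clock rate (Python; the machine sizes are illustrative): a signal must be able to cross the whole machine within one cycle.

```python
c = 3.0e8   # speed of light, m/s

def max_clock_rate(size_m):
    """Upper bound on the synchronous update rate of a computer of physical
    size size_m: one full crossing per cycle, at most at the speed of light."""
    return c / size_m

for size in (0.3, 0.03, 0.003):   # 30 cm board, 3 cm chip, 3 mm die
    print(f"{size} m -> at most {max_clock_rate(size) / 1e9:.0f} GHz")
```

The bound scales inversely with size, which is the length-time scaling the paragraph points to.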

Space is a key starting point for all physical theories I can think of. Ideally I could posit a theory that explains space, but this seems still far in the future (see [From Zero to Infinity]). At least I can say what we know about space: space is the mechanism which constrains the relationships between points - it makes it impossible to move away from one point while staying close to another; it limits the number of ways to go around a point; it limits the number of mutually equidistant points to 4 (implying 3 dimensions); it makes a spherical shell a fully-enclosing boundary; it makes it so that only 3 orthogonal thrusters on a rocket can operate without influencing each other; it makes it so that a point source of light leads to a spherically expanding wave (Huygens' principle). For now I take this much as a given. The theory of relativity states that x^2+y^2+z^2-c^2*t^2=0 for light. Now what is t? I say t is defined by light-like phenomena, or say the number of bounces of light in a box of length L:

|  ~~>  |
|  <~~  |      Box of length L with mirrors at either end

In the above box, say n is the number of photon bounces; then time t = n*L/c (c is the speed of light). Then x^2+y^2+z^2 - c^2*(n*L/c)^2 = 0 -> x^2+y^2+z^2 = (nL)^2, or R^2 = (nL)^2, or R = nL, where R is the radius of a sphere wave emitted from the origin. This radius is equal to the total distance traveled by the light inside the box.
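A minimal numeric sketch of the bounce-counting clock (Python; c and the box length L are illustrative values): the free sphere wave's radius always equals the total distance covered by the bound clock light.

```python
c = 3.0e8   # speed of light, m/s
L = 0.15    # length of the mirror box, m (hypothetical clock)

def clock_time(n):
    """Time defined by counting n one-way photon bounces in the box."""
    return n * L / c

# A sphere wave emitted at t = 0 has radius R = c * t = n * L: the free probe
# light and the bound clock light have covered the same total distance.
for n in (1, 10, 100):
    t = clock_time(n)
    print(n, t, c * t, n * L)   # c*t equals n*L
```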

In other words, light always travels at c, whether it is unbounded (sent to probe a distance) or bounded inside a clock. Time is another human construct that is not fundamental. If I measure the time it takes a light pulse to reflect from a distant object, I am really comparing the distance to the object and back vs the distance back-and-forth within my "clock", and the two must be equal. Then, relativity really says: space is homogeneous and isotropic; information travels at the speed of light; the speed of light is a constant. Information here is that which does not evolve in time. A "photon" cannot change in transit since it travels at the speed of light. If a spaceship travels at nearly the speed of light from here to Mars it will barely change (its travel will be information-like) while our normal slow spaceships will change a lot during the journey (ie crew aging).

To further show that time is a construct, note the effect of temperature on evolution. Cold objects will experience less time, due to slow interaction and evolution, compared to warm objects. A winter-hibernating organism, from its point of view, really time-travels to the future by slowing its clock for a while. Our scientific reference clocks are kept at constant temperature - why should we expect that there is an underlying absolute time?

Since real clocks are not "light clocks" but use electrons and atoms, we must reason that electrons and atoms also behave like "bouncing light", moving back and forth at the speed of light c. It would be more elegant to suppose, for an electron, a circular motion of a light-like entity at speed c rather than a back-and-forth one, giving rise to a natural notion of spin (and its associated 'quantum' properties). But if all matter is light-like, how can it move much slower than the speed of light or remain stationary? This is due to non-coherent motion. Coherent motion preserves system information and moves it as a plane wave at the speed of light. Non-coherent motion allows system interaction (evolution/computation) but impedes net motion of the system so it is less than the speed of light. Fast systems evolve more slowly (moving clocks tick slower).

As an analogy, consider a photon inside a mirror tunnel:

-----------
 /\  /\  /\
/  \/  \/
-----------

If the photon takes on near-90deg reflections in the above picture, its motion towards the right will get closer and closer to zero, even while it still moves at the speed of light up and down. So its effective velocity to the right can be set to anything from 0 to c by this mechanism. What about reflections in the direction of motion? It is not enough to have length contraction. It must be that contraction is dependent on light direction: length seen by the photon traveling to the right is longer than that traveling to the left, so a light bouncing back and forth actually travels to the right relative to stationary observer, at a speed v
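The zigzag mechanism can be sketched numerically (Python; the angles are illustrative, and theta is measured from the tunnel axis):

```python
import math

c = 1.0   # units where the speed of light is 1

def rightward_speed(theta):
    """Net rightward speed of a photon zigzagging between the tunnel mirrors,
    where theta is the angle between its path and the tunnel axis."""
    return c * math.cos(theta)

for deg in (0, 30, 60, 89):
    v = rightward_speed(math.radians(deg))
    transverse = math.sqrt(c**2 - v**2)   # the up-down part of the motion
    print(deg, round(v, 3), round(transverse, 3))
```

Identifying v with a particle's speed, the leftover transverse rate sqrt(c^2 - v^2) = c/gamma is exactly the light-clock time-dilation factor: the faster the net rightward travel, the less is left over for internal back-and-forth "ticking".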

|  ~~>  |                     |   ~~>   |
|  <~~  |    stationary vs    |  <~~    |    moving to the right

This still requires a definition of v though to determine the amount of contraction - so this is not yet self-consistent. However I believe such a direction-dependent space "warp" is possible and can explain gravity and inertia (I will later claim this to be due to lag between different interacting field types).

What is meant by "local fields" is: there is no "action at a distance" without something that will transport the action through that distance (meaning this transport must be measurable at each point along the distance), and at each step the transport must satisfy the same laws regardless of what (if anything) is actually being propagated - these laws are physical truths that define the space at all times. This is similar to how finite difference codes work - each point knows only about its nearest neighbors, not the rest of the system nor the boundary conditions. While *we* see the "whole picture", each point sees only its local vicinity and reacts accordingly [f8]. The evolution of whatever we simulate in such a system is caused by local-acting fields, and takes place at finite speed. The laws apply equally to each point, so the evolution is a function of the boundary conditions. Physically this is the most coherent way to represent reality, since every experiment shows that an action will first affect the closer (in space) parameters and then the farther (in space) ones, and laws must apply at all scales so therefore the small-scale laws can only be a function of nearby (local) effects.

We can apply this to diffraction. One might ask, how can an observer "see" diffracted light? In the diagram below, the observer is not in a straight-line path to the light source. If the slit was made bigger, the observer would *not* see the light, but with the slit small enough the observer does see the light.

      .    Light source

  ---   ---    Slit
       /
      /
 *   Observer

The local field view says the observer "sees" only the local field, and the origin of that field's disturbance as "diffracted light" is all human interpretation nonsense, like energy and numbers. The local field oscillated because its neighbor oscillated, in turn because its neighbor oscillated, and so on back to the source we interpret as a light bulb. This is why Huygens' principle applies to light - at each point, information about the earlier 'source' is lost; only neighbor information remains (and only fleetingly). This explains why we see light from a slit but not from a corner: not because "light that makes it through the slit diffracts" but because "light that is blocked by the slit causes loss of information about wave directionality".
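A minimal Huygens sum makes the slit-width claim concrete (Python; the slit widths, wavelength, and 30-degree observer angle are illustrative, with far-field phases assumed): each point of the open slit is treated as a coherent secondary source, and the off-axis intensity is compared for a narrow vs a wide slit.

```python
import cmath, math

def relative_off_axis_intensity(slit_width, wavelength, angle, n=2000):
    """Huygens sum: model the open slit as n coherent point sources and add
    their far-field phasors at the given off-axis angle; return the intensity
    normalized to the straight-ahead (angle 0) value."""
    k = 2 * math.pi / wavelength
    def amplitude(theta):
        total = 0j
        for i in range(n):
            y = (i / (n - 1) - 0.5) * slit_width       # source position in slit
            total += cmath.exp(1j * k * y * math.sin(theta))
        return total
    return abs(amplitude(angle))**2 / abs(amplitude(0.0))**2

angle = math.radians(30)
narrow = relative_off_axis_intensity(0.5, 1.0, angle)   # slit half a wavelength
wide = relative_off_axis_intensity(10.0, 1.0, angle)    # slit ten wavelengths
print(narrow, wide)   # narrow slit sends far more relative intensity off-axis
```

The narrow slit destroys the directionality information (its sources barely interfere), so the off-axis observer still sees light; the wide slit preserves it and the off-axis phasors cancel.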

How can we describe our perception of time, and how might it affect the theories and interpretations we provide? One might argue with some degree of certainty that our time perception has evolved to help us succeed in our survival tasks - chasing prey, reacting to weather changes, even the capability to plan ahead for the near and far future. Probably the most direct physical basis for our time perception is chasing/evading other animals, as our brain speed is just fast enough for running on a difficult terrain and it seems even robots with faster "brains" would not run faster due to physical/force/stability constraints (as well as the nature of the terrain we evolved for; cars of course can be faster). At the same time, our brains clearly operate on multiple time scales, in a physical sense. We can sense kHz with our ears, perceive ~100 Hz with our eyes, and the brain's in-cell processing probably runs in the MHz-GHz range. Meanwhile our overall experience is best described in terms of seconds or Hz, about the time over which large animals move. In the modern worldview, we have made the assumption that time is something universally present, going off our perception of time, thus using the 'second' as a unit, and defining regular physical phenomena in terms of that. I would argue that such an extension is not necessarily accurate. Instead it is possible that different objects experience time on different scales. Our only way to interpret time is from how we experience it - it seems mostly constant, and is always present. This is because our brain has an "internal clock" which we reference, and such a time-based view of reality is the only one we experience. Obviously clocks are seen to keep time. My suggestion is that time "exists" or "is carried forward" only in energy-dissipating systems like clocks, or our brains. Then time is a consequence of energy exchange, and objects that do not exchange energy/interact persist "indefinitely" yet do not feel time passing.
For instance quantum theory suggests photons interact with individual atoms. I argue that then photons must provide their own "time basis" or "clock", for if they did not they would always be coherent, like sound waves or ripples in a pond - it is not possible to have a "blackbody" sound source since what it emits is always coherent! Then interacting atoms (like atoms undergoing a collision) will form a mutual "clock" to provide a timescale to exchange information. One thing left unanswered is how far-away objects interact (for instance Coulomb repulsion takes place over long time+distance) - then again we define "far" as "long time to travel" so when clocks change the concept might be less useful. My second suggestion might reconcile the idea of time as a continuum (Zeno's paradox). I claim that with different time scales of interaction different physical laws apply, in a way similar to a series summation that is convergent. Thus when claiming there is an "infinite amount" of time in between any two moments (which defines the continuum) we are tempted to say "impossible - then anything could happen within that time!". But the changing nature of the physical laws with time scale ensures that what can happen in a "short" time is consistently less than what happens in a "long" time, even though both intervals are in a sense "infinite".

Time is something kept by clocks. It is curious that we are able to make devices that accumulate the number of repeats of a cyclic phenomenon, though in principle this is similar to the idea of memory, and causality/state is conserved until acted upon (exchanged info with). However, time is arbitrarily defined by humans as a certain number of ticks of a specific clock - maintained at just the right conditions. If we freeze or heat up a common clock (for instance), it will cease to keep time per our standard definition. In other words, in order to keep time the clock mechanism must be specially designed and placed in an appropriate environment. Do systems that are not "clocks" experience time? My view is no - under information theory I view time as the rate [f9] at which information is exchanged between particles. Thus cold systems which barely change are argued to have in fact a "slower" clock than hot systems that rapidly exchange information. For instance metals that are hot give off light and can be deformed under load. Then metals at room temperature should give off correspondingly less light and take a correspondingly longer time to deform under the same load (creep), because their clock has been slowed down. At high temperatures objects will decompose - a much faster version of similar decomposition at room temperature, because of a faster local clock. We may consider light bulbs as an example - if the energy put in to heat up a filament to incandescence temperatures were instead used to accelerate (and keep accelerating) the filament, would it give off similar radiation?

Mathematically, time is the object that moves the world from one information state to another, in a continuous manner. As reasoned earlier, all numbers are discrete - time is a transformation that can step from one number to the next one continuously, something that we have not comprehended in the standard framework.
Physically our present theories only make sense because we can simulate the effect of time by employing time itself, since we live in a time-present world. The theories themselves, and anything they purport to do, are only "snapshots" of what they represent. They rely on the ability of beings like us in a world where time operates to dynamically implement their action using a real physical system set up in a specific way, like our brain or a computer. That is the only way a theory can give an answer or be of use - through interaction with a real (dynamic) physical system that is inherently time-based. I might consider the notion of time as computing power, an essential ingredient of our universe – just writing some equations won’t magically create a world, the best we can do is simulate the equations but this is using our real time and would be impossible in a non-time-evolving universe! If a theory could be written directly in terms of its dynamic variables, the answers would be self-evident as the theory itself evolves into the desired structure. This is what is done with traditional experiments.

Why should time run in a given direction, and why are there two directions (forward and backward)? Regardless of how time "runs", a self-consistent theory must ensure the matching of system parameters throughout time (every cause must have a consequence, every consequence must have a cause - the two are mutual/symmetric/inseparable). Extending a view beyond photons, how does an atom "know" of another atom in its vicinity - in order to carry out gravitational attraction/photon exchange/Coulomb interactions? The information content of just atom locations is vast, but including in that all the "connections" between atoms makes it unimaginably larger (consider that we can see stars). One possibility to form this is that matter continuously emanates "feeler" waves that somehow inform the original particle of the location of other particles (a more basic view is that matter *is* these waves, since we never "actually observe" matter). This poses similar difficulties as photon waves - how does it decide where/when to collapse? How can it store so much information and later discard it? How often/how strongly does matter emit waves and why (what is the driving force)? [f10]

Time is based on light-like interactions so relative to my rest frame, if something travels fast it cannot evolve fast - evolution becomes external (moving forward in space) instead of internal (calculations/interactions within the system). [f11] A decaying particle in lab frame decays in (say) 1s, while same particle accelerated relative to lab decays in 10s, both in lab time (this phenomenon is experimentally verified in muon decay). But the accelerated particle, in addition to decaying (internal restructuring) travels for 10s (external restructuring).
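The muon-style numbers above follow from standard time dilation. A minimal sketch (Python; the 1 s rest lifetime is the text's illustrative value, and beta is chosen so gamma = 10):

```python
import math

def lab_lifetime(tau_rest, beta):
    """Lab-frame lifetime of a decaying particle with rest-frame lifetime
    tau_rest, moving at v = beta*c (time dilation)."""
    gamma = 1.0 / math.sqrt(1 - beta**2)
    return gamma * tau_rest

tau = 1.0                     # rest-frame lifetime, s (illustrative)
beta = math.sqrt(1 - 0.01)    # speed chosen so that gamma = 10
print(lab_lifetime(tau, beta))   # ~10 s in the lab, as in the text
```

In the text's terms: the particle's 1 s of internal restructuring is stretched over 10 s of lab time because most of its "motion budget" goes to external travel.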

Consider an oscillator (t = lab time, x = lab distance):

t  ------->               t  ------->               t  ------->
E  1 2 3 2 1 2 3 2 1      E  1 1 1 1 1 1 1 1 1      E  1 1 2 2 3 3 2 2 1
x  0 0 0 0 0 0 0 0 0      x  0 2 4 6 8 . . . .      x  0 1 2 3 4 5 6 7 8
   at rest                   at speed of light         at fast speed < c

So the traveling oscillator slows down its oscillation in E relative to the lab clock. Now compare its clock (number of oscillations of E) with the lab:

- By sending light pulses origin-to-origin
- By having a detector at every x of lab space that records lab time on receipt of nearby pulse
- By sending a "calculation command" to a "traveling computer" which does the calculation and sends back the result. The calculation takes a constant time t in the lab-frame computer

t_lab  0  1  2      t_lab  0  1  2      t_lab  0   1    2
x_lab  0  0  0      x_lab  0  1  2      x_lab  0  0.5   1
t_sw   0  1  2      t_sw   0  0  0      t_sw   0  0.5   1
   at rest              at c                at fast v

Thus the speed of light sets the "arrow of time" and the speed of time - consider evolution of a "big" system (galaxy) where the v=c stopwatch acts as a clock in its own right. Systems can't travel faster than the speed of light or evolve faster than the speed of time (also the speed of light, as applied to space). Fast travel takes kinetic energy and fast evolution takes potential (thermal, internal) energy - the potential aspect is consistent with gravitational time dilation (far from heavy mass = high potential energy). Cold objects, gravitationally trapped objects, and fast objects all cannot evolve. Stationary objects obviously cannot travel. So there is a minimum (0) and maximum (c) for both travel and evolution (and the *combination* of the two is constant [f12]).
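The travel-evolution trade-off has a standard relativistic reading, sketched below (Python; note the stopwatch table above splits the budget linearly at fast v, whereas the relativistic combination that is actually constant is the sum of *squares*):

```python
import math

def evolution_rate(beta):
    """Internal clock rate of a system traveling at v = beta*c, relative to
    the lab: sqrt(1 - beta^2) (one at rest, zero at the speed of light)."""
    return math.sqrt(1 - beta**2)

for beta in (0.0, 0.5, 0.9, 1.0):
    r = evolution_rate(beta)
    print(beta, round(r, 3), round(beta**2 + r**2, 12))  # sum of squares is 1
```

So travel and evolution each run from 0 to c, and maximizing one zeroes the other, as claimed.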

Why does a car need a differential? Because one wheel travels a longer path than the other - sure. In fact the car's motion around a turn can be decomposed into a linear motion and a revolution. Going 360deg around a turn of any radius eventually requires the car to complete 1 revolution. The path length difference comes from the difference in turning radius between the two wheels - delta - and the difference in circumference 2*pi*(R+delta) - 2*pi*R = 2*pi*delta is independent of the actual radius R. I can try to turn a toy car - with a single axle, no differential - in place, and feel the tires dragging on the ground as they are linked. The amount of drag is per revolution - per degree turned - so already we have the nature of the circle, defined as changing by a constant amount in circumference (2*pi) with a change in radius, as explanative of the use of the differential. Are there other topological problems where turning/revolution is of interest? While hiking a loop trail I realized that to return to where I started I had to undertake a (2*pi) revolution (if I never go backward or cross over my path). With any such path, if I approach it from the inside and go left, I will be going counter-clockwise along any outer side, and vice versa if I go right, or left from the outside. Any such path will fully enclose a 2D region. In reality I can go backwards or over my own path, but I believe this will cause a revolution in spacetime of 2*pi similar to the 2D case. The direction of momentum will have to rotate by 2*pi to return to the same point in space; or perhaps alternatively a full 2*pi rotation will return to the same spacetime, which is impossible, so return only to the same space is achieved.
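The radius-independence of the differential's job can be verified directly (Python; the 1.5 m wheel track and the turn radii are illustrative values):

```python
import math

def outer_wheel_extra_path(R, delta):
    """Extra distance the outer wheel travels over one full 360-degree turn,
    for inner turning radius R and wheel track (axle width) delta."""
    return 2 * math.pi * (R + delta) - 2 * math.pi * R

delta = 1.5   # wheel track, m (illustrative)
for R in (3.0, 30.0, 300.0):
    print(R, outer_wheel_extra_path(R, delta))   # always 2*pi*delta, ~9.42 m
```

Whether the turn is tight or sweeping, one full revolution costs the outer wheel the same 2*pi*delta of extra path.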

Consider next the turning of a rocket as it flies near a planet.

[Figure: a rocket flying past a planet along a curved path, shown in two cases - in case 1 its nose follows its direction of travel; in case 2 its nose stays pointing upward throughout.]

In case 1, the rocket revolves by a certain angle such that its nose always points in its direction of travel. In case 2, the rocket always faces upwards despite its direction of travel changing. The rocket is assumed to fire no thrusters and use no reaction wheels; it travels in a straight line when far from the gravitational body, and moves solely in freefall. Is case 1 or case 2 physically correct? General relativity predicts the bending of light, as gravity is an actual warping of space, so there is at least some trend towards case 1. Newtonian gravity might or might not predict such a rotation - any torque must be due to non-equal gravitational attraction of multiple points on the rocket at different levels of gravitational potential. This could be modeled as 2 points rigidly fixed to each other and solved fairly straightforwardly, giving minimal revolution and favoring case 2. Actual measured effects may have interesting implications for space and the validity of Newton's gravitation [case 2 is suggested; see gravity-gradient stabilization].
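The two-point Newtonian model mentioned above can be sketched numerically: two masses on a rigid rod feel slightly different pulls in a 1/r^2 field, and this differential pull is what gravity-gradient stabilization exploits (all values and names below are illustrative, not from the text):

```python
# Two point masses rigidly joined by a massless rod, aligned radially
# above a planet (Newtonian gravity only). The lower mass is pulled
# harder than the upper one; a tilted rod therefore feels a restoring
# torque toward the radial direction (gravity-gradient stabilization).
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
r = 7.0e6            # distance of the rod's center from Earth's center, m
half_len = 5.0       # half-length of the rod, m

g_low = GM / (r - half_len) ** 2    # pull on the lower mass, m/s^2
g_high = GM / (r + half_len) ** 2   # pull on the upper mass, m/s^2

# First-order (tidal) approximation of the same differential,
# from the derivative of GM/r^2 over the separation 2*half_len:
tidal_approx = 4 * GM * half_len / r ** 3

print(g_low - g_high)   # small but nonzero
print(tidal_approx)     # agrees with the exact difference to first order
```

The effect is tiny (~1e-5 m/s^2 here) but nonzero, which is consistent with "minimal revolution" in the Newtonian picture while still providing the stabilizing torque noted in the bracketed remark.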

We sometimes talk about infinite objects - infinite lines, planes, even space. But we have no reason to assume such infinities exist, as all we observe is in some way finite and bounded. When looking at the stars it seems a miraculous observation of an infinite space [f13], yet what we really "see" is the interactions right at the retina (already within the eye) - the world is a lot more claustrophobic than typically experienced (a consequence of the local interaction requirement). In looking out over a large valley or expanse of land, we interpret the scene as seeing distant objects through a transparent, airy void of space. Whereas really we are getting local signals of e-field oscillations in the eyeball's vicinity, which we then interpret as relating to distant objects (and typically even as the very essence of those objects - what I see is what *is*). And what we interpret as empty/void/vacuum space really is a substantial element (making up the vast majority of the galaxy) in which countless fields and particles interact and evolve. Space is the medium in which evolution takes place - in the absence of g-fields and other influences, clocks tick fastest (per general relativity). Evolution takes place sequentially and locally, which ensures that conservation laws apply at all times. This is the issue with "sci-fi" topics like teleportation: getting an object from point A to point B requires that all appropriate quantities be conserved - so how do we deal with (say) the air molecules that are currently in the volume about to be occupied by the object (or other subatomic particles)? How do we deal with fields and potential energy at A vs B? How do we deal with relative motion? And how do we specify just which atom of the object goes to exactly which location at B?
All of this must be answered to move the object and the only way we know how is to let physics do this naturally - the interactions between all atoms of the object constantly localize that object in space, so pulling/pushing the object has the desired consequences and doesn't just tear it apart; energy is conserved in these continuous motions and all outside particles (air molecules) are appropriately handled. This is why the only method we have for getting from point A to B involves continuous motion from A to B along all intermediate points - no teleportation - as such movement ensures conservation of information and all other applicable laws (energy) as required by space evolution [f14].

Time evolution itself is an interesting point - typically physics theories introduce a time axis and go from there, as if time can be treated similarly to spatial axes. I think we have no evidence for such an approach - in fact evidence against it - as there are no signs of time travel being a possibility, and no coherent theory that can act in an entropy-reducing world. Having time as an axis of course allows looking both forward and back in time - enabling use of theory not just for description but for prediction/explanation and ultimately control and application to human needs. But a time axis means the world is a 4D object - like a sculpture - of which we see subsequent 3D slices, like a 2D world "evolving" by tracing planes through a 3D structure. This object is static and fully defined, so there is no need nor grounding point/reference for a concept of "now", or for a notion of time evolution. We have no reason/justification to exist or experience time, because this time-axis world is already complete. Seeing as we are physical beings, our experiences and raw feelings must be a feature of physics as much as gravity and forces - the notions of pain/pleasure/color and other experiences must have a physical basis, as must the notion of "now" and living "in the moment". So I think the world has no time axis but instead evolves on a local basis, so time travel is not possible, and so "even god" doesn't know what the world will eventually look like - the only way to find out is to wait for the evolution to take place. Furthermore, this evolution takes place at a finite rate [f15] - the "speed of time" - for we experience each moment in progression and as taking measurable time, and not all at once/in a flash. As mentioned earlier, the speed of time is in fact what sets the constant speed of light/gravity/information, as it is the fastest allowed evolution for a given spatial (local) interval. 
All these limits can be seen as a limit on the maximum computational power of a computer we can build - the larger it is, the longer light will take to communicate information [f16], and the smaller it is the fewer information-exchange interactions can take place within it in a given time. In a practical sense we are already reaching this limit with modern CPU chips (though the theoretical limit, by applying the most basic field evolution to our goals, is astronomical; the field evolution still takes place whether or not we call it a computer or it is useful to us). Atoms themselves (and their constituents) - as basic physical particles - must operate *at* this limit, the "universal computer", but carrying out computations not useful to us (ie "random"). This sets the speed of evolution of atomic systems, and thus the speed of (atomic) clocks that we use, thus the rate we use to measure time, thus the speed of time.
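The practical chip-scale limit can be made concrete with a back-of-envelope calculation (the 5 GHz clock rate is an assumed round figure for a fast modern CPU):

```python
# How far does light travel in one tick of a modern CPU clock?
c = 299_792_458.0      # speed of light, m/s
clock_hz = 5.0e9       # assumed fast CPU clock rate, 5 GHz
cycle_s = 1.0 / clock_hz
distance_per_cycle = c * cycle_s
print(distance_per_cycle)  # ~0.06 m: only about 6 cm per tick
```

A signal cannot cross a chip wider than a few centimeters within a single cycle, so clock rate and physical size already trade off against each other exactly as described.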

From the view of Einstein's relativity, there is no absolute universal time, or in other words there is no clock I can point to and say "that's the *correct* time", the best I can do in this sense is point to a clock that is in my reference frame (next to me/moving with me - this defines the 'speed of time'). What of the "universal computer"? If I make a computer simulation of a slice of the universe, I impose my own absolute clock: the simulation clock, the rate at which the simulation changes from one state to another. But the structure of the simulation following relativity is such that its behavior (from its own view) is independent of the simulation clock; this is the elegance in the relativistic equations. I could simulate a moving space in which the clocks inside the simulation run slow with respect to the simulation clock, or I could simulate another space in which the clocks inside the simulation run at the same speed as the simulation clock. When I simulate a clock mechanism and track physical events inside the simulation by the time read on this clock also inside the simulation, I get consistent results no matter at what rate I run the external "absolute" simulation clock. From the view of the simulation, there is no "absolute" simulation clock because it makes no difference to any within-simulation measurements. This means that time must relate to space, because if the two were separate then one could be used to track changes in the other and establish the presence of an "absolute" standard. Because we only track time by spatial motion and vice versa, the relation between the two remains consistent regardless of how the "universe computer" might work.
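The independence of within-simulation measurements from the external simulation clock can be illustrated with a toy model; a sketch, with a made-up dilation factor of 0.5 for the "moving" clock:

```python
def simulate(external_dt, total_internal_time=10.0):
    """Evolve two internal 'clocks' - one at full rate, one dilated -
    and return the ratio of their readings, a within-simulation
    measurement. external_dt is the external ("absolute") step size."""
    clock_a = 0.0      # stationary internal clock
    clock_b = 0.0      # 'moving' internal clock, dilated by factor 0.5
    t = 0.0
    while t < total_internal_time:
        clock_a += external_dt
        clock_b += 0.5 * external_dt
        t += external_dt
    return clock_b / clock_a

# The ratio of the two clocks is the same no matter how fast or slow
# the external "absolute" clock runs:
print(simulate(external_dt=0.1))    # 0.5
print(simulate(external_dt=0.001))  # 0.5
```

Only comparisons between in-simulation quantities are observable from inside; the external step size cancels out of every such comparison, which is the point being made.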

Consider a physical action - the spreading of energy - as an optimization problem/computation. If it is possible to send information back in time, then the spreading can find optima in a much more immediate way than if time only goes forward. In other words, the computation has extra information available to it on which choices to make before making those choices. I think we never observe such super-temporal optimization in nature so the arrow of time is real. Not to mention that sending information back in time can lead to infinite computing power as pointed out above. Consider finding a way through a maze:

[Figure: a simple maze with two branching paths from the start; one leads out, the other is a dead end.]

One path leads out, the other doesn't. Not knowing beforehand, I would have to try one, then return back (carrying that information with me to the starting point - the same requirement applies if I use a light pulse/robot to check the path for me) and try the other. But with a chance to send info back in time, I can send a message to not go into the first path as it doesn't lead anywhere. Beyond the paradox that once I don't go into that path I couldn't have sent the message in the first place, this means I can navigate any maze in an apparently optimal way, without ever "trying" any non-optimal routes: whenever I find myself on a non-optimal one, I send back a message to not go there. This seems physically impossible and is not observed in nature.

Yet there is no singular moment of time to call now: the experience of time is dependent on the conscious system experiencing it (ie for us it is in ~Hz, for an atomic clock much faster). Why do I feel time moving forward? All my conscious experiences could be linked to each other by interactions with delayed/memory storage - a temporal vs physical interaction, making my experience able to feel as if I am evolving over time. Logic says I shouldn't be able to tell whether such experiences are "really" all taking place in parallel or even in arbitrary order (since at whatever time I might find myself, I will only have memories of my immediate past as happening just recently), but I have a vivid, strong experience of one moment leading to another, so at least for now it is tempting to think of time as the evolution of the 3D spatial world.

Based on above, I might propose some definitions:

- Information: *spatially* bounded, *temporally* conserved -> requires space and time
- Space: a construct such that a spherical shell around a volume fully constrains the exchange of information between inside+outside
- Time: the state of information always exchanges with other, spatially nearby information; the closer in space, the closer in time
- Computational limit/speed of time: information always changes at a constant rate, the speed of time. This change may be distributed between motion in space and evolution in time, seen at the two extremes as a light-speed-traveling clock whose hands stand still and a stationary clock that ticks at the full speed of time. This can be viewed as a limit on the computing abilities of a system; reaching this limit means one is using the full power of the "universal computer" towards some useful purpose.
- Encapsulation: as far as an outside observer is concerned, its interaction with a system is fully defined by what happens at the encapsulating 2D boundary/sheet/cage; the 3D volume inside is felt only indirectly [f17] - this is explained by the fact that any information which does not exit the encapsulating boundary keeps contributing to the computation/evolution taking place within the boundary

[f1] it will be suggested later that information is accurately represented by mass-energy given our current level of physics knowledge

[f2] coupled actions are also considered in Information Theory under Couplings and Gearings

[f3] this is addressed later with a field theory interpretation of physics

[f4] this does not bode well for the trope of the hero jumping out of the window to save a falling person - at least until air drag is significant the distance between the two will steadily increase

[f5] then again, maybe there is no such thing as a pure 'infinite' space in the way that we imagine, maybe a 'real' infinite space has just the properties that we observe - see discussion in Ch2

[f6] my view has oscillated between discrete/continuous since high school - initially I was confident space must be discrete because it is impossible to continuously move on a real number line (Zeno's paradox), then I thought that since calculus works in real systems space could be continuous if time is also continuous, now I am leaning more towards the discrete side but with the reservations noted above

[f7] Then, an *optimal* computer may be built such that it is fractal-like, with fast computations made by small chips, combined by larger chips which handle more information at once, leading ultimately to outputs of motion or electricity. Such fractal systems can be found all around us, in lifeforms, computers, and turbulence.

[f8] after all, what we see is really the insides of our retinas, we just perceive it to be the outside world because each point in space has faithfully propagated the photon information along into our eyes

[f9] rate is of course a circular reference. I mean the rate relative to some other things, which is how we define time at present

[f10] this is taken as another argument for the field view. There are no collapsing wavefunctions but rather consistent information storage in the field. The creation of static 1/r^2 potentials in the field satisfies the role of the "feeler" waves, and is discussed in future chapters.

[f11] this already suggests a constant computation rate rule - information either contributes to a calculation within the system or is transmitted outside the system but it cannot do both at once

[f12] that is to say, the universal computer speed is a constant. To carry out computations that are useful to us, ourselves being within the universal computer, we must dissipate energy. Without energy dissipation the combination of travel and evolution can be seemingly brought to zero, but this is because the constant time evolution that takes place cannot be used by us without dissipation (it materializes as an object *not* changing).

[f13] see Olbers' paradox: if the universe is infinite why isn't every point of the sky covered in starlight?

[f14] a solid object contains a tremendous amount of information, and moving it means letting the field handle all the necessary information transfers to get it from one point to the next; in this light teleportation just does not seem feasible. If we could achieve it, it would be indistinguishable from actually moving the object in some way; in this sense an elevator is a 'vertical teleportation' device - and it really does work!

[f15] but a rate with regard to what? the rate of time progression is an essential physical constant, like space, which cannot be defined beyond "it just is"

[f16] instant information exchange leads to infinite computational power - since exchange would be instant even inside the transistors/gates, there could be no rate-limiting step. Infinite computing is also possible if a time loop can be made (transfer of data to an earlier time/time travel).

[f17] there is some semblance here to the Holographic Principle
