Deterministic World

Part 1. Physics

4. Fields and Particles

By conservation of information, we reason that when a photon is emitted/received, since the photon is a new particle (ie it is created/destroyed in the process), all information that the photon contains must be equally exchanged with the receiver and the sender. Based on current knowledge we expect that when a photon is emitted, the sender experiences a loss of energy dE and a recoil momentum dP. This information must therefore define a photon (it seems hard to accept this as a full definition though, since nothing in it states that it is indeed a photon). When a photon is received, the receiver observes a gain in energy dE and a recoil dP. When a photon is reflected (solar sail), 2dP is observed but no dE. A rough numerical sketch follows the diagram below.

t=0  O     .  particle at left is energetic and about to emit a photon
t=1 <.     .  momentum not conserved if photon ignored
t=2 <.     O> particle at right gains dP and dE
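
To put rough numbers on this dE/dP bookkeeping, here is a minimal Python sketch using the standard relations E = h*c/lambda and p = h/lambda; the 532 nm wavelength is an arbitrary example choice, not from the text:

    h = 6.626e-34    # Planck constant, J*s
    c = 2.998e8      # speed of light, m/s

    wavelength = 532e-9          # green photon, m
    dE = h * c / wavelength      # energy lost by sender, gained by receiver
    dP = h / wavelength          # recoil momentum at each end

    print(f"dE = {dE:.3e} J")                # ~3.7e-19 J
    print(f"dP = {dP:.3e} kg m/s")           # ~1.2e-27 kg m/s
    print(f"reflection (solar sail): 2*dP = {2*dP:.3e} kg m/s, no dE exchanged")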

In the following argument we make strong use of the time dependence of the emission/absorption and the associated momentum change, namely that both happen simultaneously. One view might be that photons are stationary in time, which has the advantage of conserving momentum and respecting the speed of light limit. However this is not supported by experiment. Consider the following:

t	Sender		Receiver
0	 ~~>					Send photon 1
0.5			~~>
1	 ~~>		  <~~		Send photon 2
1.5		<~~	~~>
2	 ()			  <~~		Receive photon 1 reflection
2.5		<~~
3	 ()						Receive photon 2 reflection

The receiver reflects photons almost immediately (local dt=0) but the sender measures a dt=2 delay. One might argue that the sender's clock is slower but we see that both sender and receiver measure dt=1 between the two photons, which is self-contradictory. Therefore we must conclude that photons indeed *travel through time* (like particles) and can store energy/momentum over time. However this is also problematic; consider the 2-slit experiment:

Emitter	  Slits		Screen
			|		 /|
			=		 \|
>  ~~>		|		--|
			=		 /|
			|		 \|
Tracking the time progression, at t=0 the emitter emits a photon (detectable locally at the emitter by the associated loss of energy).
t=-1	t=0
>		>~~>
Based on the recoil momentum of the emitter, we know which way the photon went. This has to hold to match classical observations. Thus we can tell which slit it is heading towards:
Emitter	  Slits		Screen
			|:		 /|
		~~>	=		 \|
>  ~~>		|	~~>	--|
:			=		 /|
			|		 \|
 [dP measured at the slit]

This raises a few questions. When the photon is emitted (t=0) does it already "know" where it will be absorbed? Is the interference pattern a snapshot - if the slits are removed during 0<t<2, while the photon is in transit, does the pattern still form? Further, suppose the screen is placed at a variable distance D behind the slits:

Emitter	  Slits		Screen
			|	 /|	 /|
			=	 -|	 \|
>  ~~>		|	 :|	--|
			=	 -|	 /|
			|	 \|	 \|
			<- D -> -->

Thus the angle at which the photon may be emitted is also dependent on D. The angle is apparent in the dP measured by the slit. However the dP is measured at t=1 while the photon doesn't "decide" until t=2, and these measurements end up consistent with each other. This forces us to accept that photons not only travel in time but can provide information about the future; that is, at t=1 we already have some data on what D might be, although the photon does not reach the screen until t=2. This is not as uncouth as it first sounds, for it is similarly cavalier to claim that the photon emission can occur at all without a guarantee of its absorption. At this point we need to find a reasonable explanation for how this might work. {jumping ahead, all these unphysical things are easily resolved by a field model. I keep these arguments to show how a contemporary quantum wavefunction and wave-particle duality don't fit in with information and symmetry principles}

Another challenge with the interference pattern is how it is formed. The photon is seen to change momentum between the emitter and the receiver (even by classical means) - this momentum is most reasonably transferred to the slits - and based on the momentum with which it was emitted (known from emitter recoil) we can tell which slit it must have interacted with. Yet the interference forces us to accept there may have been a wave-like pattern. This wave is troubling.

We leave these issues for now, to take another look at photon interactions over time. Based on the above, all that we observe externally are not photons themselves but only dE and dP (ie changes in variables, or an unbalance in an underlying symmetry).
t=0  O     .  particle at left is energetic and about to emit a photon
t=1 <.     .  momentum not conserved if photon ignored
t=2 <.     O> particle at right gains dP and dE

To the left atom, the emission of a photon is equivalent to the absorption of a negative energy photon. To the right atom, the absorption of a [+] photon is equivalent to the emission of a [-] photon. Could it be that [+] photons travel forward in time while [-] photons travel backward in time? In this view, the emission of [+] photons can be viewed as a stimulated emission process (like in a laser) caused by [-] photons, thus ensuring even at t=0 that the photon is emitted in the "right" direction. The photons travel along time-space curves, so changes in experimental setup during photon transfer are accounted for. The nature of the interference pattern is still not explained, though.

This consideration suggests the view that particles may constantly emit [+] and [-] photons and exchange them with others. In this case what we see as light is only an imbalance in the "photon field" leading to energy transfer. This is like air pressure vs wind. We might get an idea of the "photon pressure" by making a photon void (actually seen in the Casimir effect) or by quantifying how bright a blackbody radiator is in empty space. [f1] Suppose [-] photons do arrive from the future. Then self-consistent calculations can be carried out using both [+] and [-] photons - ie *active* emitters and receivers. This is basically what transistor chips do - they use power to emit photons (signals) but also have actively biased detectors to receive photons (emit [-] photons/signals), which allows the carrying out of computations as opposed to remaining in static equilibrium.

It is often tempting, in physics and in the real world, to ask questions of the sort: which slit did the photon go through, which photon reached the detector, which atom emitted the photon; and in the more practical real world, which spoon did I get at the restaurant, which piece of sidewalk did I step on, which droplet fell on me in the rain. The question really leads to the more complicated "why": why did that occurrence happen over others? - since a "which" question cannot be answered without considering also why the other combinations did *not* come to pass. I argue that this question is ultimately asking for very detailed information about the world as a whole. Consider the process leading to the selection of a particular object as a hash function. This hash function is as complex as the real-world description of the process, and has tremendous information content that maps an input (some initial conditions of the objects to be chosen) to an output (choice of an object). For most processes, probing the hash function through its outputs is impossible - there is not enough information output to answer "which one" questions without thoroughly analyzing all real world components. However for experiments that are smaller-scale and use ensembles and symmetry, this hash function may be obtained to some degree (though the use of symmetry obscures underlying features). We note that the information that determines which object is observed is most easily interpreted as existing in the world and not associated with the object. This may provide a solution to the "information extravagance" of an expanding photon or particle wave. This also suggests how a group of objects, like atom(s), can exhibit interference: it is not a property of the atom but of the world. {this is the field view - information is stored in the field}
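
A toy illustration of the hash-function analogy; the encoding and object names below are invented for illustration only. The mapping from detailed initial conditions to one observed outcome is deterministic, but the low-information output cannot be inverted to recover the state of the world:

    import hashlib

    # Deterministic mapping: vast hidden world state -> one low-information outcome.
    def which_object(world_state: str, options: list) -> str:
        digest = hashlib.sha256(world_state.encode()).digest()
        return options[digest[0] % len(options)]   # only a few output bits survive

    spoons = ["spoon A", "spoon B", "spoon C"]
    print(which_object("world state 1 (vast hidden detail)", spoons))
    print(which_object("world state 2 (vast hidden detail)", spoons))  # tiny input change, different choice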

How can particles interact gravitationally? How does one particle know the location of others, which it is to attract? If we adopt the axiom of infinitely precise actions, then any 1 particle must be affected by all other particles in the universe. Experimentally we find that the effect is not very complex - the overall effect of gravity on 1 particle can be represented simply by a vector (as opposed to info about all surrounding particles) - the gravitational effect is additive and, for individual particles, acts like a hash function - blurring detailed information about the surrounding universe and yet dependent on it. To handle this scenario, we find it useful to define a "field" which is defined as such a vector at each point in space. This field is still dependent on all particles in the universe and it is not obvious whether a physical relevance exists (it does not "simplify the processing load of the universal computer" [f2]). Yet we can gain some insights from working with the idea of a gravitational field. A steady state field might look like this \/ (1/r^2) around one particle. Contributions from all particles are summed to get a universe field. What happens if a particle "disappears"? Three possibilities:

* Infinitely fast change \/ -> --
* Finite speed change \/ + /\ -> \-/ then \---/ till all flat (sharp boundary between central flat and gradient)
* Finite "graviton diffusion" like wavefunction \/ + /\ -> \/\/ then \-/--\-/ (smooth boundary between central flat and gradient)

If we are to believe general relativity, and the experimental findings of LIGO, then gravity does not travel infinitely fast, so possibility 1 is ruled out. Possibility 2 seems a bit clumsy, with a sharp discontinuity perpetuated and no consistent mathematical formulation (how does the wavefunction know to return to 0 after the wavefront? Since it is already 0 behind the wavefront there is no info available on how to handle the wavefront, ie how much to subtract). Possibility 3 is one I like because it is similar to ideas of particle wavefunction spreading in QM - in this case the new positive wavefunction exists in all space immediately/infinitely but its spread occurs at finite speed - which causes the observed finite speed of gravity [f3]. Further, it suggests that the loss of a particle/mass will create a momentary "repulsive field" - our experience with explosives and E=mc^2 confirms this. This view suggests that synchrony and localization of explosion will create a better repulsive field, and this is generally seen with nuclear weapons (high precision timed explosives help for radiological reasons as well, but under this theory yield will increase also due to the gravitational wave effect). Moving particles will emit a wave of the form --/|/-- to show their absence on the left and presence on the right (particle moves to the right). By taking the gradient at 0 as a velocity (or force?) we might recover concepts of inertia (and radial acceleration follows) as well as gravity.
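
A minimal numerical sketch of possibility 3, assuming an ordinary 1D diffusion law for the field after its source vanishes; the diffusion constant, grid, and regularized 1/|x| well are arbitrary choices:

    import numpy as np

    # 1D toy model: when the source vanishes, the static field relaxes diffusively -
    # the center flattens first and the boundary between flattened center and
    # surviving tails stays smooth, as in possibility 3.
    x = np.linspace(-10, 10, 401)
    dx = x[1] - x[0]
    phi = -1.0 / np.maximum(np.abs(x), 0.5)    # field of the (now absent) particle

    D = 1.0
    dt = 0.4 * dx**2 / D                       # explicit-scheme stability: D*dt/dx^2 <= 0.5
    for step in range(2000):                   # the particle "disappears" at step 0
        # endpoints are never updated: the far field has not yet "heard" the change
        phi[1:-1] += D * dt / dx**2 * (phi[2:] - 2*phi[1:-1] + phi[:-2])

    print("center:", phi[200], " far tail:", phi[5])   # center flattened, tail ~unchanged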

Some wave-particle duality experiments, such as the 2-slit experiment, seem to require contradictory notions of wave and particle. Yet we might explain this with another view - electrons (say) are discrete particles, while photons are simply field oscillations; electrons, under appropriate circumstances, emit field oscillations, and under other appropriate circumstances absorb field oscillations (in fact these circumstances might be similar as suggested by stimulated emission/laser). We can then reduce the field oscillation amplitude at its source through some mechanism until the appropriate circumstances for absorption are rare enough that we claim to see single photons. Yet the field emissions occur throughout - if they lead to no observed absorption, or even more than one absorption (for a single emission/excitation) we have no way to verify this (energy is conserved overall so any macroscopically averaged measurements will not reveal the effect). This is elegant because it carries over into the radio/low frequency domain easily, and can be understood with Maxwell's laws, and does not require "collapse". Yet it is problematic to apply this view to entangled-photon experiments, where it is found that two independent photons ("field oscillations") have related properties *in terms of* absorption - since we claimed absorption would take place only under appropriate circumstances, we have to now claim that such circumstances also become entangled in the case of entangled photons, which seems unrealistic. However we have ignored an effect - if we adopt this view, the original photon that creates the entangled pair never goes away - its field keeps on expanding and can serve as a reference to "synchronize" absorption of the entangled pair so as to cause the observed relationships.

There are some results that may seem surprising on an intuitive level, which are well explained by the field model. The case of diffraction through a slit was already explained above.

A similar case is with a diffraction grating:
d | *  Detector and light source
   
______ Mirror
Above, detector does not see light from the light source. But now remove sections of the mirror to make a diffraction grating, and the detector sees light:
d | *  Detector and light source
   
-_-_-_ Mirror as diffraction grating
By *removing* metal on the mirror, we have *increased* the light on the detector - more with less. This is explained as follows: the removed metal causes loss of wave directionality information in the field, so the wave is free to keep spreading backwards toward the detector, whereas with a full mirror part of the reflected wave would have caused destructive interference in that direction.
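
A far-field Huygens sum makes this quantitative: compare the light sent toward an off-specular detector by a full mirror and by a grating whose period satisfies the grating equation for that direction. Angles, wavelength, and strip spacing below are arbitrary assumptions for illustration:

    import numpy as np

    # Plane wave in at angle th_i; detector at an off-specular angle th_o.
    # Each surviving strip contributes a phase set by its path difference.
    lam = 1.0
    k = 2 * np.pi / lam
    th_i, th_o = np.radians(30), np.radians(-10)   # specular would be th_o = -30 deg
    xs = np.arange(0, 200, lam / 10)               # strip positions along the surface
    phase = k * (np.sin(th_i) + np.sin(th_o)) * xs # zero only for the specular direction

    full = np.abs(np.sum(np.exp(1j * phase)))**2   # full mirror: near-total cancellation

    d = lam / abs(np.sin(th_i) + np.sin(th_o))     # period satisfying the grating equation
    keep = np.mod(xs, d) < d / 2                   # *remove* half of each period
    grating = np.abs(np.sum(keep * np.exp(1j * phase)))**2

    print(f"full mirror: {full:.3e}   grating: {grating:.3e}")   # less metal, more light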

My view on quantum mechanics is that the perceived particle-like nature is not intrinsic to photons but to the electrons of the measuring apparatus, while fields describe the photon's passage. Then a "random" process in the detector makes a number of electrons respond to an applied field. But this seems to be disproved by the "delayed-choice quantum eraser" experiment. However if this view is correct, we might see that existing electrons that have a high propensity for absorbing/emitting photons (by this "random" process) will be likely to do so even in otherwise weak fields, while overall energy is conserved because the excess arrives from the detector itself.

High field/many photons
/\/\/\>> ___---- E increase after field applied takes minimal "boost" from random process
Low field/few photons
~~>      ___---- E increase by electron can only take place with a large "boost" from random process, so is a rare occurrence (note E of electron is quantized)

We go further then and claim that electrons/particles can be represented by higher-dimensional fields which allow "warped" or self-sustaining, localized waves as stable entities; when such waves are de-stabilized (as in electron emission) they become large entities, made smaller by adding energy, and otherwise causing "electrons" to appear at a detector where an appropriate "random" process can take place. This suggests that with the right detector, non-conservation of particles can be demonstrated (this would require an energy input equivalent to the particle though). [f4]

Here I further explore what a world defined by fields might look like, and how this can be reconciled with existing notions of particles and quantization. The most consistent evidence we have shows that light is a continuous field (e+m field) described by Maxwell's equations. I will then claim that light really is a field, and try to explain why we can see "single photons". The key is in the interaction of the light field with matter - and this is the only way we know of to tell whether light is "really there" - whether a fancy PMT, a camera, or our eyes, light has to interact with matter before we record an experimental result. For me the most telling sign of the field nature of light is that antennas - large physical structures - can radiate as well as (or better than) "individual electrons" of atoms, so there is not something "mysterious" about light that requires a quantized transmitter. One oddity though is the emission of light by atoms themselves - atoms are measured as ~10^-10 m in size, while typical visible wavelengths are ~0.5x10^-6 m, a few thousand times bigger. How could this possibly be explained with photons? Like a grain of sand creating a 'balloon' out of nothing, it is just odd. In a field view, though, the radiation is very inefficient but can be simulated and realistically carried out, the same way that a tiny speaker can produce sound waves of wavelength a few orders of magnitude larger than its size. In the speaker, the efficiency of radiation decreases at longer wavelength, meaning if the speaker oscillates at low frequency it will draw very little energy and very little sound will be heard, despite a large speaker amplitude. At high frequencies, though, sound radiation is effective, so keeping the same amplitude will now take more energy, and a louder sound will be heard. So, if the speaker is given random impetus by some driving mechanism (thermal), it will tend to oscillate silently at low frequencies while quickly emitting the high frequencies. And while the 'sound' of the speaker is high-frequency, its actual vibration motion will be low-frequency dominated.
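
The speaker analogy can be made quantitative with the textbook small-source (monopole) scaling, where radiated power at fixed displacement amplitude grows steeply with frequency; the radius and frequencies below are arbitrary example numbers:

    import numpy as np

    # For a source much smaller than the wavelength, radiated power at fixed
    # displacement ~ (k*a)^2 * velocity^2: low-f motion is nearly silent while
    # high-f motion radiates away quickly.
    c, a, amp = 343.0, 0.01, 1e-6          # sound speed m/s, radius 1 cm, displacement m
    for f in [50.0, 500.0, 5000.0]:        # Hz
        k = 2 * np.pi * f / c
        velocity = 2 * np.pi * f * amp     # fixed displacement -> velocity grows with f
        rel_power = (k * a)**2 * velocity**2   # valid for k*a << 1
        print(f"{f:6.0f} Hz: k*a = {k*a:.4f}, relative power = {rel_power:.2e}")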

That is to say, the higher-energy terms have a shorter half-life as speaker oscillations as they quickly turn into air oscillations. This exactly parallels the ever-present observation that more energetic things happen more quickly. In fact I propose that "energy" just like "numbers" is not an elementary universe concept, but an emergent quantity of various field interactions. The universe doesn't care about energy, only we do. The universe conserves something more fundamental - the field values. This resolves one conceptual paradox in relativity: if I move towards some object, I can view myself as having some kinetic energy, or I can view the rest of the world as having some kinetic energy - the latter is certainly untenable because the energy would be immense but relativity cannot tell any difference between the two (and it doesn't have enough of a memory to remember that I accelerated). The field model states there is no "energy", only an energy difference - and this is exactly the same whether I am stationary in a moving world or vice versa. There is no "0 at infinity" boundary, there is just no energy - it so happens that the things we call "high energy" are such that they are smaller and move faster because of how field equations work, and what we call "low energy" is the opposite way. Nature does not "seek low energy states", but rather the "high energy" states also happen to be ones that rapidly radiate and will not be typically observed as stable. A classical display of prowess in energy balance is finding the blackbody radiation spectrum - usually inside a large metal container so that vacuum e+m modes can be evaluated and then quantized. I see two conceptual trapdoors here:

  1. It isn't the vacuum that radiates (vacuum transmits waves but doesn't create them out of nothing), it is the atoms of the blackbody. This is seen when a solid spherical blackbody (sun?) is used instead of the hollow box, and seems to most easily reconcile with all our e+m field experience and explanations. The atoms individually should emit the blackbody spectrum, and QM makes no comment on this mechanism, leading instead to very abstract 'vacuum fluctuations' and 'zero point energy' which really isn't even an energy. In any case I say that atoms are the sources of the e+m radiation of the blackbody in my theory.
  2. The use of temperature, a statistical parameter that only applies to huge collections of particles in thermodynamic equilibrium. This will almost never be the case in situations we wish to describe - radiated light from a blackbody (into a colder space - by definition), two interacting particles, or motion of an individual particle. For this we might get away with a full distribution, but real conclusions can only be drawn from all individual velocities and properties - an immense data set which cannot be compacted to 1 parameter if we wish to examine the heart of these difficult mechanisms.

For example, microwaves can be used to warm food even though a blackbody emitting mainly microwaves would be very cold (the cosmic microwave background is at ~3K). This is because what the microwave oven emits is not blackbody radiation - it is a specific distribution that is much higher than what the food emits at that wavelength, so the blackbody temperature of the food is pushed towards higher frequencies by low frequency radiation (microwaves can be used to heat an object to glowing hot - a shift of a few orders of magnitude between input and output radiation frequency). So, the blackbody spectrum must originate from the electron dynamics of a body of atoms in thermal equilibrium (not driven by a microwave inverter, not an ionized gas atom, not an atom in a fusion reactor). I've already shown that the Maxwell-Boltzmann distribution is established when particles are allowed to elastically collide for long enough in 3D; an entropy view will say that the distribution has the highest probability of applying, while an information theory view will say it arises from incomplete information exchange among dimensions (it won't work in 1D - there the exchange is complete). It should be possible to similarly show that electrons, interacting by their elastic-collision rules, will also reach some most-probable distribution - the Maxwell-Planck distribution. Or, at least, that is what will be radiated - the actual vibrations may well be higher at low frequencies as mentioned earlier.
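
A minimal sketch of the elastic-collision claim: equal-mass pairs collide with a random scattering direction, conserving momentum and kinetic energy exactly in every collision, and the velocity components relax toward Gaussians (ie Maxwell-Boltzmann). The all-equal initial speeds are an arbitrary far-from-equilibrium choice:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 20000
    v = np.zeros((N, 3))
    v[:, 0] = 1.0                                # every particle at speed 1 along x

    for _ in range(10 * N):                      # many random pair collisions
        i, j = rng.integers(N, size=2)
        n = rng.normal(size=3)
        n /= np.linalg.norm(n)                   # random scattering direction
        vcm = 0.5 * (v[i] + v[j])
        vrel = 0.5 * np.linalg.norm(v[i] - v[j]) * n
        v[i], v[j] = vcm + vrel, vcm - vrel      # elastic collision, equal masses

    # isotropic Gaussian components: mean ~0, variance ~1/3 per axis
    print("mean vx:", v[:, 0].mean(), " variance per axis:", v.var(axis=0))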

However in both derivations we have unwittingly introduced quantization - by speaking of individual particles or electrons. Will this still work without the notion of particle? I'm not sure yet {particles are required for a consistent model - this is covered later}, but I will explore more fully what it might mean if electrons are taken to be a field. Two convincing demonstrations of the "particle nature of light" are individual pulses on a PMT as it is moved far from a light source, and individual activated specks on photographic paper exposed to low light. In both cases photon-electron interactions occurred, and (temporally and spatially) quantized outcomes resulted. I have already hypothesized that photons are not quantized (field) and now am trying to claim the same of electrons - but how can the experiments be explained then? In the PMT, the quantization does not happen in the output measurement circuitry (it can measure DC current, not only pulses), nor does it happen between stages (the stages would somehow have to convert a continuous impinging current to output pulses), which leaves only the electron-photon interaction as the quantizing source. If electrons are a field, this field will have local high and low points (like ocean waves) due to thermal interactions, and it may be set up such that the external impetus from the photon field is enough to excite a short current burst locally - both shortness and localization coming from random thermal inputs (and the existence of atomic sites - these are confirmed by x-ray and electron diffraction and need to be explained later). This leaves the possibility that the amount of charge transferred may be continuous - but a PMT does not measure this. The Millikan experiment has been subject to some criticism, so I suppose the surest way to claim that electrons are 'discrete' is by looking at the well-ordered entries in the periodic table, each one incrementing its # of electrons by an integer amount, not continuously. And I think there is enough evidence to support the idea that electrons are independent particles even though we only ever observe them when they are sourced from or received by atom-containing matter.

Perhaps this is the origin of quantization - when coupled to the central potential of a nucleus, it is not simply "low-energy" oscillations but resonant (standing-wave) oscillations that will be stable against radiative decay, leading to "electron energy levels" which find success in spectral lines and in solid state physics bandgap plots [f5]. I think it is no coincidence that the observed electron energies can be expressed as a circular wave around a nucleus that has some finite number of crests/troughs. The field laws are then set up in such a way that the standing wave will not radiate but any "transients" that aren't the right (resonant) frequency will radiate, so we find the atom at discrete "energy states" or "excited states", each one more short-lived than the last. This gives a different view of light - it does not transmit energy (as we should have guessed from the no-energy argument) but an energy difference, meaning when it interacts with an atom-electron system it not only "creates" a higher-energy electron, it also "destroys" the lower-energy oscillation so "overall energy" is conserved. Yet light is so simply represented as sin and cos that this requirement makes me question whether there is really any transition of the electron with incident light, or is it simply an add-on field oscillation that goes in concert with the pre-existing ones, being radiated away if it turns out not to form a standing wave and being "absorbed" if it does turn out to be a standing wave. Of course changing the atom potential will change what is a standing wave in that system, so light should be radiated or absorbed - this ought to happen in chemical reactions that produce light from energetic products (such as chemical lasers and glowsticks). The lowest energy states (electron oscillations) of the atom are stable, and this should be reflected in the model disallowing radiation for those modes (intrinsically making the efficiency of radiation go to 0).

There isn't much trouble in having electron quantization arise from being atom-bonded (other than why atoms are quantized in the first place), since even by classical laws we can set up systems which support a preferred frequency or frequencies of oscillation. Putting "a seashell up to your ear" shifts external white noise into a different distribution with some preferred frequency based on the opening between shell and ear. A stringed musical instrument will audibly oscillate at its harmonics, despite being impulse-excited [f6]. In all cases an oscillating system with a given boundary condition is found to lead to quantized [f7] stored and radiated energy at the preferred 'resonant' frequencies. Could electrons be similar in this regard, setting up standing wave oscillations in the vicinity of an atom? Thus getting quantization out of "broad spectrum" excitation, like an organ pipe playing a note as a result of the white noise of incident wind. In reality quantized energies are not infinitely sharp either - emission lines are seen to be broad. If they were infinitely sharp, the slightest change in parameters (red- or blue-shift) would make it impossible for anyone to absorb the emitted radiation. Not all systems that oscillate lead to resonance or quantization - a pendulum, or orbits in gravity, can exist at different frequencies and amplitudes, and none is preferred or 'easier' than another. The response of systems like this to continuous/noise excitation should not lead to quantization, but the real answer will require a mathematical analysis. Of course fields themselves are assumed to "self-excite" in a continuous manner to allow propagation of any wave (if field laws are viewed as applying to tiny point oscillators throughout space).
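
A sketch of this "quantization from broad-spectrum excitation": a damped string with fixed ends, driven everywhere by white noise, accumulates energy at its harmonics f_n = n*v/(2L) even though the drive has no preferred frequency. All parameters below are arbitrary:

    import numpy as np

    # Finite-difference wave equation with damping and white-noise forcing.
    L, Nx, vel = 1.0, 100, 1.0
    dx = L / Nx
    dt = 0.5 * dx / vel                    # CFL-stable time step
    damping = 0.5
    rng = np.random.default_rng(1)

    u = np.zeros(Nx + 1)
    u_prev = u.copy()
    record = []
    for step in range(100000):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
        drive = rng.normal(scale=1e-3, size=u.shape)        # white-noise excitation
        u_next = (2*u - u_prev + dt**2 * (vel**2 * lap + drive)
                  - damping * dt * (u - u_prev))
        u_next[0] = u_next[-1] = 0.0                        # fixed ends
        u_prev, u = u, u_next
        record.append(u[Nx // 3])                           # sample an off-center point

    spectrum = np.abs(np.fft.rfft(record))
    freqs = np.fft.rfftfreq(len(record), dt)
    print("strongest response at", freqs[1:][np.argmax(spectrum[1:])],
          "Hz; peaks sit near multiples of v/(2L) = 0.5 Hz")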

I move on to the case of quantization of light with photographic paper 'specks' representing "single photons". I already assumed that some mechanism makes atoms discrete, which naturally leads to discrete bound-electron energy levels, so it won't be too surprising to find distinct 'activated' and 'unactivated' chemical states (which are really electron states). The spatial localization of specks is taken to be due to thermal random influences on atom states affecting their ability to absorb or react to an external light field (film has to be kept cool and away from x-rays - no coincidence).

Previously I considered the idea of "local fields" and the relation to Huygens' principle. Here I would like to explore some real-world implications for the interaction of light with matter. First one might ask just how "local" a local field has to be. As it would be inelegant to pick an arbitrary constraint, I must concede that there really isn't a "local field" - the field, if it is real, must be continuous and extend over all space while behaving according to finite-propagation-time laws for a yet unknown reason. The local aspect relates to Huygens' principle - a wavefront can be represented at any instant by uniformly located point sources - which in turn is a way of saying that the field behavior is source-independent and therefore history-independent. By this long chain of thought we find that the field at any moment is only a function of its own configuration at an earlier moment (along with modifications due to interacting matter {which is also a field-made entity}) and nothing else. Once a source has emitted a wave, its disappearance will not impact the field. From this we see that the field is single-valued: despite contributions from many sources the "physical reality" is the sum of sources, while the individual sources are a "human construct" from our macroscopic view and interpretation. The laws are linear so we can apply them to individual sources for easier understanding, but "nature cares" only about the single summed field.

Now let's consider an application to metal and glass substrates that interact with light. If we ignore the single-valued field, we might interpret the metal as containing sources that always emit in anti-phase with the incident field: this results in perfect cancellation in the forward propagation direction and effective reflection of the wave - although really the original wave keeps traveling through in this model. Reflection and diffraction are emergent phenomena, all waves propagate forever. In the glass object, a volumetric set of point sources radiates slightly out of phase with the applied field, causing a wavelength-shortening effect that leads to the observation of a lower c and higher n in glass than in vacuum. We might even divide materials into di-electric and para-electric along similar lines to magnetic response of B to H. But I just argued that a single-valued (source-independent) field is a more elegant and coherent approach. In this view, the metal surface is one that maintains the e-field at "zero" [f8], so that incoming waves really are reflected and the field beyond the metal is not disturbed at all. The glass object is not that different in this view, but still the wavelength shortening is immediate and not a summation effect. One concern here is, with the field being immediately modified, how can the matter interact with an external source? For instance in the mirror case, with a single-valued field the field is always zero at the surface, so the metal does not "see" the field variation - how can this be? Likely there is some slight oscillation away from zero that ensures the metal reacts to the external field: real mirrors do heat up and lose light in the reflection, and also have measurable surface (n,k) values in support of this view.
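
The anti-phase-source picture can be checked in a few lines of 1D arithmetic: an incident wave plus a source sheet at x=0 radiating in perfect anti-phase gives exact cancellation ahead and a reflected wave behind, and the summed (single-valued) field carries no trace of the individual contributions:

    import numpy as np

    k = 2 * np.pi
    x = np.linspace(-3, 3, 1201)

    incident = np.exp(1j * k * x)           # traveling toward +x
    sheet = -np.exp(1j * k * np.abs(x))     # re-radiation in perfect anti-phase
    total = incident + sheet                # the single-valued summed field

    print("max |field| for x>0:", np.abs(total[x > 0]).max())   # ~0: forward cancellation
    print("max |field| for x<0:", np.abs(total[x < 0]).max())   # ~2: standing reflected wave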

The field model implies that emission of light/radiation is not a random process but caused by external influence that is the inverse of the dissipative reaction that takes away energy in the form of heat - in absorption we have photon->multi-body vibration (phonon), in emission we have multi-body vibration (phonon)->photon. This might be used to predict the form of blackbody radiation - the atoms must have enough individual energy (high-freq limit) and must be properly influenced by neighboring atoms to emit the energy as a photon (low-freq limit). For the case of radioactive emission, consider a Bose-Einstein condensate of radioactive nuclei - will they all emit at once? Stimulated emission - lasers - shows that light emission is not wholly random; this is just taking the inference to its logical conclusion. Though here it is important to distinguish between a 'true' photon source, which is an atomic interaction like a downconversion that emits a quantized oscillation, and a 'broad' photon source like an antenna or blackbody or bremsstrahlung which emits non-quantized oscillation: the former will register as single photons in a coincidence experiment, while the latter will not (even at low intensity). Perhaps then radioactive decay is caused by external factors, some high frequency low amplitude oscillations (how else could such oscillations be detected? just as we can't detect photons without appropriately sensitive electron orbitals); then radioactive atoms are highly sensitive detectors of external fields and represent something about outside space, just like the background microwave radiation. Maybe we can get some experimental evidence by examining the power output of an RTG on one of the space probes like Voyager, and comparing variations from the expected power output on earth.

By similar logic I would argue Bell's inequality can be explained by internal variables plus a local interaction between the measured property and the experiment, where the experiment angle affects the local particle response. Consider the similarity to the 'crappy voltmeter/ammeter' idea. If we claim there is no "preferred" length scale, the nature of measurements should be the same throughout. Then measuring macroscopic phenomena with a detector like one in "quantum systems" (one that changes the system state due to its presence/operation), ie a "poor" detector, should also yield "quantum" effects in macro experiments. Otherwise a preferred (absolute) scale can be defined beyond which quantum phenomena do not occur. [f9]

The wave model proposed above is roughly as follows: all "particles" are in reality complex multi-dimensional waves that are self-resonant and self-sustaining in certain discrete amounts and at discrete spatial scales. The waves warp space around themselves such as to wrap around on themselves and appear stationary and particle-like. Where QM suggests faster-than-light concurrent events, we may reason that more elementary "strings" on which "particle waves" exist can move infinitely fast - then we conclude that particles, and all that we see, are "beats" and "group velocity" packets on the underlying "strings". Since group velocity can be anything, we have the ability to control particles' velocity. But ignoring that point for now and assuming that particles are actual "wave-packets" on a string, we have an easy explanation for some weird relativity effects. Consider a string: A-----B and a particle on the string A--~--B. What's the fastest way to get information from A to B? If we have a pulse A/\---B it will travel all the way along the string, taking finite time. But a very smooth pulse will have a very fast influence: A///\\\B immediately becoming infinitely fast in the limit that both A and B move at the same time. {Another alternative is "spontaneous excitation" but that would require exciting a mode all along from A to B: A~~~~~B.} This compares favorably to QM uncertainty: faster particles have a wider 'width' or uncertainty region. This also works with relativity: to faster particles the rest of the world appears compressed (relative to their line of travel) and to the rest of the world the clock of the fast particle seems frozen (since simultaneous motion of A and B with little "pulsing" of the string - just moving it uniformly up and down - does not provide for observable local evolution).
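
A small sketch of the beat/group-velocity picture: two waves with nearby (k, w) superpose into a carrier moving at the phase velocity w/k and an envelope (the "particle") moving at dw/dk. The square-root dispersion relation below is an arbitrary example; any dispersive w(k) shows the same separation of speeds:

    import numpy as np

    def w(k):                      # example dispersion relation
        return np.sqrt(k)

    k1, k2 = 10.0, 10.5
    x = np.linspace(0, 12, 6000)
    for t in [0.0, 10.0]:
        # envelope of cos(k1*x - w1*t) + cos(k2*x - w2*t), via the sum-to-product identity
        env = 2 * np.abs(np.cos(0.5 * ((k2 - k1) * x - (w(k2) - w(k1)) * t)))
        print(f"t = {t:4.1f}: envelope peak at x = {x[np.argmax(env)]:.2f}")

    print("group velocity dw/dk =", (w(k2) - w(k1)) / (k2 - k1))   # ~0.156
    print("phase velocity w/k   =", w(k1) / k1)                    # ~0.316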

A field will tend to become equal across validly connected spatial structures - field gradients are self-limiting or stable (not necessarily true? black hole formation). Actions are driven by field gradients (electric, gravitational). The action of a field on an object cannot be "felt" as a force unless allowed - objects in freefall seem to be *weightless* from inside, not accelerated downwards. Yet these objects are spatially accelerating (as could be "felt" by measuring distance to other objects) and moving down a gravitational gradient - then what exactly is the notion of "force"? Mechanical force is a coupling of the inertial field to the electric field between atoms. All atoms have this intrinsic coupling - an inertial load (increase in potential) causes an increased electric potential (inter-atom compression or separation in a solid) which is "felt" as acceleration by humans. A force is only felt when there are unequal potentials - for instance in a braking car, the brake pad+rim are at unequal inertial potentials, which are caused to become equal by contact (and the information exchange mechanism that contact in turn allows). The resulting change in inertial potential, transferred to inter-atomic compression, is transferred as a "force" to the rest of the vehicle, since potentials will tend to equilibrate in conductors. Thus we see that atoms (solid matter) are conductors of force and of inertial potential, just as wires are conductors of electric voltage/potential. The "force loop" in the car slowing down is completed by the inertial field.

y-axis=Inertial potential; x-axis=location in car: 1=passenger, 2=brake/pad, 3=road
P ----____  while moving, the passenger and car are decoupled from the road
   1  2  3
P ---\\___  brakes are applied, and the two potentials are now coupled; they begin to equalize by conducting through the body of the car
   1  2  3
P -\\~~\\_  as the potential gradient spreads to cover the entirety of the conductor (car body) eventually the gradient reaches the passenger and road, both feel an acceleration now
   1  2  3
y-axis=gravitational potential. Near a large gravitational body the potential is nearly flat on the scale of the spacecraft between 1 and 2, so it feels like freefall - minimal conduction of potential through the spacecraft (note we can experience the same in buoyant conditions like diving! because inertial potential is spatially uniform there is no 'path' for inertial flow, so a diver can 'float' and be effectively weightless)
P --------
   1  2
near a black hole, the gravitational potential has a big gradient, on the scale where different parts of the spaceship see a different potential so it is conducted, resulting in "spaghetti" effect
P ||\\----
   1  2

In other words, gravitational potential acts on all particles in a body in freefall and cannot be "felt" (because there is no gradient) while a body on the ground has a non-uniform effect from the grounded surface countering all forces (this propagates from the surface to the whole body to balance interatomic vs gravitational potential, seen as compression of the body downwards) which is how acceleration is felt. Any uniform potential (spatially or temporally) cannot be detected. A potential's "absolute" value cannot be detected (if it can even be defined) {see Aharonov–Bohm effect}. This generally applies to measurements: measurements must include 2 quantities which are XORed: the invariant (not affected by measurement) and the variant (affected by measurement). Because of the XOR, the "rest of the universe" can be effectively excluded, enabling real local measurement.

Measuring a length with a ruler: in this measurement there is an implicit assumption that the ruler's length does not change (so that the result can be compared to other measurements).

Matter in different configurations can serve as conductors or insulators of potential. For electricity, metals (semi-crystalline solids with metallic bonding) serve as conductors while non-metallic bonds serve as insulators. Conducting properties are frequency-dependent! Metals may not conduct RF (and they reflect light), for instance, while vacuum will not conduct DC. Compare:

			Static	Dynamic
Electricity	1		2
Gravity		3		4
Inertial	5		6
1. Electric field (DC). Metals conduct, vacuum insulates (1/r^2)
2. Light/RF (AC). Metals insulate, vacuum conducts (far-field)
3. Gravitational field. Matter conducts (and contributes/increases), vacuum insulates (1/r^2)
4. Gravitational wave. Matter insulates (?) - vibrating motion damping and high energy required, vacuum conducts (far-field)
5. Mechanical forces. Matter conducts, vacuum insulates
6. Momentum/impact/free-float. Matter insulates (collision), vacuum conducts
Matter at constant velocity keeps moving in same direction. It is moving along its established "inertial field", the creation of which has required the input of the accelerating energy. This is similar to the function of the magnetic field in an inductor. Consider which objects seem 'attracted' and which 'repelled' by the inertial field - this shows the field as made of two spheres at front and rear of object.

The nature of space is that far away things are difficult for us to affect - the influence of fields dies out with spatial distance {in fact this is how we define distance - a smaller action on a larger number of objects}. We can have action at a distance (such as communication/internet lines) by "tunneling/channeling" the information in space along a given path (wire or directed light). Free-falling objects do not feel acceleration, but objects on the ground do (from gravity). Objects maintaining their momentum also do not feel acceleration, but those changing their momentum do. This last point conflicts with the gravitational freefall observation, as we normally claim that momentum is increasing during freefall - perhaps momentum should be defined in another way. Traveling on a train, I see objects in front of me getting closer and those behind me getting farther away. Perhaps momentum, like inductance, is a type of field that "builds up" around a particle when it travels at speed. This field then causes other objects to either approach or recede from the observer/particle depending on the "direction of travel" or field orientation [f10]. The fields can be visualized in 3D as a sphere in front and a sphere behind the observer, being the attraction and repulsion spheres. Attraction and repulsion towards the observer along a given direction is proportional to cos(theta) where theta is the angle away from the direction of travel. There is also an associated curl field, which moves objects around the observer without moving them closer/farther towards the observer; this is of the form of a donut around the observer, proportional to sin(theta). The sum of the attractive/repulsive field and curl field for a particle results in the normal Euclidean direction of object motion.
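
The closing claim is just the spherical-components identity rhat*cos(theta) - thetahat*sin(theta) = zhat, which a few lines verify numerically (test angles are arbitrary):

    import numpy as np

    theta, phi = 1.1, 2.3
    rhat = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
    thetahat = np.array([np.cos(theta)*np.cos(phi), np.cos(theta)*np.sin(phi), -np.sin(theta)])

    # radial cos(theta) part minus sin(theta) swirl part = uniform field along z
    total = np.cos(theta) * rhat - np.sin(theta) * thetahat
    print(total)   # [0, 0, 1] regardless of theta, phi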

What is the point of such a field approach? In the search for a unified theory we have a lot of faith in the concepts of momentum and particles. Is there a more fundamental way to describe momentum, or even space itself? Like Maxwell's equations which can generate photons, might there be some "matter fields" which have equations that can generate particles? Can we represent momentum as a field, such that gravity and force all arise from the field arrangement? Given particles described as interacting "matter fields", can we propose equations for the interaction between the electromagnetic and matter fields, and thus describe most phenomena on a unified theoretical basis - with only fields and equations, with no distinction of particles or photons? The possibility and power of such a model should make such representations worth pursuing. Plenty of straightforward evidence (gravity field strength law, the Lorentz force F=q(E+vxB), momentum/acceleration F=ma [f11]) can be used to narrow down requirements for the "matter fields" and to verify them against existing "certain" knowledge. Especially valuable will be the study of electrons, which are assumed to be coupled oscillators of the electromagnetic-matter fields in some way.

It was seen above that the 'matter field' defined in terms of sin and cos, while instructive, still summed to the Euclidean field, meaning the notions of gravity and inertia and rotational acceleration were not explained or defined by the model. Here I propose that all 3 notions can be explained by matter's self-field interaction, implying that the field itself (gravitational field or matter field) exists independent of the matter. Whereas commonly the field is taken as an instantaneous function of particle location and mass (and so on), I claim that the field is constantly updated by the particle [f12], at a finite rate, and can exist in states independent of the momentary particle properties. The notion of mass comes from having to change the particle's gravitational field so that it follows the desired trajectory (a form of "stored radiation" as opposed to far-field propagating radiation). This is not to say that the field is "difficult" to change, as that would just re-incarnate mass as a field property, but rather that changing the field takes finite time and is done by the particles themselves acting on their pre-existing self-field. A higher force displaces the particles in their self-field more, leading to a faster rate of change (acceleration) of the field.

\_._/ Particle in self-field at rest
\-._/ An external gravitational field tilts the bottom of the particle's self-field [f13]
\_\./_/ Right, an electric field (push force) moves the particle up in its (left) gravitational self-field, thus accelerating it and leading to observed inertia
\_.-._/ Gravitational attraction of two objects by means of mutually interacting self-fields

This matter field also mediates gravity, explaining why free-falling objects feel no "acceleration" even though they are clearly accelerating towards the ground. Centrifugal acceleration can be similarly explained - the fields can only propagate linearly, so forcing the particle into a circular path means exerting a force against its self-field which "expects" a linear path. Note that in the gravitational alternative of orbits, there is no centrifugal force acceleration, because the fields interact directly with each other. The field also needs to propagate gravitational waves, and can help explain why matter cannot reach the speed of light (since that *is* the speed of the field change).

O->O  linear path of field
.  
   .  circular path of particle leads to the self-field 'pulling' the particle back towards the linear path

Our original goal was to have a model of only fields though. Here with self-fields we are on the right track, but what does an accelerating object look like? In it, the electric fields of atoms are "compressed" and "push" the atoms against their self-fields (force from e-field = -force from self-field even during acceleration, explaining the origin of inertia). We still have atoms as non-field agents. Perhaps, instead, atoms (electrons...) can be seen as high-field-curvature phenomena that constitute a high-order contribution to the basic field behavior [f14]. For instance we have Maxwell's laws relating E and B, and perhaps we can write a similar law for S, the self-field: S=grad(Q) or something. The particles (as experimentally evidenced - Heisenberg relation, interference+wavelength calcs, decay of particles such as gamma->e+ + e-, p->n + e+ + v, ...) are a high-curvature and highly localized variation of the field, for instance grad^2(S). Then we might alter the "first-order" law into something like S=grad(Q)+grad^2(S), allowing high-curvature regions to self-perpetuate. But to address the observed field interactions enabled by electrons and protons (for example) - between the e+m and self fields - we need another interaction term such as S=grad(Q)+grad^2(S)*grad(E)*B or something of the sort, which ought to lead to the self-field-based F=q(E+vxB). This will be the path to a field-only theory of matter interaction. This ought to resolve the wave-particle duality, especially considering that equipment which is implicitly "quantized" can observe even a field as pulses or particles/quanta.

\_._/ First-order, particle surrounded by a field potential
\_____~~~~_____/ Zooming in very close on the particle, we see a high curvature region that is wave-like but spatially stable

Based on the self-field idea I propose the following test of the field concept, using an "electron accelerometer" to measure gravitational acceleration:

Consider a water container that is accelerated by pushing (forward). Pushing is an em-field effect, acting on the water inertia which is a g-field effect. Inside the container, the water will want to flow towards the rear, due to inertial resistance. Water pressure at the rear of the container will rise, and at the front will fall, because of this. However a pressure sensor mounted on the container will not be able to measure this pressure change because any water pipes leading to the sensor will experience the same variations in pressure but in the opposite direction and overall no effect will be measurable. Yet by using a stationary pressure sensor or comparing to atmospheric pressure, this pressure difference can be seen.

\_\./_/ Right, an electric field (push force towards right) moves the particle up in its (left) gravitational self-field, thus accelerating it and leading to observed inertia
\_./ \_./ \_./ From an outside view, the self-fields along the container all pull their particles back towards the left, the leftmost particle has highest pressure (P propto L?)
\_._/ \_._/ \_._/ From an inside view, all the particles are centered in the em+g self-fields so there is no measurable dP in this conservative field

Now consider a container of electrons that is accelerated by gravity (downward=right here). Gravity is a g-field effect, acting on the electron motion which is an em-field effect. Inside the container, the electrons will want to flow towards the rear, due to em-field resistance. Voltage at the rear of the container will rise, and at the front will fall. By the same logic, a voltmeter mounted on the container will not measure any dV; yet by comparing to a stationary voltmeter or to some external reference, a change in voltage should be observed (order-of-magnitude estimates follow the diagrams below).

\_\./_/ Right, a gravitational field (force towards right) moves the particle up in its (left) electric self-field, thus accelerating it
\_./ \_./ \_./ From an outside view, the self-fields along the container all pull their particles back towards the left, the leftmost particle has highest voltage (V propto L?)
\_._/ \_._/ \_._/ From an inside view, all the particles are centered in the em+g self-fields so there is no measurable dV in this conservative field
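
Order-of-magnitude estimates for the two thought experiments: the water case is elementary hydrostatics, and for the electron container I assume the conventional result that gravity acting on a conductor's electrons is balanced by a field of about m_e*g/e; the 1 m container length is an arbitrary choice:

    # Water container: dP = rho*a*L from hydrostatics.
    rho, a, L = 1000.0, 9.8, 1.0     # density kg/m^3, acceleration m/s^2, length m
    print(f"water container:    dP = {rho * a * L:.0f} Pa over {L} m")      # ~9800 Pa

    # Electron container: assume compensating field E = m_e*g/e, so dV = m_e*g*L/e.
    m_e, e, g = 9.109e-31, 1.602e-19, 9.8
    print(f"electron container: dV = {m_e * g * L / e:.2e} V over {L} m")   # ~5.6e-11 V

The tiny predicted dV suggests why such an "electron accelerometer" signal, if it exists, would be hard to measure against any external reference.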

In field theory, particles are interpreted as manifestations of the field, and particles that have mass interact with the inertial field while those that are charged interact with the electric field. But this introduces an unnecessary distinction between the particle and its influence on the field, whereas the two cannot be separated - a particle is defined [f15] by its influence.

------- no particle ->  \_._/ with particle

The "atom" thus is not something that modifies an initially pristine gravitational field - the existence of the atom implies an existence of a nearby 1/r^2 field and vice versa. The two cannot be separated. (How can a massive particle be created, as in electron-positron pair production?) Same with photons - which then couple sender+received into one coordinated "particle". Further with this picture we note an inherent localization of energy:

\_._/ vs -\./-  shallow vs deep field lines around particle, shallow will be more spread out

While case 1 is 'lower energy' it is very spread out, and case 2 is higher energy but this forces it to be more localized spatially. Maybe this doesn't make much sense because electricity is much stronger than gravity in any case and both are infinite-range.

This brings us to the point of units. The speed of light c is too fast - so we are too slow or too tiny. The latter is unlikely since we are huge in atomic dimensions, therefore we are slow/low-energy creatures. Atomic dimensions are tiny - so we are too big (if we use atoms as an elementary measure for some reason). Thus we are both big and slow, but relatively more slow than big. This ought to scale to other physical systems - bigger ones will behave more slowly in a non-linear trend (but also, potentially, in a more complex/computational manner). It is as good a starting point as any to assume that the field laws, already stated to be local, should be both fast and tiny for the most basic phenomena.

Taking a rough shot at what particles might look like in field terms:
g-field --//\\----\\//--
e-field --\\//----//\\--
			1		2	1=anti-particle, 2=particle

A particle and anti-particle have completely opposite fields. The g-field is spatially stable since a=F/m and F propto m, so regardless of whether the g-field is positive or negative the direction of gravitational acceleration is to accumulate matter, thus both particle and anti-particle above experience gravitational attraction: there is no gravitational repulsion. E-field is spatially unstable because like charges repel and unlike charges combine+evaporate creating light - e-field oscillations transferred from a localized to a spreading form. Particles like atoms are spatially stable combinations of g-field and e-field warps, the g-field providing stability and e-field providing charge interactions. Moving around a charge 'lump' like --\/-- recreates predictions of Maxwell's equations.

g-field ----//~~~~\\----
e-field ----\\~~~~//----
				3		3=anti-particle and particle combine and annihilate, resulting in pure delocalized field oscillations that will soon spread throughout space

The world is progressing towards the most stable field configurations. Fusion of light nuclei is possible, but complete annihilation is not - the ideal is around Fe, the peak of binding energy per nucleon. Photons (and any delocalized field oscillations) are also very stable. So eventually we end up with spatially spread low-energy field oscillations and iron-like atoms, not all photons nor all atoms {at least because we haven't seen any appreciable antimatter in the universe}.

Does the particle's "substance" oscillate between the g and e fields, just like light oscillates between e and m fields? Or do both exist simultaneously and continuously coupled? A certain lag from oscillation would be a natural source of inertia [f16].

Oscillation 1: particle oscillates between mostly g-like and mostly e-like
g-field --/\-- -/-\- /---\ ------
e-field ------ \---/ -\-/- --\/--
Oscillation 2: particle's g and e properties 'spin' about a common origin which they cannot attain
g-field /\--- -/\-- --/\- ---/\
e-field ---\/ --\/- -\/-- \/---
Oscillation 2 can be seen as a "spring" oscillator in 1D, with both fields attracted to their central point (consider also how this affects quantum waveform delocalization/spreading?). In 3D, two fields could stably spin around each other, leading to the notion of spin.

Gravity - the gravitational field {here I mean specifically the 1/r^2 potential aspect} - is the result of spatially bound/localized mass-energy. In the same way, the electric/magnetic field is the result of spatially localized electric energy. The field itself (gravity or em) is an unbound state that is infinitely spread out, so any energy which can enter this unbound state readily spreads and becomes unrecoverable (similar to heat). The field can be seen as a single giant particle that extends over all of space, like a Bose-Einstein condensate on a huge scale. No matter how much is added to the field, it will retain an overall "zero at infinity" value, ie only relative differences in the field are measurable, and given enough time without perturbation these differences tend towards zero. The field is continuous, in space and in field magnitude, while bound states are discrete - in space and in energy storage; the more highly localized/bound spatially, the more highly discretized (which will also apply to a bound field, ie inside a Faraday cage/microwave/harmonic oscillator, forming the intersection between particles and fields). This is how I propose to explain the wave/particle duality: light is a continuous field, but we only measure it with discrete atoms/electrons/molecules so we see dots/points instead of smooth contours. Energy is conserved statistically, but we don't have the ability to really see this process because all our experiments are particle-based and thus cause discrete outcomes, which invoke the idea of a perfect (rather than statistical) one-to-one correspondence between an energy transition in one atom and reception in another. As we look at smaller and smaller timescales, we see this possibility of energy non-conservation [f17] but statistically it is always averaged out - we could call it an intrinsic uncertainty of physics, but I think it is easier to say that physics operates deterministically and it is our lack of knowledge about the full field arrangement that gives the appearance of 'random' and uncertain events. Particles can be unbound and permanently "dissipated", as in electron-positron annihilation which creates two gamma rays (e-field oscillation) and a momentum transfer (g-field oscillation). Such oscillations of the field must correspond to the bound state (indeed we can have the opposite process of field->particle if carefully set up), but somehow capable of escaping its spatial bounds. This unbound state is the entropic ultimate end-goal for all energy, as it represents infinite spatial spreading - the most available state. We are then quite lucky that we have so much bound energy all around us - perhaps we are at an early stage of the universe's life cycle. The quadratic potential fields in gravity and em are stable over time and spatially centered on a particle - unlike field oscillations (photons), the quadratic potential does not require energy to maintain. Transmission of information (energy level change) from one place to another could be done along a spatially bound (discretized) system or by using the field oscillations to transfer energy - communication cannot be done with a static quadratic potential field, so it represents the stored energy of the central particle but does not transfer this energy away from the particle. How can this static field be maintained?
Perhaps it is not so static ultimately, emitting impulses at regular intervals as outlined earlier for gravity, with the impulses spreading to infinity and adding up over time to a quadratic potential.

The field then interacts with all bound states (including the originating particle - perhaps keeping it bound/localized by that very interaction; for instance, the attraction of gravity considered at the point of emission (at a nucleus or a subatomic particle) is perhaps so strong that it holds the particle together, the rest of the external field being a consequence of this strong central localization boundary), causing discrete changes in bound states and transfer of energy in discrete amounts. Consider stimulated emission: the e-field interacts with excited bound states in a way that causes them to transfer their energy into the field coherently. Can we have stimulated absorption, ie a coherent transfer back to the bound state? I think so; indeed, because most populations are not inverted (inversion being required for stimulated emission), I imagine that stimulated absorption occurs often on these "absorption-inverted" (ie normal) populations. Bound states that do not have an appropriate discrete energy level are unstable - they will interact with the field around them in a non-energy-conserving way until either the extra energy is dissipated or the missing energy is obtained - this is why we observe discretization in actual bound systems while energy can still be conserved statistically. In fact I don't know if it makes sense to have non-stimulated emission or absorption at all: there is a reason for each energy transfer, including emission from an excited state (which could be driven by a combination of surrounding e- and g-field oscillations - thermal energy (blackbody), incident light (photoelectric effect)) and of course absorption by an unexcited state (due to oscillations in surrounding fields from both internal and external factors - it is no coincidence that photopaper and photodetectors are all temperature-sensitive devices, as thermal oscillations change how much (if any) light is needed to make a discrete transition possible).

There is a distinction to be made here between the field and energy. If the field itself is created by continuously-emitted pulses that spread out, then the pulses must carry no energy/require no energy to generate; at the same time the field must be able to transmit energy differences as oscillations - so it sounds like we can extract energy from an energy-free emitter? Since we never observe this (and our universe is stable - ie this runaway process hasn't yet occurred anywhere in the universe) I will claim that it is impossible to extract free energy this way - because any lowering in magnitude (ie getting closer to zero/flat) of the field corresponds to a delocalization of the associated bound particle, all the way to the limit of the fully unbound particle and zero field. Particle localization and field magnitude are always strictly related, so cyclical processes conserve energy. The reason for this relation in the first place is to communicate information about localization to a particle's surroundings: without a field surrounding the particle, how could any nearby particles know of its presence? And if they don't know of its presence, can it even be said to exist? Consider the charging of a Van de Graaff "globe": in the process I extract electrons and localize them by forcing them onto the globe - this takes energy, but as a result of the localization the e-field around the globe grows appreciably. Then I can bring a grounded object nearby - it will be attracted, and in the process I can recover the energy I put in towards localization, but doing so means I am gradually de-localizing the electrons and reducing the external e-field of the globe. All de-localization is entropically favored, all localization is entropically hindered; thus to achieve localization I must delocalize other particles in the process - the second law of thermodynamics. This happens in gravity - unlike the e-field case above, gravitational fields can "create themselves", starting from diffuse atoms that combine more and more, becoming more localized, increasing their g-field, and thus attracting even more atoms to localize. This seems like a negative-entropy process, but underlying the localization, information becomes dissipated in the field, so entropy still rises. In this situation the information that is delocalized is in the form of heat (photons) released during inelastic collisions; with this delocalization the gravitational "clumping" becomes irreversible, the initial KE of the particles being de-localized (from the localized momentum form) into photon (e-field) oscillations. This happens independent of the creation of the g-field or its associated potential energy (which also escapes as photons but is not sufficient to cause "clumping" localization - being just an energy exchange between g-field and particle localization). The g-field happens to attract other massive objects, allowing clump formation - so increasing entropy is not to be interpreted solely as dissipation (even though it remains true that dissipation/delocalization is the ultimate high-entropy state, ie even atoms on a planet would readily delocalize as photons if given a chance); indeed it can lead to self-assembled structures which promote a fractal-like optimization favoring dissipation (which should not surprise us - ourselves being complex self-assembled organisms who feed off the dissipation stream of the sun).

Fields (electric, gravitational) with their 1/r^2 potentials are dynamic entities that require a spatially bound particle to exist. Fields do not dissipate energy, though field oscillations (photons, gravity waves) do. If the particle creating the field disappears, the field will oscillate as it returns to its zero state, thus dissipating energy - from this I argue that creating the field in the first place also takes finite energy and is part of the energy cost of creating the particle. This energy is expressed as a force on the particle, which remains constant as long as the field is there, likely playing a key role in binding the particle itself (high fields can cause particle decay: such as 1.2 MeV electron->gammas). Normally this field force is constant and takes no energy, but if the particle is moved relative to the field, the force can take energy, leading to the concepts of mass/inertia/electromotive force/inductance. If my approach is correct, such a quantity will be present with *any* field. The self-interaction force also sets the maximum amount of acceleration the particle can withstand - exceeding it will result in the breakdown of the particle and radiation of its energy via the field. Such breakdown is designed for in particle physics experiments like the LHC, and is also seen in nuclear reactors where U-235 is weakly bound in terms of the neutron (strong force) field, so even though it can handle large accelerations of the F=ma sort, it still breaks apart from thermal neutrons that exceed the local nucleus stability thresholds. Bound particles are entities existing inside fields, entities that are possible due to complex interactions between multiple fields that can stabilize a "standing wave" and keep it spatially localized instead of spreading to infinity - this requires quantization of the standing wave's stable states to strict energy levels, even though the particles remain capable of interacting with the field in a continuous manner - the non-discrete aspects are simply radiated away and we interpret them as unstable. Such interactions between fields leading to bound particles are the only "scale bar" we can have - it is pointless to evaluate a field in an absolute sense as it acts the same at each scale. When coupled to other fields, though, and given the speed of light, it becomes possible to generate spatially stable configurations, and such configurations will have a certain spatial scale. On the flip side, it should be impossible to have any particle consisting of solely one field - such one-field entities can only be spreading-out waves, not spatially constrained (because there is no scale bar to constrain them!). Fields are the sole way of transmitting and storing information, as particles or as waves. Waves travel unconstrained in the field - at the speed of light. As particles are an emergent entity of fields (one that has the capacity to stay in place), they can be accelerated up to but never reach or exceed the speed of light (if they could reach it, they would be turned into waves - impossible to constrain from spreading). This also means invisible/disappearing particles are impossible: they are created with a field interaction, and through to their destruction they always interact with the world through the field.

To be clear, I claim that particles (like electrons) and waves (like photons) are *essentially* different entities. The most obvious difference is that particles are massive and can be accelerated/decelerated to any speed between 0 and c, while waves are massless and must move at the speed of light. Particles are individual objects that exist at a specific length and energy scale, while waves are global objects that can exist at any length scale and amplitude - an antenna emits unified field oscillations/waves but an electron filament emits separate/individual electrons. The applicability of equations like the Heisenberg uncertainty relation and the de Broglie wavelength to particles is not taken as indicative of some "wave-particle duality"; rather, the effects described by these equations are seen to be indistinguishable from the effects of not having good control over atom-scale phenomena {and, on a fundamental level, both particles and waves must follow the same field laws as they are created by/within fields}. For instance, an atom released from the trap holding it is seen to "spread out" spatially following a Gaussian distribution: QM says that is due to the wavefunction spreading, I say it is due to us not knowing what momentum the atom had when it was released from the trap (we have no way to check without perturbing it!). The math looks the same, but I think the latter is easier to understand and has a physical reality that is less fantastical than "wavefunction collapse".
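
To make this concrete, here is a minimal numerical sketch (Python; hbar, m, and the trap width sigma0 are set to 1 for illustration - my choices, not values from the text). A classical ensemble whose unknown release momenta are Gaussian-distributed with spread hbar/(2*sigma0) spreads exactly like the quantum wavepacket width formula, so for this experiment the two pictures are numerically indistinguishable:

import numpy as np

# Free particle released from a trap. QM: a minimum-uncertainty Gaussian
# wavepacket spreads as sigma(t) = sigma0*sqrt(1 + (hbar*t/(2*m*sigma0^2))^2).
# Classical: an ensemble with the same initial position spread sigma0 and an
# unknown momentum with spread sigma_p = hbar/(2*sigma0) spreads identically.
hbar, m, sigma0 = 1.0, 1.0, 1.0
rng = np.random.default_rng(0)
N = 200_000
x0 = rng.normal(0.0, sigma0, N)               # initial positions
p0 = rng.normal(0.0, hbar / (2 * sigma0), N)  # unknown release momenta

for t in (0.0, 1.0, 5.0, 20.0):
    x_t = x0 + (p0 / m) * t                   # classical free flight
    sigma_classical = x_t.std()
    sigma_qm = sigma0 * np.sqrt(1 + (hbar * t / (2 * m * sigma0**2))**2)
    print(f"t={t:5.1f}  classical={sigma_classical:.4f}  QM={sigma_qm:.4f}")

The widths agree at every time (up to sampling noise), which is the sense in which the "spreading wavefunction" and "unknown recoil momentum" stories produce the same math.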

There have been a number of brilliant experiments that show why photons should be treated as particles, for instance [http://people.whitman.edu/~beckmk/QM/grangier/Thorn_ajp.pdf]. My view on these results is that it is certainly possible to make very precisely quantized and discrete field oscillations, but this does not mean the field itself is quantized {just because I can make precise-frequency sounds with a loudspeaker, I do not claim that air is quantized!}. A 'rough' oscillation source like a light bulb filament will create all sorts of electron oscillations that add up to waves going in all directions and at varying frequencies; a laser will create coherent oscillation at a single frequency and with a specific spatial direction of propagation, and this propagation is within the framework of Maxwell's laws. Further breaking up the directional laser beam into temporal chunks, one can indeed get to 'minimal' oscillations, beyond which any further spatial or temporal breaking-up will not transmit the useful information that is expected - such as photon frequency, which would require perhaps a wavelength's worth of spatial and temporal extent. These small discrete oscillations then require extremely sensitive detectors to measure, and fast electronics to judge their temporal characteristics. But after going through all this trouble to create quantized and discrete field oscillations (and to exclude the rest of the surrounding light), should one be surprised when the experiment shows that they are quantized and discrete [f18]? Are these particles? They are spatially and temporally discrete, so in that sense yes. But they must always travel at the speed of light and cannot interact with each other or evolve, so in that sense no. Ultimately at this point I don't have enough of a theoretical foundation to convincingly resolve this issue, but the field model proposed above is philosophically and metaphysically much more tenable. Certainly the QM mathematics - combined with the tricks and shortcuts learned over a physicist's career that determine how the mathematics is to be applied to a real experiment - describes experimental results very well. Still I believe local explanations exist for photon and other quantum experiments, which will result in similar math outcomes but a more understandable world picture [for example https://doi.org/10.1126/science.1202218].

What's the difference between fields and moving particles? Are photons particles? I think photons are purely field oscillations, not particles, because: they all travel, and must only travel, at the speed of light; for light from every star to hit the earth as a stream of particles would require a tremendous number of particles, whereas a field automatically decreases in amplitude as it spreads; and photons as particles means tremendous information transfer - each particle has a 3D position+velocity associated with it as well as spin/polarization, and where would all this information originate from? A field hugely reduces the information transfer between (say) sun and earth: instead of perfect 1-to-1 momentum vector conservation, which would be absurd (what happened to the momentum vector of an electron just before getting hit by a perfectly conserving particle-like photon?) and gratuitous/excessive [f19], only the statistically observed conservations apply and only key information is transferred from the sun - at the earth we only care about what the e-field oscillations look like rather than getting bombarded by unimaginable numbers of perfectly conserving particles, "photons" [f20]. Waves, as they always spread in fields, are not capable of interaction with other waves or of computation/state evolution. They pass through each other in an additive manner, any interactions being mediated by particles [f21]. Particles on the other hand can interact with each other and evolve/do computation, but cannot communicate the results except by field waves (or by emitting particles, but such emissions must also be accompanied by waves of the emitted 1/r^2 potential at least, to communicate the presence of the particle to the rest of the world).

Particles only "see" their local field, so it is reasonable that they create a 1/r^2 potential by fixing the curvature/slope of the surrounding field, just like we do when solving for the field mathematically. Whatever the particles do maintains an appropriate field around them even when the external background field changes [f22], even though the particles are "blind" to that change initially - so the mechanism must use field properties, and fixing a curvature/slope is an option that works - though I'm still not clear intuitively as to why this would be reasonable. Further support for the field view of photons: emission/absorption by large structures like antennas and wires; emission/absorption by tiny structures like molecules; easy handling of spreading by the field (for the sun to emit a particle it must supply a direction, whereas the field automatically handles spreading to all of space); explanation of the Doppler frequency shift being able to attain any frequency (no quantization); explanation of Bremsstrahlung radiation emission by moving particles.
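
As a rough illustration of "fix the field at the particle and let the field do the rest", here is a relaxation sketch (Python; the grid size, iteration count, and box boundary are arbitrary choices of mine). Pinning the field value at one interior point and letting every other point take the average of its neighbors (Laplace's equation) relaxes to a profile close to 1/r, whose slope is the inverse-square force:

import numpy as np

# Hold the field fixed at a single interior point (the "particle") and let
# every other point average its 6 neighbors. The relaxed field falls off
# roughly as 1/r away from the pinned point (phi*r roughly constant).
n = 41
phi = np.zeros((n, n, n))
c = n // 2
for _ in range(3000):
    avg = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
           np.roll(phi, 1, 2) + np.roll(phi, -1, 2)) / 6.0
    avg[c, c, c] = 1.0                        # particle pins the field here
    avg[0, :, :] = avg[-1, :, :] = 0.0        # far boundary pinned to
    avg[:, 0, :] = avg[:, -1, :] = 0.0        #  "zero at infinity"
    avg[:, :, 0] = avg[:, :, -1] = 0.0
    phi = avg

for r in (2, 4, 8, 16):
    print(f"r={r:2d}  phi={phi[c + r, c, c]:.4f}  phi*r={phi[c + r, c, c] * r:.3f}")

phi*r comes out roughly constant away from the box edge (it sags near the edge because the box is finite rather than infinite).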

What properties must the field have to work in the observed way? It must allow the interaction between spread out particles by affecting each particle's immediate surroundings. In the absence of particles, the field strives to equalize its value over all space, in the process all energy is dissipated to infinity - this is what drives spreading/dissipation. Yet we see, in addition to the "ultimately stable" zero state, the 1/r^2 field is also temporally stable. Thus at each point field-intrinsic forces must balance out:
---\\\  1/r^2 potential edge
 <- ->  These pulls must sum to zero effect for the 1/r^2 field to be stable as we observe it to be
 
Because the edge curves, there must be fewer (in 3D) high-slope areas and more low-slope areas to balance out, and this causes field weakening as one moves away from the source. Then the 1/r^2 field is stable everywhere except at the origin [f23]: here the pulls are *unequal*
---\\\   ///---
    <- ? ->
and in the absence of a particle there to equalize all the pulls, the 1/r^2 field will immediately radiate away. This enforces that (say) the gravitational field of the earth is kept accurate even if we carry out mass-reducing nuclear reactions here at the surface - the effect on the moon, however small, is still present and gets communicated from the source of the nuclear reaction all the way to the rest of the universe by the field's initial instability at the 1/r^2 origin of that reaction or particle. With a stable particle, the "problem section" at the origin is kept in force balance by whatever the particle does to "warp" the field:
---\\\   ///---
      ~.~      Particle at bottom of field maintains required curvature to keep the rest of the surrounding field from "springing to zero"
and thus the rest of infinite space where the field exists is kept inherently in a steady state. This is elegant information-wise, as it means the particle only cares about its immediate surroundings, and the field takes care of all the interactions with the rest of the universe - keeping everyone "up to date" on what everything else is doing. How could this possibly be done with particles? When a chemical/nuclear reaction causes loss of mass, does it send incomprehensible numbers of "gravitons" to every other bit of the universe? It boggles the mind that this would be a good way for the universe to work (for one, how would each particle know exactly where to send each "graviton"? And where would that universe's worth of information come from, to be present at one particle?). Fields thus mediate interactions - it is no accident that 1/r^2 perfectly preserves probability per unit of surface area, so the spreading information and its effects are statistically conserved.

It is important here to not treat the fields in a too classical sense, because their interactions with particles are to be interpreted differently. In Newtonian gravity, the 1/r^2 potential of a planet is taken to be the base foundation, and point particles evaluate the field value they see and accelerate based on that. In my formulation, the particles are themselves constructed intrinsically of fields, thus there is no "acceleration" or "force", just motions of the field - recalling that in freefall none of the objects involved experience any felt acceleration - they just move according to their local field laws. So instead of a gravitational force balancing a centripetal force, I say "this is just how fields evolve". Additionally, fields can only be evaluated locally by particles, and in order to understand their surrounding field the particles must sample different points around them to pick an acceleration direction - there are no point particles and no absolute "zero at infinity" values. The fact that we can define forces and accelerations is a consequence of the symmetries of the underlying field laws - as I argued with "symmetry theory".

Indeed fields play an important information role: they only allow the transfer of very specific, and minimal/combined, information. Millions of millions of photon-emitting reactions in a lightbulb are automatically combined by the field into a single notion of a wavefront, and this is all that the receiver can make use of - there is thus an effective information decoupling: fields act as one-way (infinite) couplings [f24], but in doing so must necessarily dissipate energy/entropy, as information that enters a field must continue to spread. This field coupling is different from the original infinite-coupling example of a mechanical valve, which uses spatial binding along perpendicular directions to achieve its "infinity". The field uses spatial distance+time. This also suggests another infinite coupling: distinct fields - for instance using an electron (which is massive and charged) to move/affect a particle which is charged but massless (and thus unable to resist the mass effect of the electron directly). This is seen in "allowed/prohibited transitions/reactions", I believe.

How does a particle in a field behave? Assume one side of the earth's g-field: ---\\\
The particle makes a tiny curvature in the field immediately surrounding it:  ---\./\\\
As the field stabilizes, the particle's own 1/r^2 potential is created: ---\_._/\\\
Eventually the 1/r^2 effect reaches the earth, so the earth is pulled towards it (barely)
As the field stabilizes, the particle finds its immediate field edges are unequal: ---\-._/\\\
This difference is due to the earth's externally imposed 1/r^2 g-field of course
The unequal field edges cause the field to adjust itself to be closer to earth: ----\-._/\\
Closer in, the field difference keeps increasing, and we observe "acceleration" of the particle
In all this, the particle still never sees anything beyond its tiny edges, but the particle must be finite-sized in order to compare field levels and know which way it should accelerate.
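
A minimal sketch of this edge-sampling picture (Python; GM, the particle half-width eps, and the time step are illustrative values of mine). The particle never evaluates anything but the field at its own two edges, yet it recovers the familiar inverse-square acceleration:

# A finite-sized particle compares the field level at its two edges and
# accelerates toward the lower side; with a 1/r potential this reproduces
# the 1/r^2 inverse-square pull.
GM, eps, dt = 1.0, 1e-3, 1e-3
phi = lambda r: -GM / r              # the earth's potential, felt only locally

r, v = 10.0, 0.0                     # start at rest, far from the source
for step in range(5000):
    a = -(phi(r + eps) - phi(r - eps)) / (2 * eps)  # slope across the edges
    v += a * dt
    r += v * dt

print(f"edge-sampled acceleration at r=10: {-(phi(10 + eps) - phi(10 - eps)) / (2 * eps):.6f}")
print(f"exact -GM/r^2 at r=10:             {-GM / 100:.6f}")
print(f"after t=5 the particle has fallen inward to r={r:.4f}")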

Its finiteness also keeps its own 1/r^2 potential from becoming infinite at the point origin. A rule that could lead to the observed "acceleration" into a 1/r^2 potential is one causing different clock rates at different field "stretch" amounts, so the field-piece of the particle closer to earth evolves at a different rate than the piece farther from the earth, the resulting phase shift leading to field motion towards the earth. This would be commensurate with general relativity's timing effects. Or we can assume "space warping"/curvature as relativity does, but I still don't intuitively see the value of this. One issue with claiming such warping is what to make of particles that interact with multiple fields, like electrons which have both e- and g-fields around them? Do they warp both; is there an electric-field-space that can be warped/curved independently of real (gravitational) space? This seems unlikely, as the resulting degrees of freedom are far from clear. At least time can be said to apply to all field evolution in a similar way.

Fields will extend to cover all of space. We find, though, that it is possible to construct particle-based (spatially stable) structures that can block out fields from a given volume. While I don't think we have any materials that can block the g-field, we regularly use metal enclosures to block the e-field. This type of structure decouples the large "infinite field" from the tiny field inside the enclosed volume {this separation may hold only at specific frequencies of field oscillation}. In the absence of the boundary conditions typically supplied by the rest of the universe (which are now blocked by the structure), what can the inside, decoupled field do? How will it behave? I believe that the states this field takes on correspond to maximal integrated information, and thus qualia {this decoupled field could also be said to contain some number of degrees of freedom (in terms of external effects on internal states), depending on the way the bounding structure is set up}. Qualia are simply the field rearranging itself to satisfy all available constraints. Just as the e-field at large answers how a "big" antenna can radiate - which electron is responsible for the photon emission? they *all* are, as they all contribute to the field, which remains a unified entity nonetheless [f25]. So inside an e-field decoupled from the rest of the world, all electrons/transistors/particles contribute to create a unified entity, and this represents the spatial localization, origin, and unity of qualia, as well as its extents (in space). The extent and complexity of the system in which this field can exist (in terms of spatial shape) represents the full qualia space - all the experiences/qualia that this system's field can experience in theory. [f26] The field will reconfigure itself so as to go away from unpleasant and towards pleasant states, using the particles it interacts with as memory of which way it should go. Qualia thus have a sense of temporal continuity - they exist continuously in time just as the field does (previously I argued that qualia are pulsed and this makes our existence moment-to-moment, though I've written earlier why I don't really feel this way from introspection). But does this really make sense? All conscious-like systems I can think of require energy spreading/dissipation - otherwise the field remains truly static, with no notion of time - why should it experience qualia if it always stays the same objectively? Perhaps qualia like we experience seem to persist in time because our biological mechanisms constantly re-charge [f27] the chemical/electrical reserves available to the field, and the field constantly keeps finding ways to optimally dissipate this energy - it being received and given back to the decoupled rest-of-the-world through some sort of infinite couplings (ie using particles, which are an infinite coupling for the *field* just as the field is an infinite coupling for the *particles*). This is in line with consciousness as a key component of entropy - seeing the conscious system as a "black box", we see high-quality energy in and low-quality (dissipated) energy out - this means evolution has taken place, and evolution means conscious/qualia-like entities directed it.

The acceleration of gravity is different {really it shouldn't even be the same word!} from that of (say) a car. In a car, acceleration = force felt, while in gravity acceleration = no force felt, and resisting the acceleration = force felt. Thus gravity shifts the "zero point" of acceleration. Orbital motion, instead of being treated as a balance of forces, is then seen as satisfying the zero-point acceleration requirement: continuously free-falling, but going so fast that by the time the object has fallen toward the earth it has already moved far enough perpendicular to the gravitational pull that it can just keep falling, oscillating in 2 dimensions simultaneously:

 /-\			|
|   | = ---- +	|
 \-/			|

Circular motion without gravity requires an imposed centripetal force to stand in for the acceleration of gravity, and this force is *felt* by the object. Gravitational "force" is thus quite a conceptual blunder. Then again, maybe there is a gravitational force but it is not felt because it pulls on all particles more or less equally, while in a car the force is concentrated+transferred by contact with the seat... Does either view help in "resolving" the relative difference between seeing the world accelerating towards oneself and being able to tell whether it is me or the world that "is accelerating"?
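
A free-fall orbit sketch in this spirit (Python; GM and the starting radius are set to 1 for convenience). No centripetal force appears anywhere in the code - the object simply keeps falling toward the center - yet with tangential speed sqrt(GM/r) the fall closes into a circle, the two perpendicular oscillations of the diagram above:

import numpy as np

GM = 1.0
pos = np.array([1.0, 0.0])
vel = np.array([0.0, np.sqrt(GM / 1.0)])    # circular-orbit speed
dt = 1e-3

def accel(p):
    r = np.linalg.norm(p)
    return -GM * p / r**3                   # always "falling" toward the center

for _ in range(int(2 * np.pi / dt)):        # roughly one orbital period
    vel += accel(pos) * 0.5 * dt            # leapfrog (kick-drift-kick)
    pos += vel * dt
    vel += accel(pos) * 0.5 * dt

print("radius after one period:", np.linalg.norm(pos))   # stays ~1.0
# x and y each oscillate between -1 and 1, 90 degrees out of phase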

Numerous experiments show interference of particles like electrons, neutrons, or even whole atoms. How can this fit in with the field model I have proposed? Consider the observations: faster particles (vs reference frame of experiment) have a shorter wavelength/smaller size, particles can show wave-like behavior when made monochromatic (same energy/momentum vs lab reference frame, small aperture), particles cannot be accelerated to the speed of light and light cannot be slowed down at all. Particles in my view are spatially localized information, evolving at the speed of light but with no net motion, like a photon bouncing back and forth between two parallel mirrors: |~>|. Now, the stationary particle is bound spatially so it cannot transfer any information to nearby space. However, the particle can be accelerated and then it does transfer information to nearby space (in the direction it is traveling) - in this sense it behaves more like a pure oscillation like a photon. The more it is accelerated, the more it takes on all the characteristics of a pure oscillation, like a photon moving between mirrors:

----
/\/\
----

and this oscillation character is *directional* - in the direction of relative motion between the particle and lab frames. Earlier I argued how the 1/r^2 potential can be established by a dynamic system of spreading pulses in the field which get continually emitted by particles that are stationary:

--||-- 		-\  /-		\____/		-    -
  ||	+	  \/	+			=	 \	/
  ||								  --

Now consider the particle accelerating: in its own frame, during acceleration it feels its own push against its self-field, leading to the whole notion of mass (and *gravity as perfectly proportional* to mass), and once acceleration is over a steady symmetric self-field returns. But in the lab frame, the self-field of the moving particle is *not* uniform! Since the field oscillates at the speed of light, a fast-moving particle has a skewed self-field like a wave shock front:

--||-- 		-\  /-		\_   /		--   /
  ||	+	  \_|	+	  \_/	=	  \	|
  ||								   -
As the particle is accelerated towards the speed of light, the bottom of the self-field tends to -∞, so such acceleration is energetically impossible [f28]. From the top view, the field also looks quite different:
									  ---\
 /-\		 /--\		 /--\		 /     \
| . |	vs	|->.|	vs	|-->.|	vs	|  ---> .
 \-/		 \--/		 \--/		 \     /
									  ---/

The field thus tends towards a 90-degree 'light cone' in front of the particle as it approaches the relative speed of light. Note this means that a 'snapshot' of a moving particle is inherently different from that of a stationary particle. The stationary particle will have a uniform/symmetric surrounding field and the moving particle will have an altered/asymmetric surrounding field; both field arrangements are stable in time, but the asymmetric one requires that the particle moves spatially. As the particle is accelerated, it should reach an ultimate size limit of its "core", which will be fully exposed at the front of its self-field at the speed of light [f29]. If particles really do emit pulses with a characteristic frequency, then this apparent frequency will be increased from the point of view of an approaching particle by the Doppler mechanism in the 90-degree 'light cone' (see the pulse-crowding sketch after the diagram below). And, with the particle more "exposed" and point-like at high relative speeds, the probing abilities become finer, or "size" becomes smaller; this effect is employed in electron (and even helium ion) microscopes, which achieve higher spatial resolution with faster particles. Observed diffraction patterns of particles come from the lagging field being pushed forward past the particle and then back again during a reflection, acting similar to pilot-wave models of QM [Bohmian Mechanics/De Broglie-Bohm theory], except the pilot wave is local to the particle and interacts bidirectionally with the particle [this can even be recreated with a macroscopic droplet on a water wave field https://www.youtube.com/watch?v=W9yWv5dqSKk which also matches the above asymmetric-field claim]. Note that relativity demands that the interactions lead to the same effect no matter which frame is taken as observer:

  ---\									  				   /---
 /     \	 /-\		 /--\		/--\		 /-\	 /     \
|  ---> . +	| . |	vs	|->.|	+	|.<-|	vs	| . | +	. <---  |
 \     / 	 \-/		 \--/		\--/		 \-/	 \     /
  ---/									  				   \---
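
Here is the pulse-crowding sketch of the skewed self-field (Python; the emission period and the chosen speeds are illustrative). An emitter moving at speed v releases a spherical pulse every dt, and each pulse spreads at c from its emission point; the shell spacing ahead of the emitter shrinks to (c-v)*dt while the spacing behind stretches to (c+v)*dt, reproducing the forward "shock front" and the Doppler increase described above:

import numpy as np

c, dt, t_now = 1.0, 1.0, 10.0
for v in (0.0, 0.5, 0.9, 0.99):
    t_emit = np.arange(0.0, t_now, dt)
    x_emit = v * t_emit                      # where each pulse was born
    radius = c * (t_now - t_emit)            # how far each has spread since
    front = x_emit + radius                  # shell edges ahead of the motion
    back = x_emit - radius                   # shell edges behind
    print(f"v={v:4.2f}  shell spacing ahead={abs(np.diff(front)[0]):.3f}"
          f"  behind={abs(np.diff(back)[0]):.3f}")

As v approaches c the forward spacing tends to zero: the pulses pile up into the 90-degree 'light cone' at the front of the particle.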

It is common in physics and chemistry reaction diagrams to show the concept of "energy", for instance showing fission as releasing atoms+neutrons+energy. But what is this energy and how is it released? I had wondered this for a long time; in fact it would probably be accurate to say this very question is the reason I wanted to study nuclear engineering. What is this "energy", really? From what I understand now about crystal lattices, heat phonons, and radiation slowing-down interactions, I would say this: in a nuclear reaction, some type of info exchange happens that turns the self-contained information of the nucleus into information that is able to propagate outwards quickly. The kinetic energy of the split nucleus fragments is what is meant by "energy" - whereas initially the atom was still, its fragments end up moving apart very quickly (same momentum but high KE) and, by interacting with surrounding atoms, eventually bring about an increase in crystal lattice temperature/vibrations - heat. There is a 'temperature' of nuclear reactions - the highest crystal temperature that a reaction could bring about - which must be very high indeed for a nuclear reaction; fusion is indicative of such temperatures. Using nuclear power at a low temperature like 300C then is quite a waste entropy-wise {that is to say, there is a lot of wasted "computational power" that ends up going into the not-useful-to-us evolution of the crystal lattice}. In the course of the nuclear reaction itself, the info exchange is strictly local - only felt by what had been the nucleus, and affecting the rest of the material much later through individual interactions with the fission fragments.

Why do these fragments fly apart explosively? Well, gravity is acceleration of objects toward each other. Anti-gravity then is acceleration of objects away from each other. Nuclear fission (or any energy-releasing reaction) causes a loss of mass in the final products - this loss of mass must cause an outward-propagating ripple in spacetime which ensures that the balance of the gravitational field around the former nucleus is maintained - since the mass is now lower, the gravitational attraction also needs to be lower. To conserve the total gravitational field, and communicate the information to the rest of the universe, the gravitational change must be communicated as a spreading Gaussian with constant spatial integral (a numeric check of this constant-integral spreading follows the fission diagram below).

   (..)			  <-..->        <-. O .->	   <-.  O  .->			O
 
--\    /--	->	--\    /--	=	-\     /-	->	\/\   /\/	->	--\   /--
   \__/			   \__/			  \/-\/			   \-/			   \-/
				
			+		||
				---|  |---

In fission, the initial nucleus (..) splits into two fragments <-. and .-> and a remaining fragment O. The gravitational field during the fission event behaves as if an anti-gravity impulse has been added, and the '.' fragments "ride the wave" of the spreading antigravity impulse, which accelerates them while leaving the remaining O fragment with a lower gravitational field. When the reaction happens, the anti-gravity impulse due to the loss of mass is most concentrated at the nucleus, so it has a very strong impact on the fission fragments, which it pushes apart at tremendous speed. By the time the impulse extends beyond the nucleus it is so weak as to have a barely measurable influence - however little it takes to indicate the loss of mass/binding energy at a distance.
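
A numeric check of the constant-integral spreading claimed above (Python; the widths and radial grid are arbitrary choices of mine). If the ripple's amplitude falls as 1/width^3 while it spreads in 3D, its volume integral - the total signal indicating the lost mass - stays fixed:

import numpy as np

r = np.linspace(0, 50, 20001)
dr = r[1] - r[0]
for width in (1.0, 2.0, 5.0, 10.0):
    amp = 1.0 / width**3                          # required amplitude decay
    g = amp * np.exp(-(r / width)**2 / 2)         # spreading Gaussian ripple
    integral = np.sum(4 * np.pi * r**2 * g) * dr  # integral over all space
    print(f"width={width:5.1f}  peak={amp:.4f}  integral={integral:.4f}")

The integral stays at (2*pi)^(3/2) ≈ 15.75 for every width: the impulse gets weaker everywhere as it spreads, but its total "content" is conserved.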

Consider rocket propulsion: chemical burning creates reactions that are anti-gravity in nature, and this pushes the rocket up against earth's gravity. In all this, the gravitational fields around the earth and the rocket are kept accurate by the spreading wavefronts of antigravity impulses. Now, since reactions can both lower and raise the mass of the products (by releasing/absorbing energy respectively), it follows that we can also emit gravitational impulses that signal an increase in mass and pull objects together. So consider a propulsion system:

<-[ Mass-lower reaction ]-> * [ <- energy <- ] : -> [ mass-increase reaction ] <-
<-[     antigravity     ]-> * [ ->  flow  -> ] : -> [        gravity         ] <-

-> [ Net force ] ->
<- Net momentum emitted <-

As with the idea of antigravity impulses spreading out from a center into infinity (this spreading being the entropically most favored state), I would now like to view gravity as also formed by such impulses. A gravitational potential is then not a static entity, but a *dynamic summation* of a time-integrated set of emitted gravity impulses ("gravitons") which all tend to spread to an infinitely diffuse state [f30]. These impulses are always emitted by any energy which is made to be localized - ie mass-like, evolving in time rather than in space. As soon as the energy is allowed to become non-localized, and to the extent that it is space-like (not localized, spreading at the speed of light), the gravity impulse is not emitted, and perhaps an anti-gravity impulse is emitted (this may amount to the same thing: \_/ + ^ = \-/); thus any spreading type of energy is anti-gravity-like (repulsive), as we see with photon momentum repulsion. Attractive interactions (gravity, strong force) are emitted by and lead to localized, time-evolving particles (planets, nuclei - all stable in space and evolving in time). Repulsive interactions are the opposite: a result of less mass, and causing the surroundings to also become spread out in space (evolving in space - traveling at c, not evolving in time).

So if a single impulse spreads like:
--||-- 		-\  /-		\____/		-____-
  ||	->	  \/	->			->	 
  ||								
Now we emit impulses periodically:
			--||-- 		-\  /-		\____/
			  ||	->	  \/	->			
			  ||							
						--||-- 		-\  /-
						  ||	->	  \/	
						  ||				
									--||--
									  ||	
									  ||	
And all these sum up to:		   -\    /-
									 \  /
									  --

The result is a quadratic potential surrounding the impulse source. If the period of emission is related to how often energy takes on the same states (2*pi rotation for example) then spatially smaller things are heavier (nucleus vs electron).
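
A summation sketch of this construction (Python; the pulse width, emission period, and propagation speed are illustrative choices of mine). Summing all the spherical pulses currently "in flight", each a thin shell with amplitude falling as 1/r, yields a static profile proportional to 1/r - the potential whose slope is the inverse-square force - even though every ingredient is dynamic:

import numpy as np

c, dt, width = 1.0, 0.05, 0.1
t_now = 200.0
t_emit = np.arange(0.0, t_now, dt)
shell_r = c * (t_now - t_emit)        # current radius of each emitted pulse

def summed_field(r):
    # each pulse contributes a thin 1/r shell centered at its current radius
    shells = np.exp(-((r - shell_r) / width)**2) / r
    return shells.sum()

for r in (2.0, 4.0, 8.0, 16.0):
    print(f"r={r:5.1f}  summed field={summed_field(r):8.4f}"
          f"  field*r={summed_field(r) * r:.4f}")

field*r comes out constant: the steady emission of spreading pulses has summed to a stable 1/r potential around the source.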

The conclusion is that locally bound energy seems to attract other locally bound energy, while unbound energy seems to repel other unbound energy (completely diffuse spreading). Why should bound energy be attracted to each other? Perhaps because this attraction ensures that eventually all bound forms will be able to interact and release their energy into the unbound form - the ultimate entropic goal - for escaping the local bounds may require a transition only enabled by another massive object (this could also be the meaning of life: to help release energy/increase entropy in ways that "dumb" atoms cannot). Why should bound states seek this release? Because that ensures that the universe eventually reaches a "zero", infinitely spread out/diffuse equilibrium state - no particle or entity can survive forever without any mechanism to destroy it, for that would be illogical on philosophical grounds: such a particle/local state would really exist forever and even outlive the universe - whatever that means, like an indestructible object. I say that the universe strives for such an infinitely diffuse state and is constantly evolving towards it - but why? With the big bang hypothesis, the universe started out as a point - the dual of the infinitely diffuse final state, both 0-dimensional. Indeed every entity we talk about has a dual in this sense - whatever we describe, we also describe the rest of the universe that is left out. There is always an 'inside' and an 'outside', in a control volume or surface or object. Why should the universe go from one 'zero' state towards another? Maybe both are equivalent and the progression of time is illusory - though I've argued that our sense of "now" strongly indicates time is not just made up but a consequence of evolution (temporal iterative change in systems). But if it is illusory, why do we feel it going forward, in the diffuse/entropy-raising direction?

If I imagine going backwards in time, then I really will have memories of the future - my past in forward time is my future in reverse time, so I will know exactly what "will happen", but be unable to take any actions to change or alter it. In forward time, I don't know what will happen, but I can take actions to alter it [f31]. Quite an amazing duality. Why do I feel time going forward? Because I am after all a localized system, and as such I can only evolve/learn/memorize by increasing entropy - releasing some of my energy so that the remaining localized system is prepared for future encounters with memories of the past, memories made only by dissipating energy [f32] and thus placing it beyond our localized reach. For localized systems like us time runs forward, for spreading systems like light time doesn't run at all (indifferent to concentration vs spreading - all like a single time instant to the impulse, though not to us external localized observers), and for diffuse systems like the universe maybe time even runs backward (ie entropy-*reducing*). Reverse time would be a weird thing indeed. If forward time is the realm of "my control" and individuality, local actions and influences, reverse time is "divine control" and complete lack of autonomy - at any moment energy from far reaches of the universe concentrates precisely in a given way to cause you to do something, "remember" some future action, or spontaneously bind energy into localized structures. The universe then can play and do all sorts of things to modify the localized systems, keeping a record of it in the "diffuse memory".

As I mention energy spreading often, I would again point out that energy doesn't "want" to spread, but the number of possible spread states vastly outnumbers the number of localized states (see the counting sketch below), so once the barriers to spreading are removed the energy can be expected to rapidly+irreversibly spread; that is the most stable (probabilistic) state. All energy flows must be in the diffuse gradient direction. Because energy always seeks the most probable (most diffuse) state, we see that systems cannot achieve more than 50% (?) energy efficiency in an absolute sense. Maybe this is misguided, as a well-designed system makes use of the spreading to the greatest extent, such that the finally-released energy is already in a very diffuse state, thus approaching very high efficiency; depending on how well localized the energy was initially, maybe ~100% is achievable.

-||- A-> [efficient system] B-> -____- C-> ------___-------
 || 
vs
-||- A-> [inefficient system] B-> -\  /- C-> ------___-------
 || 								--

During step C, no useful work is done while energy spreading is still occurring. The efficient system extracts lots of work from the spreading, and the remainder in C is not very much, while the inefficient system extracts only a little work, so lots of potential energy is wasted as dissipation in C. Then it is easy to claim all heaters are 100% efficient! And that all systems are ultimately fancy heaters.
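
To put a number on "spread states vastly outnumber localized states", here is a counting sketch (Python), using the standard stars-and-bars count of ways to distribute q energy quanta over n sites; the particular q and n are arbitrary:

from math import comb

# Number of ways to distribute q quanta over n sites: C(q+n-1, q).
# Letting the same energy reach more sites makes the state count explode,
# which is all that "energy spreads" means probabilistically.
q = 20
for n in (1, 2, 5, 20, 100):
    print(f"{q} quanta on {n:3d} sites: {comb(q + n - 1, q):,} microstates")

With the energy confined to one site there is exactly 1 state; spread over 100 sites there are ~10^22 states, so once the barriers are removed the spread configuration is overwhelmingly the most probable.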

Gravity in this model is also always spreading. Then it is no coincidence that gravity is a constant *acceleration* - if we had a way to keep accelerating an object in a gravitational field, its energy could increase tremendously and indefinitely, yet the field itself takes no energy to maintain - the gravitational impulses are just emitted by matter, always, as a fact of matter's existence. I have a feeling this will not be possible, but if it were possible to capture these gravitational impulses and localize them/keep them from spreading, we would have a truly inexhaustible source of energy -> the impulses are always emitted as localized, we extract useful work by letting them spread/diffuse in a controlled manner, and we can be kings of the universe [f33]. Probably the limit here is an uncertainty principle - by binding/localizing one parameter we must have a diffuse dual parameter, so we can bind the energy but lose the particle (ie one-shot extraction) or bind the particle but lose the energy (continuous but zero extraction), never infinite extraction.


[f1] Will the blackbody brightness be the same for all materials? Will the spectrum be the same at the same temperatures?

[f2] in retrospect, I am not sure about this

[f3] this also suggests that a vanishing particle will create oscillations in the g-field at a frequency that is characteristic of the field itself rather than the particle; this would be an interesting though difficult experiment

[f4] later I retract this viewpoint, since emitted electrons are essentially different from emitted photons; electrons always remain particles and do not get converted to waves.

[f5] here note again the seeming flexibility with which electrons take on various and continuous energy levels in the 'free' state - as in accelerators and free-electron laser, as opposed to typical quantization in the atom. So it is the binding to the atom that causes quantization, not nature of electrons intrinsically.

[f6] note that the string's radiation efficiency at the harmonic frequency is actually *low* which causes it to continue oscillating at a *high amplitude* for a long time whereas the other non-harmonic frequencies are rapidly radiated out and thus decay to *low amplitude* in the string; the sound of a singular note from the instrument comes from the slow transfer of the string's oscillation at the harmonic frequency to the body of the instrument which then effectively radiates it into the air; a bare string just cuts through the air and does not make much sound

[f7] more or less quantized: a spring-mass system will still respond to near-resonant excitation, and in QM we see that atoms do as well - but this is explained in a backwards way as some probability integral for absorbing a photon of a different than resonant frequency, leading to energy non-conservation.

[f8] zero referenced to what? we do see in photoelectric effect that applying a voltage reduces the necessary light frequency

[f9] An annoyance from multiple quantum physics lectures I've seen is the discussion of baseball-sized quantum particles as an explanation for why quantum effects don't matter on a macro scale. The problem is that there is no such thing as a baseball-sized quantum particle, instead we have real baseballs made of very many microscopic atoms each of which follows quantum effects, so the effects do not magically disappear.

[f10] even wackier, one could argue that this actually accelerates other objects towards/away from observer, and this is why we see a 1/r^2 dependence of the gravitational/electric potentials

[f11] also Bremsstrahlung radiation/FEL, diffraction/refraction/reflection, escape velocity, atom trap (lasers), SEM electron "size", gravitational waves

[f12] rather, the field constantly updates across all space, but there is a local requirement at particle edges to keep a potential maintained

[f13] the electric field should be "squeezed" by gravitational acceleration then? Is this something we observe, or an asymmetry (which would be inelegant)?

[f14] in this distinction, a close look at how we currently differentiate between 'near-field' and 'far-field' effects should be instructive

[f15] this isn't necessarily the case. the 'near-field' localized particle contains more information than its 'far-field' effects convey.

[f16] and is being considered in some unified models for tangentially related reasons [The Road to Physics]

[f17] it is only an apparent non-conservation, because the excess energy comes from some local influence, if we had full knowledge of the system we would find energy is exactly conserved at each step, not merely statistically

[f18] it should be noted also that by using such 'photons' one is probing the very short time-scale and length-scale behavior of materials, such as beamsplitters and polarizers; this behavior should not be expected to be the same as that with time-averaged large-scale oscillations - perhaps at these scales beamsplitters do not 'split' but instead 'oscillate' between one direction and another. Note that the single-photon nature disappears if merely a low-intensity 'rough' light source is used vs a special 'photon' source, in line with the field view proposed here

[f19] indeed this is not scientific, but there is no way to prove the reality of any interpretation, so I only follow this because it makes more intuitive sense to me

[f20] in other words, an electron on the sun oscillates and contributes to the field oscillation; this field oscillation eventually reaches earth which causes an electron on the earth to oscillate. The electron on earth may be measured to have a specific momentum, but most of that momentum information came from thermal interactions with nearby electrons, not from the sun; only the oscillation has been transferred from the sun. This is as opposed to the particle view, in which the earth electron's momentum is perfectly related by the sun electron's momentum, transferred directly by a "photon" particle. In both cases energy and momentum are conserved, but the latter requires tremendously more information transfer, and where should a photon particle emitter keep getting all this information from?

[f21] we can decompose waves into constituent sine functions which can be said to add up to the observed oscillation. However the "universe computer" does not add up individual sine waves, rather the field is always single-valued at each point in space, and behaves as a differential equation like the wave equation (analogous to how a computer solves a finite difference problem). The mathematical requirements for how a field behaves locally can be found by requiring the property that sine waves add up

[f22]
consider a particle in a field, --\_._/--
now the outside field levels drop: _-\_._/-_
this drop propagates towards the particle __     __
the particle maintains field curvature:		\_._/

[f23] even in the classical/accepted model there is a problem at the origin as the 1/r^2 equation goes to infinity; I think the resolution proposed here is more reasonable

[f24] because a field receiver may well not be capable of sending a response, ie listen-only. With a valve, control side to valve is 100%, controlled to valve also 100%, but controlled to control 0%. With a field, sender to field is 100%, receiver to field also 100%, but receiver to sender 0%. Maybe this is the reason our brains use chemical (particle-to-particle) signaling in conscious processing, as it seems a field-based coupling is a definite consciousness border/boundary.

[f25] I am claiming that our consciousness comes not from neurons but from the complex 3D electric field they set up inside the brain. In fact our consciousness *is* this 3D electric field, and the qualia we can potentially experience is determined by the various possible shapes/arrangements this field can take on as determined by the neuron network connectivity.

[f26] this implies that brain activity can be monitored and even altered using external em detectors - electro- and magneto-encephalography certainly work as sensors, and external electric/magnetic stimulation has been used in a very rough manner for general treatment (transcranial magnetic stimulation), but I'm not aware of any em (microwave/mm wave) spectrum emitters or detectors used to sense or modify brain activity. If my field view is accurate, this should be possible with present technology by a 3D antenna arrangement around the brain. This also has interesting implications for brain-brain interactions at very close range (ie head to head - further out the energetically possible radiation is just too low).

[f27] maybe sleep is also required for such re-charging, ie a state in which the system is no longer decoupled so it can be refreshed before isolating again when awake; because it has to be isolated for conscious processing, and if it is isolated it can't be recharged. It can't be a coincidence that all animals sleep - not sleeping would seem to be an evolutionary advantage, yet no animal has done away with it.

[f28] since it is this self-field that the accelerating device must overcome, by its own field:
-\ |--
  \|
  ||
This warped field is *stable* for moving particles, just like a 1/r^2 field is *stable* for stationary particles! Both do not dissipate or require energy to maintain.

[f29] this means the particle cores can directly interact with other particle cores to produce nuclear reactions and create new particles. This is what happens in accelerators like LHC. At low speeds the particles' surrounding fields keep other particles away so nuclear reactions don't happen.

[f30] such a dynamic entity implies information non-conservation; I later revise this viewpoint to claim there is an inherent stability to a 1/r^2 potential

[f31] this goes beyond just 'feeling in control' because my premise is the world is deterministic after all so we can't "really" alter the future, but we can control energy dissipation going into the future and cannot do this going into the past

[f32] because memories have the capacity to be arbitrary, their creation requires dissipating energy; memories cannot be made in a reverse-time world

[f33] perhaps it is this spreading of impulses that is responsible for the 'universe computer' functioning and computation, doing this would be equivalent to turning the laws of physics to our advantage
