Deterministic World

Part 1. Physics

7. Conscious Systems

In grad school I took a physical metallurgy class, which had a fascinating notes chapter implying that the motion of defects in metals can be considered a kind of "life". Earth is bathed in low-entropy energetic photons from the sun, and "poops out" high-entropy low-energy photons into space. A piece of metal that is being bent/deformed gets low-entropy motion from the prime mover and dissipates high-entropy heat in the process of deforming. The defects in the metal piece (vacancies, dislocations, crowdions) move about - interacting with others and with themselves - so as to bring about the high-entropy state; their motion is what *defines* how the metal will change shape, and they can't help but move so as to maximize entropy. Defects can grow, multiply, and annihilate, forming an "ecosystem". Experiencing "life as a vacancy", one would find that life proceeds so as to increase entropy and in the process move the metal into a new form/shape. To us, operating a machine which bends a metal piece into shape, any such life is not really important as long as it leads to the desired bending. So for a defect there may well be no purpose to life in its own terms - it just moves around, dissipates energy, and moves other atoms in the process. But the ultimate "prime mover" energy source bending the metal, the purpose of bending the metal, and even the nature of the metal, are far beyond the comprehension of the defect 'lifeforms'. The end result needs them but doesn't really care about them, because all they can do is dissipate energy and increase entropy. This may not be so surprising to apply to our world. Life on earth evolved because of the ready availability of high-quality energy from the sun (indeed, an excess of it) and has been stable because it is effective at increasing entropy. In all we do, we must increase entropy, and that is the state that the universe "seeks". There is no particular reason for our lives to be "meaningful". This question of the meaning of life always seemed a bit strange to me - even around 10 years old, I had the urge to say "that's easy - there is no meaning!", but discussions with adults made me realize this is not socially acceptable; everyone treats it as a monumental and deep question, so if I could answer so blithely I must be really naive/missing something. Now, having established my values and with scientific support, I can be certain enough in this thought to write it down here, and perhaps to start accepting it.

As I mentioned earlier, our society has a screwed-up idea of death - death on the battlefield is honorable (even better if you kill others too), while death of old age - immobile, breathing by machine, all spirit wasted away in a "cold" hospital room - is held up as the ideal to strive for. I think it is a pathetic sight to have an old-age death like this; it is an ultimate insult to the person that once was. If I am to accept that the "meaning of life" has no concern for me, it is easier to come to terms with death - and the joy of being in control, choosing my last day. Dying weak and feeble in a hospital bed, upsetting all immediate family members who have to take time off work (after paying your caregivers for years) - *this* is the ideal, coveted, old-age death? It is absurd. We pretend death is not real, that somehow it is better if it's unplanned and random; even old-age death is basically saying - I'll let some random disease kill me, don't care when, don't care how. It is like covering your ears and pretending that if you don't acknowledge the existence of death it somehow won't happen; it will happen - and you won't be in control, which is probably the least logical way to approach this inevitability.

Our bodies are composed of the same building blocks as inanimate objects, and they follow the same laws of physics as systems we measure in the lab and codify in textbooks. This much - which seems easy to accept given the success of the scientific approach in medicine - says that, at a small enough level, the electrons/atoms in our body follow the deterministic, impersonal, exact laws of physics (indeed we can now computer-simulate drug interactions with the body on an atomic level - as much proof as we are likely to get for this view). Thus the whole of our bodies is also a 'cold' physical system - and this is not really so surprising - looking at other people/animals/bugs/plants, we can scientifically quantify any organism we want, and find that in given conditions it behaves in a given way, just like a complex computer program/robot. What *is* surprising is that I have experience - I can feel pleasure/pain, and be in a certain mood, resulting in a sensation that cannot be put in words/symbols - and based on other people's writing and talking it seems they also have experiences, even while all their actions could be described as if/then statements. Every indication tells us that humans follow the laws of physics, so the only conclusion is that this vivid experience/feeling is a *physical law*, just like Newton's F=ma or Einstein's E=mc^2. Our experience of life is not something to be left to the "soft" social sciences/humanities, but is as cold and hard as the conservation of energy and entropy laws. This means we can apply scientific analysis to understand what determines consciousness - and I believe we will find our naive view that only humans have feelings/experiences as outdated as flat earth. There are two questions underlying consciousness. First, I feel things - raw senses - in a way I cannot describe which is yet intense and real - for example the color red, a touch on my body, sensations of cold/hot, pain from injury, tastes and smells - all of which I will refer to here as "qualia". Why should qualia exist, and what defines them physically? (For instance, photons have frequency and intensity - are qualia similar? What differentiates pain from pleasure? What "creates" or "destroys" qualia? What transmits or transfers them in time?) Second, the things I feel all seem to affect a common core - what I call "I" - so there are not just qualia but a "someone" who lives out the experience and actually has the feelings associated with qualia. Who or what does the experiencing (is it a special arrangement of atoms? A closed-loop system? Individual electrons?)? How can this entity get input from multiple qualia simultaneously and experience them in a single coherent sense? How can this entity stay stable in time/be created/destroyed?

There are a few important observations I can make by introspection combined with existing science knowledge. First, the experiencing entity "I" seems to be a coherent, single entity that has well-defined boundaries on the qualia it receives and on what it can control. I can move my hand at will and sense objects that touch it, both carried by unspeakable "move there" qualia and "touch there" qualia. Things like my heart rate or perspiration or digestion I have no control over, nor can I feel any associated qualia except indirectly (ie hear my heartbeat or feel pain in the digestive tract, but I cannot feel my heartbeat as its own sensation nor feel the food moving about as its own sensation). So consciousness has physical boundaries - the upper limit is temptingly put as all the nerves within my body. Anything going on which doesn't affect the immediate vicinity of my body has no effect in generating qualia or affecting my life experience. Second, I experience a sense of time and continuity. I can think back to this morning and recall a vague experience of what I felt back then (though this experience is rather dull and muted compared to real-time sense qualia, it nevertheless has that unspeakable semblance to it - it is not purely abstract/linguistic). I feel like the "I" that I recall existing a moment ago did not disappear with a new one created in its place, but rather that I experienced one moment after another after another, in succession and without a marked change in "I" nor in what the qualia feel like to this "I". Yet when I fall asleep I feel this consciousness completely cease any feeling or qualia-response. I have learned over many years to know what to expect when waking up, so I am not surprised by having slept, yet at first this must feel like a mysterious phenomenon to a baby - at some point the world and qualia experiences fade away, only to reappear moments later at a different place/time. This is indicated by the confusion I and others experience when (ie) falling asleep from exhaustion or from drugs - a hazy memory and spending time figuring out "where/when am I now?" and "what happened when I fell asleep?". So at the same time we have evidence of non-continuity, or at least suspended qualia response (and thus consciousness), which can be stopped/started on a regular basis. Third, when qualia happen we cannot choose whether or not to feel them/experience them except in a roundabout way (shutting eyelids or turning off the lights to stop visual qualia). This is phrased as a tautology - since qualia are defined as the things we feel, unfelt qualia are impossible. Still it is an interesting point that our control, if any, is only indirect - we cannot choose to ignore or block qualia, they must happen and be felt regardless. They cannot be made optional. Even if it's impossible to do anything about a toothache, the conscious "I" feels it continuously (unless blocked chemically or physically - in which case we don't feel the qualia at all). We cannot be aware of the presence of qualia except by directly experiencing them - we can't feel their presence somewhere and then decide whether or not to experience them/act upon them. Fourth, we have no control over qualia - we cannot consciously create them (except if imagination/thought counts as qualia), nor directly transmit them (the torturer does not feel the pain of the tortured; there seems to be no "conservation of qualia"). We cannot even describe them by any language means, and emotional means like opera/theater are shaky and abstract.
Yet while we can't consciously make qualia, the qualia we feel are generated by our own bodies - eyes generate "light feeling" qualia, skin pain receptors generate pain qualia (the 'generate' here should not be taken too literally; all I mean is that if the qualia were to be defined by anything, it would be by how our bodies are wired rather than by any objects outside the body). Fifth, qualia seem to be multi-faceted, not different representations of a similar thing like photon energy. This may be something learned by the brain in the first years of life and thus an arbitrary mapping, but I feel I cannot arrange qualia in any logical 1-D order except by intensity. For instance, I cannot compare a touch qualia to a sound qualia and tell that one goes "before" the other, as if qualia could be defined by a single "frequency" parameter. I can, though, match the intensity of diverse qualia, ie match the pain from cold water to the pain from a loud noise. The qualia feel unique/distinct based on the sense, and I can feel multiple senses at once, ie listening while looking while walking/feeling the ground. As mp3 and jpeg compression show, qualia don't have a 1-to-1 mapping with the physical input to the sensory organs; that is, different physical inputs can create the same/similar-feeling qualia. Sixth, "I" seem to be a single entity - despite the millions of atoms/electrons/cells/amplifiers in the brain that underlie thinking/emotion/information processing. One view is that "I" is thus some elementary particle, but I find this unlikely because there are limits on what information a single particle can receive/output due to its size and localization, and the simultaneous qualia we feel must require at least a number of parallel connections. This means that whatever process is responsible for creating consciousness can unify very large systems into one coherent entity that I experience as myself. As seen with brain injuries and memory loss, the "I" can typically persist and carry on even if parts of the brain don't work - so it is not a super-fragile system where everything has to be just perfect, but rather is adaptive to whatever circumstances are at hand (part of the brain not working, falling asleep, taking hallucinogenic drugs) and still feels like the same "I" to the person experiencing it. Mentally impaired people don't have an "inferior experience", and are unaware of their deficiencies as feel-able qualia, being only able to understand such differences by indirect/abstract comparison with others. So the process that creates consciousness is not very picky - it doesn't require any exact neural network or atoms arranged just so, and yet what it finds (within the brain) that can support consciousness, it unifies into a single "being".
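On the mp3/jpeg point above - that physically different inputs can map to the same felt qualia - here is a minimal sketch in Python (my own illustration; the quantizer and its parameters are arbitrary stand-ins, not a model of any actual sense organ):

def encode(intensity: float, levels: int = 8) -> int:
    """Quantize a continuous stimulus (0.0-1.0) into one of a few discrete codes,
    the way a lossy codec discards detail that the output never preserves."""
    intensity = min(max(intensity, 0.0), 1.0)
    return min(int(intensity * levels), levels - 1)

# Two physically different stimuli end up with the identical internal code:
print(encode(0.50), encode(0.55))    # 4 4
print(encode(0.50) == encode(0.55))  # True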

So we continue to the question of consciousness in systems - taking advantage of the fledgling information theory. The process which I mentioned as different and mysterious is the "upconversion" process - that which occurs in amplifiers/transistors, anything that controls something "big" by changing something "small". What exactly makes the amplifier possible? My theory claims that all interactions must be 2-way, that is information-preserving, so a first guess might be that such control is impossible - just as the momentum of a big thing can only be slightly affected by the momentum of a small thing (linear relation, equal exchange) rather than outright controlled. A train conductor can, in a real sense, stop the train with the push of a button whereas pushing the train would do effectively nothing - this is enabled by a series of amplifiers very carefully arranged, so the finger pushing the button feels none (or a negligible part) of the momentum change that the train undergoes - this certainly doesn't sound like 2-way exchange. Amplifiers must be 2-way devices which still create a directionality {the infamous arrow of time}, a notion of 1-way transfer of information rather than exchange. How is this possible? Consider a model of an amplifier as a valve: [f1]

			__
   ________|  |____
  /		 __|  |   /
 /		/ /|  |	 /   <- ->
/		--		/
----------------
		   |  |
		    --

The control piece can move horizontally left/right and is a flat sheet with a hole that blocks or enables flow through a perpendicular vertical length of pipe. If the asymmetry is problematic, one could imagine a second hole and length of pipe arranged such that the flow area remains constant (one closes as the other opens); however this does not affect the findings or the effect here. In the ideal amplifier, the control sheet can move to cut the flow (horizontally) with no resistance: it is 100% coupled to whatever is desired to be amplified, and fully takes on its (horizontal) position. The exchange there is 2-way, nothing unusual (in a real system a certain degree of non-ideal coupling may be required, such as a bare minimum to transfer classical information, but this will not be discussed for now as I am still developing the concept). The amplifier is designed such that vertical motion of the sheet is not allowed; the vertical motion is 100% coupled to the amplifier structure which stays fixed, also nothing unusual here. Any particles flowing in the pipe that hit the sheet and are pushed back vertically also become 100% coupled to the structure. The system holding the amplifier is assumed to be designed such that the particle flow cannot couple to the sheet's horizontal motion (this may not be really plausible - some minimal coupling is required, see above - but we assume it possible for now) - or, equivalently, all particles can only move vertically. If the particles couple horizontally, the 'controller' will be affected and this cannot be called ideal amplification. As for the particles that do not hit the sheet - passing through the hole - they have 0% coupling with the sheet. So actually there is no real information coupling between controller and flow - the flow cannot affect the sheet's horizontal motion in any way; on the other hand, the controller moving the sheet cannot learn anything about the flow (or lack thereof). Whatever happens in the flow is of no consequence to the controller - eerily similar to quantum mechanical "black boxing", as the amplifier affects the flow in a mathematically consistent way but unless one already knows what the flow "should" be there is no way to actually solve for it. Just like trying to push the gas pedal on a car that is powered off - the controller has no direct coupling to the controlled system, and such lack of coupling is required for any notion of amplification. We are clever enough to figure out indirect indicators of the flow and then couple them to the controller - as will be outlined later, this is a first step towards consciousness.
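A minimal sketch of the ideal valve amplifier just described, in Python (illustrative only; the function names, the 0.5 threshold, and the numbers are arbitrary choices of mine): the controller fully determines the sheet position and hence gates the flow, but nothing about the flow ever enters the controller's state.

def sheet_position(controller_setting: float) -> float:
    """The sheet's horizontal position is 100% coupled to the controller
    (ideal: no resistance, and no back-reaction from anything downstream)."""
    return controller_setting

def flow_out(flow_in: float, controller_setting: float) -> float:
    """Flow either passes through the hole (0% coupling to the sheet) or is
    blocked by it (100% coupling to the fixed structure); nothing in between."""
    hole_open = sheet_position(controller_setting) > 0.5
    return flow_in if hole_open else 0.0

# The controller "controls" the flow...
print(flow_out(flow_in=10.0, controller_setting=0.9))  # 10.0
print(flow_out(flow_in=10.0, controller_setting=0.1))  # 0.0
# ...yet its own state is identical whether flow_in is 10.0 or 0.0: no information
# about the flow ever reaches it, matching the "no real coupling" argument above.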

It is easy enough to see the lack of coupling from flow to controller, but how is one to say there is no coupling from controller to flow? Isn't that the whole point of an amplifier, to make the flow directed by the controller? Yet paradoxically we must conclude no such coupling exists either - the vertical flow is either 100% coupled to the amplifier structure via the sheet or 0% coupled through the hole, both of which preserve information/energy (in a similar way as PWM avoids dissipative losses by alternating between 0% coupling (allow full conduction in the transistor) and 100% coupling (allow no conduction in the transistor); anything non-ideal leads to finite controller-flow coupling and energy dissipation). [f2] None of the flow is coupled directly to horizontal sheet motion. And this may well require a specially designed flow for said amplifier - just like transistors can't handle an overvoltage, the pipe amplifier here can't handle pressurized flows which would push the sheet horizontally. A successful amplifier thus must be part of a bigger *system*. Still there is no doubt that the controller affects the flow and the flow doesn't affect the controller, so do we have 1-way information transfer? No, as what is transferred is an (arbitrary) relation - when the controller does this, the flow will see more or less of 100% vs 0% coupling, both of which are energetically equivalent, so changing between them takes no energy. There is no elementary information transferred, but to us who can observe the flow and judge its different *potential* paths for evolution (ie compare how the 0% coupling/full flow and 100% coupling/no flow cases would look far in the future) we see a clear impact of the controller. The controller picks one among energetically equivalent and potentially infinite paths for time-evolution of the flow. The controller has no feedback from the flow [f3], even though external beings like us can see the effects of the controller on the flow and adjust it accordingly - this is an indirect coupling, not "elementary". For time-evolution of the flow to occur at all, there must be an energy/entropy source active, but again the controller has no idea whether this is the case. This requirement is the reason computers use finite power - combined with non-idealities in the flow resulting in heat generation. As for the flow, it is effectively indifferent to the valve's presence: it evolves in time by whatever path is available while the controller picks the path; the flow just follows the rules of the game, and if the rules change the flow will change - *we* know this is because of the controller's action and the motion of the sheet in the valve - but the flow doesn't know and doesn't care. *We* can tell the flow carries "information" about the valve position because we are observant and have memory, so we get the illusion of 1-way information transfer, even when none occurs. As a physical system, the flow evolves as it always would, and no elementary information is coupled (at the *cost* of requiring some external knowledge about the flow before we observers can use it as high-level "information" - ie the pressure and flow rate corresponding to 0% and 100% respectively - if these change, such as by a pump failing, our amplified readings will be invalid). [f4] So the amplifier's controller doesn't know about the flow [f5]. What happens if we couple the amplifier's flow to the controller?
We will get a distinctly non-linear evolution: depending on what the flow is initially, it may cause oscillations, reach a stable value, or undergo a more complex process. As it now seems the closest thing we have to a "brain" is a CPU [f6] - built from countless transistors connected to each other to form a network - I will claim that such interconnected amplifiers are the basic substance of our own brains. And seeing as our brains support consciousness, I argue that such networks give rise to consciousness and define it. Consciousness is not any single amplifier nor any of the flows, but the resulting "quantum" structure that forms when an "unknown" flow of an amplifier is eventually coupled to its own controller, as in such a case the evolution of the system is on the one hand ill-defined/mysterious and on the other strictly limited to specific states/quantized (admittedly a very poor argument, but hopefully bolstered by subsequent comparisons). In such a system, evolution is decidedly non-linear and discrete possibilities/chaotic systems/attractors can be set up. There can be many external inputs into this system which change the course of evolution in a non-linear and self-affecting way (as opposed to a linear effect: open-loop, once-through, decaying), and such inputs will be considered as qualia "felt" by the conscious system. [f7]
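A hedged sketch in Python of what closing the loop can look like (my own discretization, not the author's equations): the previous flow value is fed back as the controller setting, and with a simple threshold rule the closed system settles into a self-sustained two-state oscillation.

def step(flow: float, supply: float = 10.0, threshold: float = 5.0) -> float:
    """One pass through the closed loop: the previous flow acts as the controller
    setting; a below-threshold flow opens the valve, an above-threshold flow closes it."""
    valve_open = flow < threshold
    return supply if valve_open else 0.0

flow = 2.0
history = []
for _ in range(6):
    flow = step(flow)
    history.append(flow)
print(history)  # [10.0, 0.0, 10.0, 0.0, 10.0, 0.0] -- a self-sustained oscillation
# Reversing the rule (valve_open = flow >= threshold) instead drives this starting
# value to a fixed point at 0.0: the closed loop's behavior depends non-linearly
# on the wiring and on the initial state.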

Thus the qualia felt can be of a wide variety depending on the complexity and structure of the system, yet are limited in amount and extent, and affect the system in different ways. And, for time evolution of the system to occur, finite energy/entropy is required for the amplifiers to operate; inanimate objects at thermal equilibrium are then conclusively non-conscious. [f8] Referring back to the similarity between an amplifier and quantum observables not being observed, perhaps the network of amplifiers becomes in essence a system capable of hosting an "unobservable" (by specific amplifier design, not accidentally) quantum pattern that is a big "elementary particle" which is the unitary consciousness we experience. If an amplifier is written as a=f(a,b), where a is the flow and b is the controller, but then many amplifiers are hooked up such that eventually we reach b=f(a), we have a completely solvable system of equations but nothing to observe or couple to the solution - this "isolated self-interacting" system is then consciousness. We can identify a potential "consciousness" and its boundaries in experiment by observing what changes lead to a non-linear impact as opposed to a linear/classical effect (non-decaying vs decaying). Perhaps we can call control theory "consciousness design" [f9] given the use of self-controlling amplifier systems this entails. Consider two simple systems: one inspired by the mechanical computer in MIT Stata Center:

-----------
|   /o o o|
|    -----|
|  /\     |
| \  /\   |
|\ /\ /\  |
|/\ /\ /\ |
|________O|

Balls fall down and hit a "slider" which changes direction each time, and when the ball gets to the bottom it hits a lever releasing the next ball; and one everyday control system: the toilet flush tank/valve where low water makes the valve open and then close when the tank is full. Are either of these mechanical systems conscious? They both can reach an equilibrium state, and in such a state I believe they are "dead" (ie not conscious), not supportive of a "conscious particle/construct". Yet the ball machine shows a feature I outlined as a requisite: its time-evolution will affect its own time-evolution in the future. The balls act as a source of energy/entropy falling from the same location at the top, and in any case they end up in the same location at the bottom - the perturbation of the balls is linear or less (entropy-generating) so the balls are not part of consciousness (except perhaps when they are already en route to hit the slider and haven't yet hit the "bifurcation point" at the bottom where their trajectories all combine and lead to a single location) just as we don't consider the electrons/ions/hormones that are a critical part of the brain functioning as actually being conscious or "bits of consciousness".
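A minimal sketch of the ball machine in Python (my own simplification; the real trajectories and the release lever are ignored): each slider flips every time a ball passes it and either diverts the ball or passes it on, which makes the machine a binary ripple counter driven by falling balls, and leaves it frozen in the "dead" equilibrium state the moment the next ball is withheld.

def drop_ball(sliders: list) -> None:
    """One ball falls through the chain: each slider it reaches is toggled; a slider
    that toggles into its 'divert' state sends the ball out, ending the cascade."""
    for i in range(len(sliders)):
        sliders[i] = not sliders[i]
        if sliders[i]:      # ball diverted here; deeper sliders untouched
            break

sliders = [False, False, False]
for ball in range(5):
    drop_ball(sliders)
    print(ball + 1, sliders)
# 1 [True, False, False]
# 2 [False, True, False]
# 3 [True, True, False]
# 4 [False, False, True]
# 5 [True, False, True]
# Withhold the next ball and the sliders simply sit in their last state: the machine
# is at equilibrium ("dead") until the energy/entropy source of falling balls resumes.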

Switching brains (or bodies perhaps), while an interesting movie plot, cannot happen because the structure of the brain itself along with memories is what determines the consciousness carried within; if at bedtime consciousness "disappears" or even dies as the loop is opened, the new consciousness that arises in the morning is indistinguishable - I still feel like the "I" of the past was the same "I" as I am now - so the process leading to the rise of consciousness must be consistent and indifferent (so if "I" were re-incarnated as a frog, I would feel and think like a frog, with no memories of being a human. But this is the same ontological debate as saying all electrons are equivalent/indistinguishable, and doesn't really accomplish much. All we can prove is that consciousness depends wholly on memories and brain structure to give any experimentally verifiable data on itself). When the system is not set up with balls as an energy source, any perturbation on the slider position will be standard linear, yet with balls the perturbation will cause a sudden nonlinear flip between one phase or another right at the point the slider crosses the midpoint of its travel range. Such a change from one state of evolution to another is what I believe produces a "qualia", and here perhaps even a transient consciousness that "feels" it. So, the basic unit of consciousness, and the simplest conscious system, is an oscillator - unstable, it alternates between two states indefinitely, with a qualia at each discontinuous transition. In the ball machine, I can hold a ball in place for a long time - throughout this time the rest of the machine is in thermal equilibrium - dead - so the consciousness in this system must be of a transient nature. Same applies for the toilet flusher - it only changes states nonlinearly twice in a flush, with no activity in between, being "transiently conscious", a mode of being that I believe is alien to us - from all I can tell my experiences of wakefulness are continuous, even though it is believable that in sleep the consciousness "shuts down" while the brain reconfigures itself. This continuity cannot be explained simply as a "time delay" of qualia (like an after-image of a bright light) because (1) any such time delay, as well as time experience (speed of time) seen by consciousness must have a physical basis and (2) the afterglow of light is only seen because our receptors are slow vs our ability to process light levels in the eye, but what are we to compare consciousness speed with, except for itself? (As mentioned above, consciousness likely has to supply its own "clock" or time basis). A computer CPU is also a "transient consciousness", at each tick of the external clock quickly configuring itself into a new equilibrium state (this change being non-linear and creating qualia) but then "dying" as the system reaches equilibrium. [f10] In the brain, the "loop" is closed so the system doesn't reach equilibrium, and we experience a continuous life. Yet using this perturbative approach to determine what element of a system can be considered conscious (as the one whose motion/state affects its future state), while effective for the ball machine case (even down to selecting atoms in the slider and associated bearing/pin, vs ball, vs base board) requires an explicit time axis and bidirectional time exchange of information - both of which I refuted to try and make sense of the moment-to-moment time evolution we observe. 
The only way left to define consciousness is then physically based: from a "snapshot" in time it must be possible to distinguish where consciousness can arise and what it will feel. Consider the interaction between a conscious human and other potentially conscious systems (even other humans): as far as we experience, no "joint consciousness" occurs, even with lovers/close friends, and even less so with machines and equipment (such as excavators, computers, cars). While we learn how to effectively interact with other systems, such learning does not change our qualia experiences nor our sense of self. [f11] This is interesting as it places some requirements on what sort of coupling a consciousness requires to exist. Interactions and information transfer through qualia/senses do not join distinct consciousnesses; their coupling must be more intimate, perhaps even on a basic information level, such that the coupling does not "collapse" on being observed like the qualia do.
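A small sketch of the perturbative test applied to the ball machine above (illustrative Python; the positions and the 0.5 midpoint are arbitrary stand-ins), keeping in mind the reservation just stated that this test needs an explicit time axis: a nudge far from the slider's midpoint changes nothing about the future, while the same nudge near the midpoint flips which path the next ball takes.

def ball_path(slider_pos: float) -> str:
    """Which way the next falling ball is routed, as a function of slider position."""
    return "left" if slider_pos < 0.5 else "right"

def nudge_flips_future(slider_pos: float, nudge: float) -> bool:
    """True if a small perturbation changes the ball's future path (a non-linear flip)."""
    return ball_path(slider_pos) != ball_path(slider_pos + nudge)

print(nudge_flips_future(0.10, 0.05))  # False: far from the midpoint the nudge is absorbed
print(nudge_flips_future(0.48, 0.05))  # True: near the midpoint the same nudge flips the outcome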

If consciousness really involves a self-reinforcing control loop, then what happens when we let consciousness interact with another control loop, such as a video game or another person? From all I can feel, it seems that I still experience conscious existence as my localized self; I still feel "inside" my brain and I still feel the same types of sensory inputs (ie interacting with a car uses my known/learned motions of the body and is not "implicit" like moving my arm/leg); only higher-level abstract features of the qualia (ie appearance of the road at a given orientation, vs color 'red' or 'green') are affected by the external control loop. This suggests that consciousness must be coupled directly at least on an electrical level (?), though this seems too arbitrary. Taking our muscles as amplifiers and car controls as more amplifiers which eventually couple through the eyes/motion senses back to the driver, why shouldn't we see this system as a new consciousness? Based on my own experiences there can be 'sections' of the brain switched in/out of conscious experience, such as when falling asleep/waking/on various drugs, so such expanding or contracting of the extent of consciousness must be possible at some level. As a kid, I remember playing many hours on my Gameboy, which was quite addicting and, as my parents say, made me withdrawn/angry when unable to play continuously. Could this be because as children we don't have as strict boundaries on how to impact our consciousness, so we are willing to "become" the game - which is addicting by virtue of being simple, closed (completely defined by known rules), and fun (providing new challenges that are timely and skill-appropriate)? Or perhaps this is an illusion, the addiction stemming from the brain's aversion to boredom (especially in childhood when we crave experience/learning), and as a kid I was willing to invest in the game as an escape. This sounds more realistic/amenable as it doesn't require a mystical expansion of consciousness, and it also explains the popularity/addictiveness of non-interactive shows/cartoons/films. Yet something as simple as a piece of paper (this one!) can complete a feedback loop externally: I write something, and then read it later (or even in real-time) and adjust my thoughts and actions accordingly, from something as simple as keeping the script in a coherent line to something more complex like establishing and elaborating ideas by writing down many complex facets, sequentially. In this sense can even works of art/craft be considered an extension of consciousness? After all, one hears the phrases "that painting has a piece of my soul in it" or "when those photos were lost, I lost some part of myself", and perhaps this isn't entirely a metaphor. There are some things we really cannot remember or think of without a "key", which may well be an object that is a work of art (this is discussed later as the hash table model), and if that object is lost there is a real sense in which those memories are also lost to inaccessibility. Does it make any sense to say that this constitutes a reduction of consciousness? Or that making such works (recordings) is an expansion of consciousness? For now it seems most intuitive to say there are clear boundaries to consciousness and qualia experience, but where are these boundaries and what is their nature? We make scientific instruments and devices that must inevitably connect to us via the e/m realm we experience.
If the data from the instrument comes at the expense of potential other inputs of the senses (ie reading a monitor means that much of the visual field cannot be used as vision per se) then the interaction must be an augmentation rather than an expansion of consciousness, and this is in line with what I feel from introspection. Expansion of consciousness would require keeping all I experience and then experiencing even more on top of it, metaphorically 'adding another dimension', not just modifying the existing ones. Since all our body sensors are already wired into the brain in preset ways, any bodily experience and interaction through these means will not change the scope of conscious experience. But factors like education/maturation, self-reflection, drugs, mental conditioning, and brain implants can indeed modify consciousness, and this remains consistent with introspection.

Earlier I wrote of consciousness as somehow being related to "amplifier loops" where the overall state of the looped system becomes able to evolve in a complex fashion. Then pretty much everything could be seen as conscious: computer chips, molecules, mechanical machines; does such broad applicability make any sense? If I look at something and interact with it, is a new consciousness created for just that moment? I want to try to be more specific about my own consciousness first. I mentioned earlier that electrons/ions in the brain are all transient and thus not part of consciousness. Now I have rethought this point. Consciousness is clearly spatially bounded, so we ought to be able to outline the limits of consciousness. I propose this definition: if the change or absence of a certain spatial element causes a change of conscious experience, then that element must be a part of consciousness. But this is still too broad: it means things that I see all are a part of my experience and thus part of consciousness, so everything in the world (that will affect past/present/future things I see) is also consciousness. Maybe there is some wisdom in this view, but it is too broad for me to accept yet. So I will narrow it to: the elements the change or absence of which makes consciousness impossible (ie "conscious actions"). I will arbitrarily draw a boundary around the head: changing our memories will change our responses, changing stimuli that enter the brain will also change responses, and to keep stimuli from becoming an extension of consciousness I will take them as being outside the brain. Things that we have no capability to see/feel (like neutrinos) then really are not part of consciousness. What about oxygen? Removing the oxygen in the brain would lead to quick death, but moments before death the brain would still be alive and conscious, running down remaining ATP reserves. So oxygen is necessary for not dying, but it is not necessary for consciousness to function. In a similar sense, during a beheading the head remains alive after being chopped off - the spinal cord connections to the rest of the body are useful but not an integral part of consciousness. But the electrons and ions of the brain - even though transient and eventually made to participate in many reactions that change their state - really are integral to consciousness. If they are somehow removed or stopped the consciousness immediately ceases to exist/dies. Similarly integral are the complex links and networks in the brain - if disrupted, the consciousness immediately changes or dies. So what determines what we feel? I believe it is this (genetically and environmentally defined) structure. Children raised in isolation and who do not learn language [as the case of 'Genie'] cannot have a feeling of enjoying a good book, because their brain does not have the proper structure for such feelings due to poor development. Similarly things that every human feels - like sensory inputs (touch, color, sound frequencies, temperature) feel exactly the way they do because our brain structure is genetically designed to support such feelings. Some people have a reduced sensitivity to some stimuli, and it should not be surprising that such differences are *genetic* in origin. 
Conscious systems have feelings corresponding to their complexity and structure: we process lots of information in different formats (video, audio, sensory, motor, taste, odor), which means whatever 'loop' underlies our consciousness must be very complex and able to accept many independent inputs in choosing its course of action/evolution. [f12] The loop structure is encoded genetically, so we can expect that even blind people have spatial memory like people with sight, just no visual inputs into this part of the loop, so they might have random or sporadic "visions" which to them will be a normal occurrence. Similarly, amputees, people born without limbs, or people paralyzed by spinal cord injuries will still have a "body model" in their brain, and in the absence of actual data from the body to fill the space in that model they may feel phantom pain/itching/senses. This is to say, the brain determines what we feel and how we feel it; its structure determines the "spectrum" of feeling we can experience, and this is independent of the body or any "sensors" we have available.

While we mentally represent feelings as properties of objects or of our body (taste of chocolate - originating from the chocolate; pain in the wrist - originating from the wrist), really all feelings "originate" from our brain structure - the feeling of tasty chocolate and our ability to localize it to the tongue comes from a particular arrangement of the brain's amplifier loop (chosen by evolution because the feeling's effects led to increased survival), and similarly the pain in the wrist and our ability to localize it comes from some other aspect of the brain's structure. There are feelings we can never experience, just like there are colors we will never see - some things lie beyond the capabilities of how we are made, and this in turn defines what it is that we do feel, and how it works. So unless there are specific laws I can uncover, there is at present no chance of me ever understanding how a computer chip "feels", just like I can't understand how infrared looks or how ultrasound sounds. This is because I cannot *simulate* a computer chip (maybe?). Still I hope there is some order to this, so I can understand whether the spectrum of feelings is truly unbounded or has well-defined characteristics that allow ordering. An easy first distinction might be painful vs non-painful feelings - it would be comforting to know that our electronics aren't screaming out in pain, though I have a feeling it's a 50/50 chance, as we don't even accept that animals (or sometimes even other humans) can have feelings just like us. But what is pain? It is another feeling, like seeing the color red, except that unlike seeing the color red, it can be localized to our body (instead of the 3D visual world model where colors "exist" and are felt), and it is distinctly unpleasant. Pain changes the operation of the whole brain in trying to find solutions to avoid pain, in a way that seeing a color does not, so pain must be an input that has a big effect on the consciousness loop whereas colors are only a small input that makes local changes.

[...] -> [...] -> [...] -> [...] ->    (consciousness loop schematic)
  ^        ^        ^        ^  \
  |        |        |        |   \
  +--------+--------+--------+    \
  Pain (influences all steps)       Color (influences just one of the steps)

But not every big-effect input is painful. Pleasure and anesthetics can also cause big scale changes in brain operation, but do not cause a painful feeling. I think pain is not a physical necessity, it is just another feeling in the realm of all possible feelings, and nature happened to stumble upon it in the evolutionary process as it helped the organism learn to avoid painful stimuli - a valuable mechanism for optimal adaptation to the external environment, as there are undoubtedly things and actions that an intelligent organism should avoid - like eating itself.
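A toy rendering in Python of the schematic above (purely illustrative; the four-stage "loop", the scaling rule, and the numbers are inventions of mine, not a brain model): one input modulates every stage of the loop while the other perturbs only a single stage, which is the global-vs-local distinction drawn between pain and color.

def run_loop(stages: list, pain: float = 0.0, color: float = 0.0) -> list:
    """One pass through a four-stage loop: 'pain' rescales every stage, while
    'color' only nudges a single stage."""
    new_stages = [s * (1.0 - pain) for s in stages]  # global influence on all steps
    new_stages[2] += color                           # local influence on one step only
    return new_stages

baseline = [1.0, 1.0, 1.0, 1.0]
print(run_loop(baseline, color=0.3))  # [1.0, 1.0, 1.3, 1.0] -- a local change
print(run_loop(baseline, pain=0.5))   # [0.5, 0.5, 0.5, 0.5] -- the whole loop reorganized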

There is still the question of what is it that feels the feelings - just because a brain is structured in a given way, why should we expect that a conscious loop "feels" anything? Who is doing the feeling? I believe the loop itself does the feeling, it "feels itself". Every iteration of the loop gives rise to a feeling that represents and includes all the information that the loop processes/uses to decide on a course of action/evolution. This is fully in agreement with [Dehaene]: conscious feeling occurs only when a large and complex "tree" in the brain gets activated, forming loops and interacting systems. In sleep and anesthesia stimuli decay and no loops are formed. There is no "I" that is doing my feeling for me, rather with every thought I have and at every instance of conscious awareness, my brain forms complex patterns that encode *all* I feel in that time - including a sense of continuity/who I am, a sense of the 3D world around me, a sense of my body position, a sense of what I am thinking right now, a sense of audio/music (actual+mental), a sense of smell/taste/balance/temperature/hunger and everything else I feel. There is an error of speaking of distinct qualia, say of sight vs taste vs hearing, because one then has to ask who feels all these different qualia at once, where is the "I" that serves as a unifying agent? At a given time there is only a single "qualia" defined by the brain structure, and this includes our thoughts and sight and taste and hearing and all we feel in a given moment, including the mystifying question of "where is this I"? The distinction between "I" and "feelings" is an artificial one - truly the "I" is the brain itself but I can never feel what that is like, though I might be able to look at it on a computer screen. Even though I am defined by my brain structure, I can only feel the feelings enabled by my brain structure, and feeling the structure itself is not possible. I remain ignorant of the intricate mechanisms of the brain even though I really am most affected by them. Each waking moment a loop springs into existence in my brain - the loop is designed to remember its past, to process different varieties of input information in different ways, and to react or update some internal information/"thoughts" and it has access to memories and muscle hardware as required for such actions. And when this complex loop springs into existence, it ends up creating a feeling that it itself feels, before once again decaying away. "It feels itself" - the very essence of a loop, that its state is passed to itself. We split feelings into touch/sight/audio and more, but really there is just one big compound "feeling variable" that has our whole body state+memories+thoughts+emotions all in one, and it is this big feeling that we feel at each instance of the loop's activation. This implies the feeling "spectrum" is very complex indeed, yet I still retain hope for order at least on the notion of spatial positioning we can associate with feelings, and generally spectrum-like nature of feelings (yet color doesn't seem to be a spectrum but a cycle in its appearance, so really I'm not sure at this point). 
Maybe there is some clue in the length of words - same across all cultures (though technically we could speak/write continuously, we make natural divisions of information into words and phrases - any continuous stimuli are eventually filtered as noise) and I find that I end up thinking one word (or phrase) at a time - perhaps this is the time scale for a single "feeling loop" to exist and process its information. {this places the loop rate at roughly 10 Hz, a lifetime of ~100 ms per loop - about how fast we can consciously process information} It then passes the info in a highly codified form to the next loop instance to process/understand sentences. Each loop's existence is intended to conclude with a course of action (including mental action). So the loop feels itself as it settles on the thing it should do, given that it feels the way that it feels.

Colin McGinn in his [book on Consciousness] raises an interesting point: are there problems whose solution is beyond our reach? Are there fundamental limitations to what we can solve? I think our intelligence is not just "more smart" than that of monkeys, but has an ability to create machines that solve our problems - for instance Turing machines and math (which makes us the Turing machines). We can make a computer to solve problems far beyond an individual's grasp, and by doing this and defining the problem purposefully, I believe one can say that we do indeed solve the problem. So our mind is able to reach quite far, coupled with language/math to record data and computers to evolve/transform data. Is the reach theoretically infinite, or are there definite hard limitations on what we can solve? There are definitely limitations on what we can process - these are also a side effect of the conscious finite memory construct I mentioned above - we can only feel and think the things that our brain is capable of feeling and thinking by virtue of how complex the consciousness variable is. So, in the moment I can feel vibrant sensations of pain/pleasure/stimulation, but looking back these sensations are dull, abstract - why? It is because in our conscious memory, there is a reserved 'feeling' space for current emotions/feelings, but only an abstract space for reasoning and reflection on the past. The state that defines the conscious loop, the information that it actually represents, makes current feelings stored as patterns that cause feelings, but memories as patterns that cause thoughts or other qualia that are not as intense. Evolution has made it so I do not have control over the feeling state, my brain always sets that to be the current (in the moment) body state, although there is no fundamental reason why I shouldn't be able to control my feelings (ie try to set my own pain level) if my brain allowed this possibility, the same way it allows me to control the words I think of and ideas I choose to focus on/pursue. {Drugs can help alter these abilities temporarily, leading to "spiritual experiences".} So then there is also a limitation in terms of how much I can know about myself. As an illustration:

[Body State][Memories of Self][3D visual space][Audio][Emotions, drives, goals][Abstract thoughts]

Imagine the above is the total information content of the conscious loop - it defines what it is I feel every moment that this information gets processed by my brain circuit. It defines the "I" that feels as well, the existence of that information transiently in neuron firing creates a feeling that feels like that (to itself); [f13] because of the 3D space within that feeling, I imagine that "I" am localized within my brain/body, but really this feeling that feels itself has no intrinsic location, it only feels that way because it itself has the 3D space info arranged in an appropriate pattern that evokes the feeling of localization. Simple looped systems like amplifiers do not have such 3D information, the feelings they generate are very primal urges to change, something I don't really have the ability to understand (I don't think). So, this feeling can feel itself while being processed, but to what extent can it consciously analyze itself? As alluded in the above diagram, to consciously process my own experience I must somehow codify all my real state into the tiny part that represents conscious thoughts:

[Body State][Memories of Self][3D visual space][Audio][Emotions, drives, goals][Abstract thoughts]   (time t0)
 \________________________________________________________________________________________________/
              -> -> -> -> -> -> -> -> ->  [Abstract thoughts]   (time t1)

The left is all I feel in a moment t0. To analyze it at a later time t1 I must represent all that information in the smaller abstract thought space, of necessity losing the ability to fully self-reflect {again, because if I could then I could overcome the universal computing limit}. The moment-to-moment feelings (including abstract thoughts) are the real information content of me, but all I can describe to others or to myself is the tiny part that I manage to get into the abstract reasoning space of my own mental state. Thus we have the inherent inability to explain or codify or analyze or relive feelings - even our own.
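A back-of-the-envelope rendering of this bottleneck in Python (the bit counts are arbitrary placeholders, chosen only to make the pigeonhole point): if the momentary state is much larger than the abstract-thought slot it must be squeezed into, then most distinct experiences are necessarily indistinguishable after the fact.

FULL_STATE_BITS = 1_000_000   # stand-in for body state + 3D space + audio + emotions + ...
ABSTRACT_SLOT_BITS = 100      # stand-in for the words/symbols available to reflection

distinct_experiences = 2 ** FULL_STATE_BITS
distinct_descriptions = 2 ** ABSTRACT_SLOT_BITS
print(distinct_descriptions < distinct_experiences)  # True: most detail cannot be carried over
print(FULL_STATE_BITS - ABSTRACT_SLOT_BITS)          # 999900 bits that each act of recall must drop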

What would it mean to be a self that is made of a "self-feeling" brain pattern? This means that the sense of permanence that I feel - that I continue to be me - is illusory. The feeling of me being me, of having a permanent thing called "me" that gets modified and takes actions, is a consequence of the brain pattern structure. This structure always includes a feeling of my present body state, mental state, as well as personality and interests and mood and preferences and historical experience and beliefs - ways in which I define myself and the reason I feel I am now the same (or similar) person as I was when beginning to write this - not just the same physical situation but the same conscious/emotional being just at a later time. In reality there is no solid "me", it is a transient feeling that takes energy to keep re-creating and which gets slowly modified by the environment in small ways I don't even really notice from moment to moment (because in between these moments I don't exist - I first feel myself as "A" and next feel myself as "A+B" and unless I have some way of externally finding the B - for example by writing down my thoughts and experiences and comparing at a later time - I won't have any way to measure the B and certainly won't feel the change {this is why animals, without language, cannot be self-aware in the sense of understanding their own actions and growth}). There's no particular theoretical issue with this, though it is certainly non-intuitive (earlier I wrote that "transient life" must be utterly different from us experience-wise simply because I feel like there is a definite solid "me" that retains itself from moment to moment, and other body feelings are inputs to this "me" who has the task of feeling them and responding). But then the question arises: why is it that I feel as myself, no more or less? Why is it that my short term memory is limited to maybe a phone number's worth of info? Why is it that I don't control my heart rate? Why can't I feel the 'experience' of a muscle, or the constant motion of my eyeballs, or the complex visual processing that my brain does subconsciously? There must be some hard physical limits that define what a consciousness is, that determine its boundaries of existence in space and time. I will focus on spatial boundaries first: it seems logical to say consciousness is within the brain, but it certainly isn't the whole brain, even though it is electrically linked to (more or less) the whole brain. Information theory says info exchanges are bidirectional and coupling, but surely there must be some special exchanges that serve to separate the conscious experience from the rest of the brain, body, and world. Not all exchanges are made equal, apparently, for otherwise I should argue that the whole world is part of my conscious being for I am able to see and interact with it. And maybe there is merit in this, but I certainly don't feel another's pain or have the experience of being any object other than myself - I interact with others mostly by sight and muscle activation while I myself can feel much more vibrant experiences that are private to me and inexplicable to others except in the same abstract language that I use to represent interactions with external systems. If I controlled a robot, Avatar style, the feelings from interacting with the world through the robot would be my own, not the robot's. Would the robot have its own feelings/experience? 
I don't know, but I certainly can't feel it, because it is solidly decoupled from my vivid conscious qualia experience - even though we interact, this interaction does not create an increase in my conscious experience, at least it doesn't seem very vibrant or vivid. My experience is limited in extent - like a fixed-length byte array - with space for visual, audio, smell, taste, and bodily in-the-moment feelings, then all my personality traits/memories/preferences that keep me feeling like I am still myself, then only a tiny space left over for abstract thoughts like words/numbers/actions that don't really feel very vivid but are still qualia that are easily differentiated yet neutral/bland (perhaps necessarily so, in order to allow rational processing of different arbitrary symbols without the qualia's experiential quality causing undue influence). As I write this I have the experience of the next word/phrase coming to fruition within my mind, a decidedly neutral qualia similar to remembering something someone said to me/mentally talking to myself. Neutral, but still a distinctive qualia of "hearing" the word in my mind. Then I get some qualia of my hand's movements and the appearance of the ink after my pen, but I have no feeling or qualia as to how exactly the muscles in my hand move (or should move) - even the connections that keep me writing on the same line using optical input from my eyes happen automatically without an associated qualia (or only a very vague one of continuing to write on the same line - I can choose to actively deviate from this if I want). If I had to consciously control a hand model with strings to pull to move "muscles", I would never be able to recreate my writing motions, because even though I write regularly all the actual motions of writing really happen automatically, with maybe some vague and completely bland/neutral, barely perceptible qualia of "I am currently writing this word/letter" that co-exists in my mind along with the actual writing, but *never* the individual muscle motions or extents of force required, the latter being automatically determined with regular feedback by my brain but with literally no awareness by me. And yet kids must learn writing consciously, so there must be mechanisms by which repeated actions become engrained to the point where they can be repeated *without* conscious qualia experienced. Thus in the process of learning, conscious qualia-felt actions shift over to unconscious, qualia-unfelt actions which can nonetheless be called up by virtue of other inexplicable but very bland/neutral yet distinguishable qualia - for instance the will for me to move my leg up is a sort of experience that doesn't actively feel like anything and yet is definitely different from the will to move my arm up (of course, because otherwise I couldn't control them independently) - so they are different, and yet I can't describe how; they just feel different even though both don't really feel like anything at all (compared to, say, visual qualia or pain or pleasure, these 'wills' are neutral but localized to the body). Similarly I can consciously will my eyelids to close, and if I do my visual experience becomes muted. I argued earlier that the visual experience is already highly processed - when tired I misinterpret things, generally lowering complexity/seeing more familiar shapes - but it cannot be wholly processed, for then we would never be able to see/learn new things.
I believe it is processed to stabilize/focus automatically (then again, maybe this is early "training" in effect, but no doubt some features like edge detection and tracking are "hardwired" even in babies), and this is what we see in the visual experience variable, with an interpretation layer (ie tree there, sign there, building/person/dog) overlaid by unconscious but trained/trainable mechanisms, which find unfamiliar objects and make us wary of them so we can learn more - we cannot be "blind" to new objects, for then we could never learn. This interpretation is another variable feeling under the visual field, but certainly much less vibrant and more abstract/neutral while still having a spatial arrangement. When I close my eyes, I don't see an empty room or feel floating in empty space, although that is theoretically what I "ought" to see. I still have a virtual model of where I am in my room, and this qualia is combined with visual input but still exists in the absence thereof, separately and abstractly. I have an expectation of what I will see if I open my eyes again, and I will be surprised/scared if the expectation is not correct. The absence of visual qualia when my eyes are closed is not like a spectrum flattening, but more like a collapse to a point - I don't visualize a blank space surrounding me, but just feel like I'm not seeing anything. I can imagine the room around me, which uses similar visual qualia but in an extremely minimal/limited/bland way, yet different from the qualia of what I expect to see when I open my eyes again. And still, actually seeing the room with my eyes is the "real thing"; imagining it is not; seeing a photo is kind of close; seeing a full 360-deg projection is really close, almost indistinguishable. So what it takes to achieve that visual qualia/experience is optical/photon input of the right type. We can control this aspect of our actual experience by external photon input. Same with audio, and really all other senses, though they prove more challenging due to their spatial distribution; vision is the one I can affect most simply, by closing my eyes.

And here we have a little loop - I close my eyes, this makes my visual experience change, this change affects whether I open my eyes again. All of these proceed in an "amplified" way, so is this a consciousness? Have I sprung up/fragmented a separate conscious being from my own conscious experience? I can choose to close my eyes when I see light and open them when I don't, creating the oscillator - the simplest consciousness if my earlier writings are to be believed. But I don't *feel* like an oscillator is doing this - I feel like "me" but undertaking arbitrary actions, one after another, step by step but not at once as a whole. There is no unique qualia when I do this, it is just a completely predictable variation on things I have already felt before and can expect. Maybe if I do this enough, the feat will become automatic, like seeing upside-down, and soon enough I won't even notice the familiar qualia that I notice explicitly now. Yet I don't feel like I've expanded or changed my conscious experience by creating this loop, so maybe consciousness isn't as simple as a loop - again if it were then it should extend to the whole world but it is firmly localized in my brain. The feelings it gets are made by other parts of the brain which process data from the sensors (eyes, ears), and yet the feelings are *not* the actual data processing but very specifically limited to the actual data itself. How? The conscious experience must be bound by special "isolating" links/exchanges, which can transfer information but *not* qualia. Thus qualia are spatially bound even while information can be globally exchanged. This is not a problem theoretically - even information can be locked/localized indefinitely with the proper barriers (ie nuclear decay), the crucial point here is that consciousness takes in and puts out information, while keeping the qualia strictly localized but yet able to interact with the information when the information is spatially available. The information itself (ie photons, air vibrations, electrical impulses) has little relevance to consciousness and incites qualia only through special processing and change into localized neuronal firing, and even this information is not necessarily or strictly related to the qualia that the consciousness ends up feeling. Where is the boundary between qualia and information, between felt and unfelt, experience and absence thereof? [Dehaene] explored the temporal bound, but there must be a spatial one as well.

To try and approach this, consider the following: my consciousness does not distinguish (in the ideal case) between 'actually' seeing something and having a super high quality hologram/projection beamed into my eye - the qualia and experience are the same, because the photon fields are the same. Therefore we are still at an information level and not a qualia level. Next, going to the optical nerve from eye to brain, I imagine I could make a device that perfectly simulates the eye signals, so this is still information. Next there is some visual processing/shape recognition that happens unconsciously and sends signals all over the brain. An appropriate link into this section can still reliably re-create qualia, so it is still information. When does it become qualia, when is this barrier penetrated? I argue this will be when the same signal does not produce the same qualia, ie when bidirectional exchange actively changes the signal - at this point we are in the realm of consciousness. If we continue to recreate these states, we won't be just making an illusion of a vision for consciousness - we will be modifying the consciousness itself, making it relive the same moment over and over or otherwise significantly changing the nature of its qualia (psychedelics). Experimentally, this should be seen as a noticeable and significant/massive increase in the information input required (compared to the photon field) in order to recreate the same qualia - some of that information will then include other conscious qualia. In terms of information theory this actually makes sense - a 1-1 mapping from photon field to optical nerve signal (for instance) means an "infinite" or "one-way" coupling exists along the way, and so long as we follow one-way couplings we cannot be in a consciousness "loop", which must use bidirectional exchanges. And what better boundary for qualia than an infinite coupling - as shown earlier this really *decouples* information while still letting one system affect the evolution of another, so this is perfect to keep qualia contained. Consider:

[Photon field] (1to1 map) -> [Eyeball receptors] (1to1 map) -> [Optic nerve] -> [unconscious processing] ->
--------------------------------------------------------------------------------------------
| [Desires] <-> [Memories] <-> [Consciousness Loop] <-> [Processing] <-> [Other responses] | conscious/qualia barrier, exchanges are not 1to1 but affected by system state
--------------------------------------------------------------------------------------------
 -> [Muscle outputs]

I imagine it like the moving walls in a wave pool [https://www.youtube.com/watch?v=WffR6HrEqTA]. Within normal operating conditions, the walls are programmed to move effectively independently of what the water is doing. I can move the walls by hand or by mechanical means and achieve the same effect, so this is still a 1to1 mapping. But the water motion itself becomes very complicated even right next to the wall, because it depends not only on the wall motion but also on other walls' earlier motions, on the present motion of the water, on the motion/level of water nearby, and probably more. Recreating that state is much more difficult than just moving a single wall. This fluid, oscillating, interacting, dynamic state is then the consciousness hidden behind the "one-way" barrier, and able to act on "one-way" barriers of its own to output actions. A critic will say: if the motion of the pool here is complex and affected by other walls, then one could argue that the processing of one half of the eye's receptors is also affected by the other half in a complex way, so this should also be conscious - yet while the processing does require both halves, it still happens unconsciously. Why? It must be because the "main" and "big" consciousness representing "me" sits behind a few more one-way barriers than this level of visual processing and thus does not experience it directly. It is only this big consciousness that has memories of itself and use of language and control over the body, so it can only describe and feel what is accessible to it via direct bidirectional exchanges. A similarly tricky issue with this picture is the "little consciousness" question I raised earlier - I can look left or right, and this conscious move leads to a direct 1-1 change in my visual experience. So I have made a loop of one-way connections - does this constitute a new consciousness?

--------------------------------------------------------------------------------------------
| [Desires] <-> [Memories] <-> [Consciousness Loop] <-> [Processing] <-> [Other responses] | Big consciousness (self) behind a barrier
--------------------------------------------------------------------------------------------
^ -> [Move eyes] -> [See new object (photon field)] -> [Photon processing] -> [Visual processing] -> ^   is this a little consciousness loop?

Again I certainly don't feel like I've expanded my conscious experience nor created a new consciousness, and I don't feel new qualia either. So I am tempted to say no little consciousness exists. Yet if it did exist independently it would be far from self-aware and thus never able to explain its existence to me. Still, unlike my brain which keeps cycling/creating conscious states, the little loop is not a physical inevitability like a true clock or oscillator: I can decide to stop or slow down at any point, and I form an integral part of the loop, so maybe there is no true loop here. But overall I don't have a good answer yet. It certainly seems likely that systems that have either a clock or memory are conscious ones, because their response to inputs is not a 1-1 function of the inputs but also a function of the system state, just as I argued would be the mark of the consciousness boundary in the human brain. Are all little loops conscious and forever separated by infinite couplings? This I cannot answer yet; one possible alternative is that consciousness *requires* non-infinite (ie 50%) or 'true' bidirectional couplings all along the loop. This differs from my initial conception but might be a necessary distinction, and one that could also be tested by brain studies/fMRI. [f14]

Generally systems can "fail" - parts break and the system loses its stable state of operation. I can build safeguards such as a "watchdog" to check that everything is working according to plan and to counteract potential failures, or at least avoid a crash. But the safeguard itself can fail - and if *it* fails while the underlying system is still fine, a crash can result. So let's say we have:

1 - no safeguards
[System Running] --{99%}-> [OK!] -> ...
				 --{ 1%}-> [Failure] -> [Crash!]
2 - with safeguards
[System Running] --{98%}-> [OK!] -> ...
				 --{ 1%}-> [Failure] --{97%}-> [Safeguard works] -> ...
									 --{ 3%}-> [Safeguard fails] -> [Crash!]
				 --{ 1%}-> [Safeguard erroneously arms] -> [Crash!]
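
As a quick check of the arithmetic implied by these trees (a minimal sketch in Python; the percentages are the illustrative ones from the diagram above, not measured failure rates):

# Crash probability implied by the two trees above, using the
# illustrative percentages from the diagram (not real failure data).

# 1 - no safeguards: the only crash path is a plain failure.
p_crash_bare = 0.01

# 2 - with safeguards: a crash needs either
#   (a) a failure AND the safeguard failing to catch it, or
#   (b) the safeguard arming erroneously on its own.
p_failure         = 0.01
p_safeguard_fails = 0.03   # given that a failure occurred
p_erroneous_arm   = 0.01

p_crash_guarded = p_failure * p_safeguard_fails + p_erroneous_arm

print(f"no safeguard:   {p_crash_bare:.4%}")     # 1.0000%
print(f"with safeguard: {p_crash_guarded:.4%}")  # 1.0300%

With these made-up numbers the watchdog actually raises the overall crash probability slightly (1.03% vs 1%): the safeguard only pays off if its own failure and false-arming rates are small compared to the failures it actually prevents.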

So above, with safeguards there are now 2 modes of failure instead of just one. Have we made anything safer with this addition, ultimately? How do we deal with this in real systems? Take something like cars - thousands pass by me each day and not one has any component fail in a way that causes it to run me over. This is not to say that nothing fails, but rather that there is no single catastrophic point of failure. This is done by splitting the system into isolated/low-interaction segments so that the fault of one segment has only limited influence on the rest. Whatever happens to a windshield wiper motor, the car engine will still run fine, because the two are quite well isolated (though of course not infinitely). This is the same concept I think applies to consciousness: as I mentioned before, strict isolating/infinite connections define the bounds of consciousness, and within these bounds a fully interacting structure is created - a structure whose nature and goal is the solution of a complex optimization problem consisting of all the data making up our qualia/conscious experience. The experience is moment by moment (as continuous-time experience again poses Zeno's paradox and the question of why we experience time at the rate that we do), and in each moment what we feel is the complex intertwined structure formed inside the brain as it solves a new optimization problem. Drugs may change the nature of some brain interactions, potentially affecting the boundaries themselves - the nominally "infinite" couplings - thus temporarily expanding or changing the conscious experience itself.

I continue with the viewpoint of conscious experience as the result of a loop-like system within the brain creating a "feeling that feels itself". The idea of a loop is central to [Hofstadter's "I Am a Strange Loop"] and also gets nominal support in [Dehaene's] MRI studies, which show back-and-forth information exchange between higher brain regions and sensory regions when a stimulus is conscious vs when it is not. Consciousness is bounded not by atoms but by infinite couplings (I propose). Within the infinite couplings of sensory inputs, our brain is set up to solve a huge optimization problem with a tremendous system of interrelated feedbacks, and a single step of this optimization, when carried out, creates all the feelings associated with my "self" - including seeing what I see, thinking what I think, remembering my past life experience, and keeping in mind my values/goals - all this factors into the optimization problem my brain constantly keeps solving. And note my brain can be reconfigured to solve different problems, which actually changes the nature of my conscious experience: when I play a sport I set up my brain for rapid motion tracking and muscle response with minimal need for complex thoughts or distant memories, whereas for writing I am quite the opposite. I cannot be both at once; I am a different person in the two cases because my brain literally works differently - the infinite couplings bounding consciousness can shift over time and change. This can also happen with emotions and drugs - literally changing what I consider "me", at least for that instant in time. [f15] So the "self" that sits at the brain's controls and to whose stable nature qualia happen is an illusion - there is no "self"; the qualia feels itself by virtue of its complexity, and this qualia can change not just over time as experiences are gained but also in response to what it expects (ie writing vs sports). It is continuously recreated as long as I am awake, and only the consistent memory of my past thoughts and experiences keeps the illusion of "self" alive.

I wrote earlier of the yellow line on the road - an arbitrary feature that, thanks to human action and cleverly designed infinite couplings, can control huge powerful vehicles without ever touching them. A common sentiment is "pedestrians have the right of way according to the laws of man, but cars have the right of way according to the laws of physics", and this sounds forceful, but then I came to the realization - we are physical beings, and just as our feelings are physics phenomena, so our actions are also guided by physical laws. So the distinction between laws of man and laws of physics is an artificial one - it is all physics; the reason humans follow human laws is their want for a good life and want to avoid punishment, and both of those wants are inherently physical phenomena like light oscillations or water waves. If we make a robot that drives, we will have to instill into it the appropriate wants by complex programming and wiring/hardware, and then we will have a conscious system that not just is capable of driving, but wants to drive. For how could it be otherwise? Without a want that defines its own satisfaction as an inherent good, what reason is there for any physical system to do anything? An electron flows from one place to another - why? We can say it follows the electric field and write all sorts of formulas to describe its path, but we would only get a better idea of how; the formulas remain static and lifeless symbols, and the why is mysterious, inexplicable - we only know the field exists and is real by observing its consistent actions and effects on electrons, just like we only know other people are conscious by observing their consistent actions and comparing them to what we experience - the experience itself being inexplicable in any of our means of communication and perhaps physically impossible to communicate. The robot that I programmed to drive - why should it drive? It could just as well not do anything. Energy conservation does not help us here - all it says is that infinitely many states with a different net energy are not allowed, but there still remain infinitely many states of the same energy that are allowed, and why should any system want to go from one to another? I must introduce the notion of energy (information) flow, an inherently dynamic phenomenon (time-based, not divisible into separate moments {perhaps even the basis of time itself}), to continue, and then it becomes clear that I've wired the robot in very complex ways such that its current energy state can only change in very specific ways and not arbitrarily. The most basic constraint is insulation on the wires: this prevents the electrons from immediately getting a dissipative outlet as heat, forcing them to flow through the microchip and logic gates. But the "forcing" is misleading: they are not forced (for this is not possible, like pushing a string or pulling from a vacuum); rather, possibilities are removed so that the remaining ones are all ones that are useful to me for calculations/satisfying my programming. [f16]

And I can measure the current/present state of this robot and record it. So now I have a point where the robot is, and I have the possible paths that go from this point outward into the infinite configuration space. Which way to go? And why go anywhere at all? It may be that physical conservation laws limit us to exactly one path through this point, and then we still have the question of which way to go along it. [f17] Entropy claims that systems will go towards a more probable/more common state, but the system's state being a point, how should the system know which direction leads to a more probable state? And more probable by what metric? Determinism, if accepted fully, gives no answer here - in determinism one state leads exactly to the next with no hesitation, as a logical certainty - determinism doesn't care whether the progression leads to low or high entropy, so it cannot answer the arrow of time. It also cannot answer why we should have any experience of "the present" in a fully solved universe. Determinism is appealing, yet in the lab I find entropic laws to hold. How can a system move from one state to another except by perturbation? And how could the system know which moves lead to more probable/spread/dissipated energy states except by doing a local optimization and itself picking the most-entropy-generating direction moment to moment? And could it be that such an optimization - just like in consciousness - results in feelings, the actions that increase entropy feeling inherently good and those that decrease it feeling inherently bad? This would be a testable hypothesis: a brain experiencing pain will use more energy than a brain experiencing pleasure, in the former case up-converting energy into "brain waves", in the latter doing the opposite. Physicists say entropy must always increase - could it sometimes decrease? Could the suffering of learning be a necessary price of structuring the brain, reducing its internal entropy and turning it into an improbable state? So, when I program a robot to drive inside the yellow line, the yellow line becomes a physical law. By virtue of controlling a conscious system's operation, I am able to "design" my own laws of physics - like the law of the yellow line. This physical certainty with which the yellow line is treated somehow appears more obvious in the robot, but it is just as true in the human - the difference being that the human follows even more complex laws which may result in the yellow line being crossed from time to time. The yellow line itself is not a law, but given the correct vision+processing circuits+hardware system, it becomes law. As far as feelings go, note the distinction: the yellow line is not "good" or "bad"; rather, the robot is wired such that staying inside it is felt as a good thing. And we as humans all seek good feelings for their inherent goodness; no one would voluntarily undertake a painful experience unless there was a benefit for them, a good feeling to counter it: perversely similar to entropy increase being required to be positive overall even if it is locally negative. 
And yet not the same: the brain's feeling good or bad depends on how the brain is designed, not necessarily on external actions; ie feeling good does not mean I am actively increasing entropy with my actions (setting everything on fire won't magically make me feel good, just as atoms don't spontaneously transform into the most stable isotopes), but rather that my brain's machinery is effectively increasing entropy inside the brain, and this could be arbitrarily associated with any action of the body. Same with pain: it is perverse that in torture, it is my own brain's mechanisms that create the feeling of pain that I experience, and this is a central argument of anti-natalism. Yet what this argument does not cover is that other people/animals may well have brains wired to experience more positive feelings about life overall, and those brains will not see life as mostly suffering; even under the same external circumstances they would get more pleasure out of life. Then is it any wonder that they will procreate+survive? Evolution will select for those who can't help but stay alive, and in the realm of feelings that means life is more or less pleasurable and dying is more or less painful - because the brain+body are wired to feel this way (by controlling *internal* entropy generation of the conscious part of the system) when these things happen. So even at the base physical level we are driven by feelings, and we all selfishly seek to experience good feelings for ourselves - even in altruism and martyrdom. Feelings are the real driving force - a physical one.

Reading a book on internet protocols, I notice many incorporate a checksum to verify the integrity of the message. But what does the checksum do? If it is invalid, the message is discarded. But an error could happen to the message or to the checksum or to both! And if it happens to both, or to the message in a specific way, the error will be undetectable. Consider a simple message of 5 digits: 13192, then a checksum of the digit sum mod 10: 1+3+1+9+2=16, and 16 mod 10 = 6, so we append the checksum and transmit 131926. What can the receiver expect? Without a checksum, it will expect any of 10^5 possible messages. With the checksum the transmitted string has 10^6 possible values, but since the last digit is fully determined by the first five, only 10^5 of them are valid - so a random string passes verification only 10% of the time. Let's simplify further for easy processing: a one-digit message with the same checksum rule: 77; if the two digits are equal then accept the message. Then we have 10 acceptable messages (00, 11, 22...99) and 90 unacceptable messages (01, 02...98). An effective checksum must function such that a change from one acceptable message to another acceptable message is highly unlikely - otherwise errors won't be effectively detected. With the 5-digit message, we have 1*10^5 acceptable and 9*10^5 unacceptable strings. So, instead of transmitting 6 digits of information, we reduce the information rate to 5 digits and in turn get a 90% chance that a random alteration will be detected (only 10% of random strings pass the check). We change from a full 6-digit configuration space to a constrained one, still with 6 digits but now only specific states (10% of the previous total) are allowed. The 10% set is useful to us, the 90% are discarded, and knowing everyone will send a 10% state, if we receive a 90% state we can assume some communication error. This is like the noise elimination inherent in digital (binary) vs analog circuitry due to discretization into far-apart states, but taken to the next level of sequences of binary data: now only specific sequences are valid, these sequences are far apart from each other from the view of random perturbations, and anything in between the valid states is discarded as noise (or corrected, when it is possible to find the closest valid state). Because of entropy spreading, a random effect on a message will be more likely to put it into the 90% than the 10% category (equipartition: if no states are preferred, all are equally probable), so we have a chance to catch transmission errors. If the effects are not random (ie impulses, 60Hz noise), the checksum algorithm may need to be tailored to be effective against those effects. We have the same sort of configuration space reduction in electronics and machines and possibly in conscious optimization/operation as well. Imagine the microchip as starting from a solid block: in it, holes are made and thus all possible electron configurations are reduced to those useful to us, through spatial limiting. Transistors on the chip further limit or expand spatial pathways so the electronic configuration inside the chip changes in a useful (to us) way. The more complex the constraints (the more intricate the checksum algorithm), the more complex the changes required to shift from one allowed state to another. Here I believe consciousness comes into play: it arises from the optimization process that is constrained by physical requirements (energy finding ways to dissipate along specific allowed pathways). Can consciousness exist in any form? 
If I make a robot that looks and acts exactly like a person (including claiming to have feelings), should I be certain that it indeed has feelings similar to what I experience? I suppose that's a bit tautological: is it even possible to design a human-like consciousness with electronics? We say "I can program a robot to act perfectly like a human" - but is there any evidence that this should even be possible? Matching all the complexity of the human brain while still looking like a human brain might well only be possible using the very mechanisms of the brain itself - DNA/cells/bio stuff; an electronic brain would be entirely different in operation no matter how much we program it (or it programs itself), so maybe the feelings it has, or claims to have, are utterly different from ours, something we can never experience or understand. There is then a physical identity to what it is to be human, one that can only happen in the human brain - this is not to say that only humans are conscious, but that a (say) detailed simulation of the brain and all its atoms and electrons would actually have a different life experience - but then again, how can we tell what others' life experiences are like except by asking them? This simulated brain will swear it feels fully human. [f18] Maybe it is impossible to simulate a brain like this - maybe the entropic optimization has to take place in a "real" system interconnected appropriately to allow this optimization, and any system which does this really becomes conscious (by virtue of having the same connections), not merely a simulation. Maybe the human brain's optimization steps cannot all be encoded in a computer but must take place in chemical reactions, live. Otherwise, I have to concede that any system that does this optimization - whether biological, computer, electronic chip, or even hydraulics+levers - must have a similar range of vivid qualia to what I feel, a unifying life force as it were, because there is no reason to suppose a difference when all observable physical effects are the same.
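
As an aside, the checksum arithmetic from a few paragraphs back can be made concrete with a short sketch (plain Python; the mod-10 digit-sum scheme is the toy one from the example above, not any real protocol's checksum):

import random

def add_checksum(msg: str) -> str:
    # Append the mod-10 digit-sum checksum, as in the 13192 -> 131926 example.
    return msg + str(sum(int(d) for d in msg) % 10)

def is_valid(packet: str) -> bool:
    # Accept only packets whose last digit equals the digit sum (mod 10) of the rest.
    return packet[-1] == str(sum(int(d) for d in packet[:-1]) % 10)

packet = add_checksum("13192")          # -> "131926"
assert is_valid(packet)

# Equipartition limit of a scrambling error: replace the packet with a uniformly
# random 6-digit string and see how often it still passes the check.
random.seed(0)
trials = 100_000
passed = sum(is_valid("".join(random.choice("0123456789") for _ in range(6)))
             for _ in range(trials))
print(passed / trials)                  # ~0.1: only ~10% of random strings slip through

So roughly 90% of completely scrambled packets get caught, matching the 10%/90% split of acceptable vs unacceptable states described above.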

So what is the self? One thing I can definitely say is that the self will avoid pain. What pain? Pain that makes the self feel bad. There is no pain if there is no self; the pain is not an object imposed upon the self the way handcuffs are put on a person - if there is no self there can be no pain just sitting there, waiting. Pain requires self; it is a part of self that can be made active or inactive by external factors (like nerves on the skin). So, the self that exists/arises in a brain configured in a "pain" state will do anything in its power to attain a non-pain state, and to not return to the pain state in the future. If there were no pain, there would be no reason for the self to avoid or do anything to deal with the pain - no driving force to keep the physical body from harm+death. The pain state of the brain is an inherently bad state, it is physically repulsive, it is to be avoided just because of its own nature. Pain isn't very practical without memory to learn to avoid it, but I doubt pain requires memory to be active. All I can say objectively is that a self in a pain state will try its best either to take actions to exit the pain state or (as in babies who don't yet know any actions) to learn what does and what doesn't cause pain. Why the vivid and unpleasant experience of pain? Because if it weren't, we wouldn't take it seriously amongst other optimization concerns and would break our own bodies out of curiosity/carelessness. It must be serious and unmoved by our words/pleas so that it plays its role of forcing us to learn to stay alive. But there is a snag here if the world is truly deterministic. If all my actions can be fully described by atomic collisions (or whatever level of theory desired), evolution will still work, the strong and fit will still survive, and those organisms that procreate effectively will be selected for - abstract physical configurations that act in given ways so as to procreate. Very amazing, but there is no mention of feelings. There is no *need* for feelings in a deterministic world because everything is already determined. Feelings are extravagant, superfluous, a useless pattern imposed on a fully robust and self-standing structure: but I believe physics has no room for such pointless extravagance; I would claim physics exhibits elegance: what is necessary is stored once, and only what is necessary is stored. Then, feelings as physical phenomena (since we are physical beings) must have a real physical purpose: whatever our brain does, it could not do it without feelings being involved and required for that action. And if it did not have feelings, it would not be a brain at all, in its function or structure or effects on the world. An anesthetized person, a sleeping person, is a stark contrast to the awake, feeling person. One is not possible without the other. Are feelings localized? I can experimentally verify they are somewhere in my head, because as soon as some anesthetic chemicals reach the brain my feelings change or disappear. Yet a numbing agent can be injected into the skin, and I will lose that local feeling. Even further, my surroundings can be changed and I will have a different feeling of what it is I see.

So, as before, I argued that the conscious mechanisms are not reached until one is deep in the brain and behind numerous infinite couplings, and it is very tempting to say the same thing about feelings. But the argument doesn't hold directly - if I see the world and something in my sight changes to the extent that I consciously notice it, my feelings actually change. Thus if feelings really are to be taken as physical phenomena, how can I say whether feelings are localized to the brain? Two arguments: first, anesthetics acting on the brain produce the most concentrated effects on feelings we have found, while numbing the skin has less and less effect the farther away we go - so the brain is at the "center" of a set of extending connections reaching out into the physical world, and if we are to claim any localization point, a center would surely make a good choice. Second, as I argued earlier, our feelings (and *possible* feelings) are determined by our brain structure, not by our sensory organs or anything beyond. So "red" is not a property of a photon, nor of the eye cell sensitive to it, nor of the optical nerve, but rather a result of the way the brain processes and stores that specific sensory input signal/pulse. It is a property of the structure of the brain. The infinite couplings along the way relate everything to everything else, but if they did not bound feelings, we would get a basically infinitely smeared/extended and intertwined set of feelings, selves, and optimizations that would be excessive even given the computing power of the universe - not to mention that I don't feel like a "doubled consciousness" when with another person, nor do I feel others' pain: there is a real localization in effect. So, feelings are localized to the brain (to the conscious part specifically): what do they look like to an external observer (like fMRI or EEG)? Could I draw a schematic of where this feeling is and what it is? Since I have claimed feelings come with energy spreading/entropy increase, consider a simple system:

[Gas 1] |Barrier| [    Gas 2     ]

In this system, gas 1 and gas 2 are at different pressures and a movable barrier sits between them. When the barrier is released by an infinitely-coupled external system (which also did the initial charging), the barrier moves, oscillates, and eventually stops at the equilibrium position, its kinetic energy being dissipated as heat as the oscillation damps out. This system dissipated energy: did it thus experience a feeling? If it did, I would claim all the atoms of gas 1 + gas 2 + barrier were involved in this feeling; it could be localized to the system above. If this is the case, then entropy increase is the sure sign of feeling: anything warm has feelings, and good feelings arise when the system moves toward equilibrium while bad feelings arise when the system is driven away from equilibrium. {neutral feelings, like a color or thought, do not affect equilibrium either way} Then feelings are conserved in a way: not in the obvious sense that I feel better means you feel worse, but on a physical level between cells/atoms/infinitely-coupled systems. Properly coupled systems have their own feelings, and these feelings are conserved with their neighboring infinitely-coupled systems. Ie if my brain feels pain, the nerve receptors actually feel pleasure at being able to dissipate their energy. The sun, and fusion reactors, then have very intense feelings.
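
A rough numerical sketch of this piston-between-two-gases system (my own toy model: isothermal ideal gases on each side, a barrier with mass and a linear damping term standing in for the dissipation into heat; all numbers are arbitrary):

# Movable barrier in a cylinder of length L: gas 1 fills [0, x], gas 2 fills [x, L].
# Isothermal ideal gases, so pressure ~ amount/volume; damping converts barrier motion to "heat".
m, c, dt = 1.0, 0.5, 1e-3        # barrier mass, damping coefficient, time step
L, N1, N2 = 1.0, 2.0, 1.0        # cylinder length, gas "amounts" (nkT per unit area)
x, v, heat = 0.5, 0.0, 0.0       # barrier released in the middle, at rest

for _ in range(200_000):
    p1, p2 = N1 / x, N2 / (L - x)
    a = (p1 - p2 - c * v) / m    # pressure imbalance minus damping, per unit mass
    v += a * dt
    x += v * dt
    heat += c * v * v * dt       # energy handed over to "heat" this step

print(f"final barrier position: {x:.3f}")    # settles near N1/(N1+N2) = 0.667, where p1 = p2
print(f"dissipated heat:        {heat:.3f}") # ~0.17: the work released by equalizing pressures

The barrier overshoots, oscillates, and damps out exactly as described; whether that dissipation constitutes a "feeling" is of course the open question, not something the simulation can answer.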

I found an article on the [Integrated Information Theory] of consciousness, and it seems that this theory is much in agreement with my conclusions regarding the nature and extent of consciousness. It features prominently the requirement that a consciousness be a self-interacting system: one conclusion is that such systems have intrinsic memory, another is that inactive (but *capable* of activity) elements contribute as much as active ones - leading to a quantum-like entanglement-of-probabilities picture. A concern I have with this theory is that it is applied to logic gates, and I think reality is "messier" than the neat diagrams - the question remains, how do I determine where the boundary of a real consciousness lies when in a real system every element can (eventually) interact with any other? Or is this assumption false? I will start with the memory aspect. Consider a 1-to-1 system: [A]->[B]->[C]. The state of A determines that of B, and in turn that of C. This system can only have one state, that of A, and is considered non-conscious. Similarly with multiple parallel chains:

[]->[]->[]
[]->[]->[]
Even though there is twice the "information" above, the two chains are fully separate and thus not conscious. Even if the two are 'tangled' but still strictly feed-forward this remains unconscious because the information content is the same.
[]->[]->[]
  \   /\
[]->[]->[]
There must be 2-way (loop) couplings to achieve a conscious system.
[]->[]->[]
  <-   /
   \  <-
[]->[]->[]

With 2-way couplings, the system forms an amplifier, which I argued is conscious, and which IIT treats mathematically. But what happens in a 2-way coupled system: why is it different from the feed-forward system? The single difference is that the 2-way coupled system responds differently based on its pre-existing internal configuration, while the 1-way coupled system responds to a given input in the same way every time, its output wholly determined by the inputs alone. In other words, a conscious system must have a sense of self - a memory, a state that it maintains in time. Just as transitions between electron levels yield photons, transitions between conscious system levels/states yield qualia of experience. [f19] It is strange, sure, but so is anything - that we see stars, or that gravity exists, or that we have 3 dimensions. That's just how physics works. This fact of system memory is significant also in the "usefulness" sense I argued earlier: feelings like pain and pleasure must be physically meaningful tools rather than 'extravagances', and for feelings to be useful they must be coupled with a sense of self and a *memory* with which to seek pleasure or avoid pain. It is senseless, physically, to have a disjoint sense of "pain" as a free-floating physical entity, and it is similarly senseless to have a system experience pain if it cannot take any action to avoid it (this is not to say that the *human* can avoid it, but rather the brain/neuron-firing entangled system via its internal memory). With a "constellation" of system states as in IIT, we see indeed that high-order feelings like pain would require a complex system of experience (ie they cannot be "disjoint") {this is argued more convincingly in a paper on the [Geometry of Qualia] by the IIT authors}, and with my above interpretation we see that all conscious systems have a "memory" that is inherent in driving the system state, ie its urges and search for pleasure. There is also an interesting connection here in terms of "tangled" systems: because conscious systems are tangled, their output can be sensitive to small changes in the input, unlike 1-way systems where outputs must change in direct relation to inputs. Namely, in a 1-1 system, changing 1 bit of input changes 1 bit of output. In looped systems, changing 1 bit of input could change the whole output, and this is *because* of the looped system's internal memory. This is seen commonly in hashing algorithms: they are designed to be sensitive to small changes, so a small alteration of the input leads to a large change in the output. This is not to say that hashing algorithms are conscious, but that our consciousness can act like a hash function. Indeed it is this action which is unique to conscious awareness - generating and using words and symbols as ideas and ways of action is in essence a hash algorithm, as I argued earlier - and here is evidence that we are physically capable of this. I cannot categorize an object or consider a word/idea if I am not conscious - and such categorization consists in fact of a hash function: I discard the "irrelevant" components and find "what matters", the words I end up using being only tangentially related to the physical-level input my brain receives. 
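
A small illustration of this avalanche sensitivity (an off-the-shelf hash standing in for a "tangled" mixing process - not a claim that SHA-256 is conscious, just that a tiny change of input scrambles the whole output, unlike a 1-to-1 map):

import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    # Count how many bits differ between two equal-length byte strings.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg1 = b"the yellow line on the road"
msg2 = b"the yellow lins on the road"       # one character changed (a few bits)

# 1-to-1 feed-forward map (here, the identity): only the altered bits differ.
print(bit_diff(msg1, msg2))                 # 3

# A hash mixes every input bit through its internal state: ~half the output flips.
h1, h2 = hashlib.sha256(msg1).digest(), hashlib.sha256(msg2).digest()
print(bit_diff(h1, h2), "of", len(h1) * 8)  # on average ~128 of 256 bits differ

The same input always gives the same hash, of course, so a hash by itself is still a fixed map; the point is only the sensitivity, which in a looped system comes from the internal state described above.
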
The next point, of real entanglement, is physically significant: in saying that the silent (but capable of activation) neurons contribute to the qualia shape as much as the firing neurons, consciousness takes on a mysterious quantum-like entangled state, "feeling" things without physically "touching" them, like the one-photon 'bomb' experiments where a photon is used to tell whether or not a detector is in the path without ever activating the detector. This seems acceptable in that it gives justification to qualia being a "vivid" experience of actively taking control, and suggests a response to the "why should we even feel time" argument if we are truly deterministic.

But here my criticism begins: I argued long ago that, based on all observable evidence, we are deterministic and live in a deterministic world. I can simulate a looped system on my computer. It doesn't seem right that consciousness can be "nested", ie if I mentally simulate a looped system I don't magically expand my qualia possibilities. Does the simulated system experience qualia? Because I am simulating it, shouldn't I feel the same qualia - and where are the simulated qualia spatially located? I think simulated systems shouldn't experience qualia, as this could lead to a qualia runaway and is physically even more difficult to define than the mysterious probability-entanglement that is our consciousness according to IIT. Then, either it is impossible to simulate consciousness, or we are in a tough situation - either deny the simulations swearing their feelings are real, or find a way to represent those feelings as caused by our (non-simulated) physics, allowing qualia nesting. Then again, simulating qualia-experiencing systems, just like evolving non-simulated conscious systems, requires specific physical actions, and it is these physical actions which give rise to consciousness in the first place. As claimed by IIT, consciousness expands to cover the biggest integrated-information block it can take (like grain growth or solidification of a supercooled liquid), so perhaps a consciousness simulating another consciousness really does expand its qualia state - then again, to even be able to simulate a decent-size consciousness, the simulating consciousness must already have enough resources/memory to do the simulation [f20] - and thus the simulation becomes inherently a part of the bigger consciousness; the two cannot be magically disjointed (ie to simulate a consciousness I must add physical elements to the real consciousness doing the simulation, but in doing so I satisfy exactly those criteria of creating entanglement that lead to qualia). I argued earlier that all systems eventually interact with everything else. The flip side is that as a piece of information interacts with more and more systems, it becomes less and less potent - roughly the 1/r^2 rule - thus total information is conserved but individual information always fades away in non-100% interactions. But maybe not all information is alike: maybe the brain's configuration is a filter, in the firing of neurons allowing only select types of information to interact. As in a steam turbine at a power plant, only ~30% of the steam energy is in an appropriate form to be useful as electricity; the rest is just dumped as heat. And of the electricity, only a few select states are appropriate to do computation inside the CPU, the rest, not satisfying the requirements, being dumped as heat. Requirements are set by the CPU circuit wiring. Similarly, our brain could act as an information pipeline/filter (recalling the fractal diagrams from earlier):

. \						/ .
	.				  .
. /	  \				/	\ .
		. (brain) .
. \   /				\	/ .
	.				  .
. /						\ .
as in this diagram, only some states will pass, and all the ones that do pass contribute to our qualia and make it unique.

Then, not all information is capable of interacting with all other information - in fact these interactions are highly selective, which is why barriers and circuits work at all. Then the role of consciousness is relatively clear, as maximal integrated-information representations. Our qualia then literally *are* the information that our brains select for. Feelings are the stuff of the universe. On the other hand, if all information can interact with all other information, it becomes more difficult to draw the boundaries of consciousness. As I said earlier, I can close my eyelids or use my hand to move something, and thus create a 2-way coupling with the external world. Even if this coupling goes through infinite gearings, I could argue the coupling between two distinct neurons is also of an infinite gearing type, certainly not a physically elementary object. So why shouldn't I say that my eyelid or hand or everything I see are also an active part of qualia? Why stop at the brain? Why not consider every action I've ever done and all the reflective effects those actions have had on me? This would create a vastly bigger consciousness that eventually covers the whole world. I certainly don't feel this big; then again maybe time scales matter - because my actions take place on the scale of seconds and effects may take hours or days to reflect back to me, the resulting qualia may be very faint and just something I easily categorize as memory ("oh, I remember writing that earlier, I see how it affects me to read it now" - an abstract feeling but perhaps qualia?). Though if this is the case, the entanglement of all possible consequences of all actions would be gratuitous even for a universal simulation machine, and again my consciousness certainly doesn't feel more capable of new qualia due to such long-term 2-way interactions. It seems a certainty that all my qualia are defined by my brain structure alone - ie the scope of all things I could ever experience is well-defined, even though I only explore/experience one tiny point of it at any one time. The only way to enhance my qualia is by a 2-way coupling into my brain, or by removing some (as in injuries/lesions), and so far as I can tell I wouldn't ever be able to compare my feelings directly - with an enhanced qualia space, I couldn't comprehend or imagine what it's like to have a smaller qualia space. There is a true difference here between an "add-on" device - like a bark-prevention collar that shocks a barking dog - and a true expansion of consciousness. The collar merely builds an association in consciousness, like biting our tongue teaches us not to bite our tongue, by rapid coupling of output muscle motion and input nerve signals. A brain implant to expand consciousness, on the other hand, leads to true new experiences - like "seeing" new colors, or feeling new types of pleasure or pain, or "visualizing" 4D space, or anything else. It would be inexplicable to another conscious system not configured the same way, just as I can't describe colors to a blind person. The vivid experience itself would also be inexplicable to myself unless I have the brain implant activated: I can write about it but cannot feel how it is to have it from the viewpoint of not having it; the qualia experience can be one or the other but not both. 
Again I must remind myself that all our qualia experiences are real and physical - experiences like "floating" or dissociation or feeling "spread out in space" or light-headedness are real physical artefacts, and maybe they arise from some critical moments when our consciousness somehow gets coupled to states it normally doesn't interact with. Maybe there are some natural occasions where consciousness really gets coupled to "real-world" objects outside the brain or even outside us; maybe these are the strange life-changing spiritual experiences some people proclaim; maybe drugs somehow help reach such states, because even experiences on drugs are still real physical qualia just as much as sober experience.

In further consideration of expansions of consciousness and its role, recognize that it has evolved in us to let us handle the laws of physics - thus it is in a real sense a reflection of the outside physical laws/physical world structure, since a fit organism must have a consciousness with prediction and control abilities with respect to the physical laws that organism has to follow. Perhaps this extends to the design and shape of qualia: in an effective consciousness, the feeling of "cold" (for instance) will lead to actions which help the body eliminate the cold, by taking advantage of the appropriate physical laws (ie increasing activity). We learn lots of things from society, but responses to our own qualia must be intrinsic and instinctive. Pain that cannot be properly handled is then an evolutionary mistake and represents a non-ideal consciousness arrangement, but evolution isn't too careful as long as breeding is not affected.

One thing we may hope for but haven't achieved is the ability to feel another's emotional/mental state - to get a full snapshot of what they are experiencing as opposed to the limited and ill-defined words we use today. This would be a revolution similar to the ubiquity of radio and TV now. When I say "this is frustrating", how frustrated am I and how does it compare to what another person is feeling? What do chronically ill patients have to put up with and how would I respond to that? Do two people really love each other and how do their feelings compare? Even if we had a way to connect two minds together (which we do already, in rudimentary form) such an ability may be out of reach - for to experience another person's thought, I would also need that person's memories, values/preferences, earlier experiences, and state of mind - in other words I would need to *become* the said person; but in doing so I would lose my own memories/preferences, thus not being able to truly compare the two - experiencing life as both myself and another. [f21] This will be true as long as thought requires a closed-loop system that defines itself - which I believe is the case. We may transmit and manipulate qualia, but we cannot experience another's mental life without losing our own, just like reprogramming a computer chip/FPGA. Since regular world experiences can change our mental lives, we then conclude that the "I" of today is a different person from the "I" of yesterday, and the two can never be compared - not even by me, the closest person to them. So perhaps our existence is a transient one - the "alien" thing I earlier called transient consciousness - where thoughts and qualia can arise and end, with periods of unconscious existence in between. [f22] Consciousness, when it arises, has access to all my memories and brain structure, thus it always feels like it is a coherent person although perhaps the conscious structure is only a few seconds old and will soon die. This certainly deals a blow to the cult of I - this time from the inside vs from the outside as considered earlier. The transience makes it easier to accept death in a sense - as we already experience it all the time. The key is to reduce suffering in our experience, and perhaps also to increase entropy to better fulfill our purpose in the world.

I am led to think of a computer based on a water-pressure circuit. What is there? A high-pressure (voltage) water line and a low-pressure one, and valves (arranged like the transistors in a CPU) to switch it around. But switch what? In the limit of infinitely good valves (transistors), [f23] the actual water flow goes to zero. What is being switched is not the flow but the *potential*. A pure consciousness is pure potential switching; to realize it we have to employ some flow, but with modern electronics this flow gets smaller and smaller. A consciousness is potential-field switching. This really works for the water computer because pressure is another representation of electric repulsion potential - effectively electrons crammed more tightly, thus a field gradient. The same happens in the brain - electric-field sensors around the head give consistent signals - and the consciousness is the field shape (3D) inside the brain, in its specific warping at a neuron level of resolution. Going back to the water computer example, what controls how the water potential changes shape? With infinitely good valves, an observer watching this computer would just see different valves continuously opening and closing of their own accord, as if by magic. It is the potential itself that controls how it triggers other valves. In other words, the potential affects its own shape. The system affects itself. It evolves. It performs a computation/optimization. It is a field interacting with itself, just like a particle. Maybe then particles are also conscious, at least when changing states/shape. [f24] Looking at the universe like this I feel a sense of awe at the tremendous computation/optimization conscious experience of the universe around me. Yet it is strangely appropriate: we interact with other people, who we assume are conscious and who obey deterministic laws of nature, and we likewise interact with objects, and it is a nice unity to say they are also conscious and obey their respective laws of nature. And how could it be otherwise? A competition of forces is a competition of physical will, of who has the most urgent need to move one way or another. It is we who make the mistake of thinking we are free actors and that inanimate things don't have feelings, or that feelings are not physical/quantifiable. My only experience of the world *is* feelings, so maybe it is logical to say that the world *is* feelings, at different levels and looking for different things. Unclear and vacuous language describing feelings leads to the mistaken notion that feelings are made-up, flimsy stuff rather than rigorous and consistent physical phenomena.
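
A toy sketch of this "potential switching" picture (my own illustration, assuming ideal valves that pass no flow: each valve simply connects its output node to the opposite rail of its control potential, and three of them in a ring keep the potential pattern moving by itself, like the clock/oscillator discussed earlier):

HIGH, LOW = 1.0, 0.0    # the two supply rails (pressures/potentials); no flow in the ideal limit

def valve(control: float) -> float:
    # An ideal inverting valve: the output snaps to the rail opposite its control.
    return LOW if control > 0.5 else HIGH

nodes = [HIGH, LOW, LOW]                  # an initial potential shape
for step in range(12):
    # Each valve's control is the previous node's potential (a ring of 3 inverters).
    nodes = [valve(nodes[i - 1]) for i in range(3)]
    print(step, nodes)                    # the pattern keeps rearranging itself, never settling

With an odd number of inverting valves the potential shape can never settle, so the valves keep being driven by the very potential field they create - nothing "flows" in the ideal limit, yet the shape evolves.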

What delineates a conscious system? It is what type of complex shape-shifting the potential field can do. A tiny consciousness cannot act like a large consciousness merely by using lots of memory, because its experience at each time instance is of a more limited nature: high-consciousness operations/optimizations/qualia must use all of their variables *simultaneously* so as not to be split into smaller consciousnesses. As with IIT, we can draw a rudimentary boundary: in the water computer example, if I connect the high-pressure pipe to the water main so as to have a steady pressure supply (just as when plugging in a computer), it is logical to assume the computer's consciousness has *not* expanded to include the whole water system, because its only linkage is one point which it can't really affect that much (since its flow is internally limited and ends up being degenerate: flow through any complex path within the computer system must all come through the single connecting link from the main water line, so the effect of the main line on which evolution shape is chosen is symmetric with respect to any shape - a bit of a Feynman path integral mental picture). Then, I claim the "universe computer" takes an optimization step to simplify computation of complicated potential shape evolution: it splits the potential shape according to connectivity (similar to [IIT]) and then evolves each part individually, giving rise to individual qualia that only feel as themselves rather than as the whole universe. While I don't agree at this point with quantum mechanics metaphysics (too muddy and convoluted, not clear about what is real), it is at least interesting to think of a qualia as a sort of very complex multi-parameter-optimizing wavefunction collapse following the potential shape of the brain/CPU/water-computer system. Then all consciousness must be associated with entropy (as already claimed earlier). I would emphasize that the *field* once again serves the unifying purpose of consciousness - just as it unifies millions of electrons in an antenna to send a coherent signal, it unifies the millions of neurons in the brain into a coherent qualia experience. The *field shape* is what determines what the qualia feels like and also how the field evolves: any computation that matches the brain will then experience the same qualia (but perfectly matching the brain is no trivial task). And surely, if a simulated brain says "I have feelings" then we have to take the report seriously. So consciousness is everywhere in the world, underlying every action we take/every interaction with other matter. [f25] My brain can interact with itself *much* more effectively than I can hope to interact with the world (ie by speaking or moving my limbs, vs neurons firing at ms rates in a hugely complex network). So my brain feels more or less like an "island" separate from the universal consciousness, but to the tiny extent I can couple with other systems, my consciousness also ought to expand. It would probably require thorough brain-implant connectivity to actually let me feel expanded qualia, ie "extra limbs" to do/feel more. I can exert control over many things by going one at a time (given these things don't change when left idle), but this does not expand conscious qualia, as alluded to above - for big consciousness optimizations must be simultaneous; it is simultaneity that expands the real-moment qualia possibility space, ie greater consciousness.


[f1] is multi-dimensionality (ie a distinct notion of perpendicular/orthogonal) required for an amplifier or even for gearing in general? Refer also to earlier moment article

[f2] perhaps the controlled system can be said to be "entangled" with the controlling one, as the former's evolution depends on the latter.

[f3] this can be interpreted in terms of the lever/gear article as an "infinite" gearing with the potential energy projection as 0 for any horizontal location of the sheet (equivalent energy) but rising to infinity for any vertical movement. Just like the finite gearing of a lever or pulled string, the infinite gearing of an amplifier conserves energy but provides no direct coupling between controller and flow - only a "virtual" coupling effective over time

[f4] another way of saying this is that the amplifier has an effect because its impact on the flow is of a persistent nature, affecting its energy dissipation for a long time (and for this persistence to hold, the coupling must be either 0% or 100%, because otherwise the flow itself will affect the amplifier and thus challenge its persistence of action), while the controlling side's evolution does not require any energy dissipation and thus does not have much of an effect on time-evolution there despite its possible persistence. The difference between the controlled side and the controlling side is that the controlled side has the capacity to dissipate energy continuously while the controlling side does not. Consider a portable music player as an amplifier: I press a button which results in continuous playback of music - only possible when energy/entropy is discharged/dissipated and evolution takes place - the play button either allows or disallows certain routes of evolution

[f5] such open/uncoupled systems must remain idle, like the eerie brainless body of a machine without its human operator

[f6] once I could hear oscillations (otoacoustic emissions) which I didn't really control but which I felt were an effect of my thoughts - alternating between 2 tones - this really made me feel like a CPU with a clock source, a bit surreal. Based on this I believe in a "brain frequency" of around 1 kHz; a paper lists the maximal nerve fiber firing rate at 2 kHz.

[f7] An unconscious system is "open-loop": inputs to it have a single effect that eventually reaches equilibrium. A conscious system is "closed-loop", so an input produces continued perturbations on the system itself.
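
A minimal numeric sketch of this contrast (the dynamics, constants, and function names are illustrative assumptions, not a model of any real system): the open-loop state simply settles after a single input, while the closed-loop state, whose output feeds back onto its input, keeps perturbing itself from the same single kick.

    # Toy sketch for [f7]: open-loop decay vs closed-loop self-perturbation.
    def open_loop(x, steps=10, leak=0.5):
        # the input has one effect and the state settles toward equilibrium
        out = []
        for _ in range(steps):
            x = leak * x                  # no path back onto itself
            out.append(round(x, 3))
        return out

    def closed_loop(x, y, steps=10, k=1.0, dt=0.3):
        # a crude discrete two-variable feedback loop (an oscillator):
        # one kick keeps producing perturbations on the system itself
        out = []
        for _ in range(steps):
            x, y = x + dt * k * y, y - dt * k * x
            out.append(round(x, 3))
        return out

    print(open_loop(1.0))          # decays toward 0
    print(closed_loop(1.0, 0.0))   # keeps swinging through positive and negative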

[f8] seeing an amplifier as an extension of gear/lever, an "infinite" gearing, we conclude that any finite (not 0 or 100%) coupling leads to an attainment of thermal equilibrium and death of dynamic processes. A continuous energy/entropy source is required to keep a consciousness sustained in a real dissipative system.

[f9] or maybe even programming, but it may not be easy to establish a CPU's consciousness - from what I understand there aren't complete loops in a CPU and it is instead externally driven by a clock+data; the clock itself may be an elementary consciousness though, as outlined below. If this is believed then any consciousness must have its own, internal time basis - which is certainly supported by our subjective feeling of time that is situation-dependent.

[f10] this might be a definition of life: counteracting natural forces to keep itself away from thermal equilibrium. Then a flame is "alive" even when its constituent atoms/molecules reach thermal equilibrium and dissipate/"die" eventually (?)

[f11] yet on the other hand it does. Using a machine, in any moment we may experience 'normal' qualia of the body, but the succession of qualia over time is unique to our use of that machine. The feelings I get when riding a bicycle cannot be recreated any other way except riding a bicycle, and those feelings are a unique consequence of coupling to the bicycle as an external conscious system; same with driving a car.

[f12] rudimentary computers could be said, in contrast, to operate purely on binary data, which is relatively less complex - only we interpret different binary data as sound or video or text; the computer handles it all the same. Yet modern computers are more complex: they operate on whole 32-bit words and many varieties of data structures simultaneously.

[f13] to be more explicit, the feeling of an "I" feeling itself is a feeling (like abstract thought) that is enabled by the brain's structure/loop complexity. The brain generates feelings, and they don't need an observer - just like an atom in space exists+interacts whether someone sees it or not. Most feelings are simple and not nearly self-aware, but the feelings our brain generates have the capacity for self-awareness and thus feel as if there is something that "is feeling" them, while in reality this is all part of the feeling space/spectrum.

[f14] 50% bidirectional coupling all along the loop is also an intrinsic property of the field, further suggesting that the field itself is a good candidate for consciousness

[f15] In survival situations, people end up doing things they would have never considered normally - they literally become different people, just with the same memories

[f16] there is a relation here to the ascetic lifestyle developed in spiritual practices seeking enlightenment

[f17] how quickly to go that way is a constant based on speed of time/information evolution

[f18] here we see again the effects of 'invisibility' of a consciousness towards its own experience in a simulation. The physical laws are such that a simulated being in our universe cannot see the nature of what simulates it - just as relativity blocks us from gaining any absolute reference point. So it could very well be that a simulation of a conscious brain leads to the 'weird' qualia of the computer doing the step-by-step simulation, compounded with the 'human' qualia that the simulated brain will claim to feel.

[f19] and as photons cause electrons to rearrange themselves, qualia cause complex systems to rearrange themselves from one eigenstate to another

[f20] it is fitting here to consider our place in the bigger 'world consciousness' as simulations made by and taking computing resources from it to turn towards our goals

[f21] maybe there is a non-trivial connection here with similar limits in quantum measurements. Both come about from intrinsic epistemic difficulties of a desired measurement being incommensurate with physical restrictions.

[f22] There is an argument [Unreal Universe] that our vivid experience of time is only possible because of this strictly temporally bounded existence of conscious activity.

[f23] that this infinitely-good computer is *not* possible is an important universal property: there is some minimum energy dissipation/flow to do computation/gradient shape shifting
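
For concreteness, a standard bound of exactly this kind (quoted here as outside support, not something derived in this text) is Landauer's limit: erasing one bit of information at temperature $T$ must dissipate at least

    $E_{\min} = k_B T \ln 2 \approx 2.9 \times 10^{-21}\,\mathrm{J}$ at room temperature ($T \approx 300\,\mathrm{K}$),

so even an idealized computer cannot shape-shift its gradients for free.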

[f24] again assuming as in [Unreal Universe] that a sensation of time is based on bounded experience, our qualia are felt as such because the nature of the brain ensures that the spontaneously-arising 'conscious particle' within can only live a short time until its available energy is dissipated. Meanwhile physics particles like atoms do not dissipate energy and are effectively 'immortal' thus they have no experience of time (in an equilibrium state).

[f25] when I die my brain's computational ability is returned to the greater world. This ability is the 'freedom' (as in degrees of freedom) of the brain network to perform optimizations, which comes at a cost of excluding other potential optimizations.
