Deterministic World

Part 3. Self

1. Brain Operation and the Self

Consider conscious processes in our brain: we have different senses and abilities, and the different qualia have different aspects that speak to their purpose and effect. From introspection, for instance, we have a 'stack' buffer where the conscious "me" plans intentions and actions in an abstract way, like "move the fork here then pick up the food" or "drive straight in this lane then make a right turn". These abstract plans are then processed and executed by subconscious processes which also employ feedback from the senses - these control loops are fast and take no further conscious effort, letting me continue thinking. I still feel what is going on and can stop or change the plan at any point based on my conscious evaluation of the situation and whether my actions are still appropriate in an abstract sense, something which the subconscious processes cannot do. In terms of feelings, pain for instance has intensity and body location. Hearing has frequency, intensity, and external spatial localization. Sight has color, intensity, and external spatial localization. Taste has flavor, intensity, and localization to the tongue area. The sense of motion/acceleration has a direction component but no localization (or perhaps roughly to the head area). All these properties ought to be reflected in how the qualia are actually encoded in the brain network, and may give hints on how to find them in brain scans or maps.

Consciousness is an infinite-coupling bounded optimization system that is, in a sense, "taken" from the universe's grand consciousness at birth and then returned at death. It is a system that is split from the universe and can act independently in it. Our qualia experiences are a small subset of universal qualia.

The human brain is sensitive and responds well to certain systems. For instance, walking and throwing/catching are tasks that modern robots can't do, even though they can beat us in chess and calculations. It might then be possible to transform "difficult" problems into "intuitive" ones with knowledge of what the brain works better with. This applies both in terms of system outputs (a flashing/moving shape, a sound to alert, colors) and in terms of interaction (immediate feedback, proportional response (one can learn by a "a little more/a little less" approach), consistency of response over time).

I argue that thoughts and brain function are basically built from a "blank slate" by an algorithm that enables learning (at first by trying random patterns, then by forming associations when any repetitions are recorded). This is because humans are able to learn and act as members of other cultures given enough time. Because advanced skills like walking/throwing require well-developed algorithms, I argue the thought algorithms will be similar in nature. For instance, walking is rhythmic, usually unremarkable, but with each step unique. The brain is unique in being able to "feel" its own operation - to sense itself (to some extent) - which we experience as thought. Most thought is also repetitive and easily forgotten. Everyday and common experiences are largely forgotten (so adults may say "the year flew by" as an indication of much regularity in the events of the year - the compression done by the brain has merged all similar days together, so the year seems very short). Unique experiences are unduly emphasized.

Based on the ease/difficulty of memorizing and visualizing things, I claim the following:

Using as a starting point the idea of visual harmonics [Lehar] (which describe a scene in terms of simple objects as represented in the brain - the objects we consider simple/natural, of course: cubes, spheres, triangles), body harmonics (the movements we make naturally - in walking, dancing, playing music, sex), and musical harmonics (in listening+singing at certain pitches), I propose the idea of thought harmonics - these will be evidenced in "natural" human creations, specifically: fables+myths, common elements of stories, early religious+scientific knowledge, debates+logical fallacies.

Here I focus specifically on stories/fables inspired by observations of a baby's life. [f1] A modern baby's everyday experience is a fairly sheltered one, yet he is told stories not about everyday or familiar items (or even physics, as it were) but about unfamiliar, strange animals that also happen to talk. Why make such a demand on the infant's memory? Perhaps because the parents like it - just as baby toys are pretty/colorful/cute even though the babies don't care; it is the parents who buy the toys, so they have to be appealed to. But I think there is more to how such fables came about (more, even, than just mindless words to keep the baby entertained, as would be done by any desperate mother before fables/lullabies/fairytales as such were widely recited), which shows some underlying aspects of human thought. For one, they reflect entirely dissonant beliefs - animals who talk and build houses - which reflects the human ability to abstractly connect otherwise unrelated topics (this dissonance is also seen/taught in early childhood gameplay - feeding plastic toys, imagining toys as living or thinking or, more vividly, *moving* entities - as in playing with soldiers or dolls or dinosaurs). Such dissonant connections ought to be tough for the growing baby to reverse, but where did they first originate? Probably as a combination of a strong desire for information/explanation/closure (evolutionarily a necessity so that any learning - an extremely difficult process - gets done at all!), strong stereotyping (similarly necessary so the brain can make sense of new situations/use little memory), perhaps cross-overs from personal experience (projective explanation - how can lightning happen? It is loud and sudden, so a person must be throwing it (whatever 'it' (a unit of lightning) means - just like talking animals)), and an over-willingness to accept an intuitively "OK" explanation (one that doesn't conflict with *previously* held beliefs) rather than a rigorously proven one.
[f2]

And yet these discordant, accidental, illogical associations may play an important role in thinking - namely, providing a "randomizing" element in a local peak-finding function to make it more global, a key element in "creative thinking". It then follows that not only language (due to accidental similarities/connections between words that are written/said similarly) but also cultural heritage (fairy tales, religion, explanations) plays a role in a human's thinking abilities. As outlined by [Anderson], DNA's two strands are prone to error - this could be fixed by using redundancy or some error correction, but it is not in living beings - because errors lead to mutations which can generate more evolutionarily fit beings, and the beings which can evolve (by genetic mutation) will outlast the unevolving ones in a diverse and changing environment. Humans and mammals can also evolve in a non-genetic way by passing on/discovering behaviors and actions [f3]. Language is a key component of this evolution for humans, serving as a means of recording and teaching others behaviors and facts important for survival and prosperity. Language is also quite loose - what exactly do I mean by "quite loose", for instance? Perhaps even I don't know, although I wrote it down. Just like DNA, language is prone to errors and misinterpretations. But perhaps this is no accident - we could develop a language that is extremely precise and unmistakable, but this would not survive evolution. The non-specificity of language allows it to adapt to different circumstances and play a key role in mental connections and creativity. If there were one unique interpretation of every statement, everything would be perfectly factual; because of different interpretations I may see connections - perhaps arbitrarily - and come up with new "original" ideas. So even imagination and creativity are deterministic, and language-driven.
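The "randomizing element in a local peak-finding function" can be made concrete with a toy optimization sketch (the fitness landscape, step sizes, and probabilities here are all invented for illustration): a climber that only takes small careful steps gets stuck on the nearest peak, while one that occasionally entertains an arbitrary far-off "association" can find the higher peak.

```python
import random

def landscape(x):
    """A 1-D 'fitness' with a local peak near x=2 and a global peak near x=8."""
    return -0.5 * (x - 2) ** 2 + 4 if x < 5 else -0.8 * (x - 8) ** 2 + 9

def climb(start, jump_prob=0.0, steps=2000, rng=None):
    """Greedy hill-climbing; with probability jump_prob, try a random leap
    (the 'dissonant association') instead of a careful local step."""
    rng = rng or random.Random(0)
    x = start
    for _ in range(steps):
        if rng.random() < jump_prob:
            cand = rng.uniform(0, 10)           # arbitrary far-off association
        else:
            cand = x + rng.uniform(-0.2, 0.2)   # careful local refinement
        if landscape(cand) > landscape(x):      # keep only improvements
            x = cand
    return x

greedy = climb(start=1.0, jump_prob=0.0)     # stays near the local peak at x=2
creative = climb(start=1.0, jump_prob=0.05)  # random leaps reach the peak near x=8
```

The purely local search never leaves the first peak it finds; adding a small rate of "illogical" jumps makes the same search effectively global - the mechanism suggested for creative thinking.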

We can also recognize the common appeal of certain stories/thoughts/ideas across cultural heritages; these concepts must then be a part of our biological programming (as opposed to culturally evolved capacities). They include: buildup and resolution [f4], repeating/common elements (the rule of 3), focus on a narrow group of actors (or, in music, themes), and ideas of supremacy/power/self-improvement/sacrifice/progress.

This writing itself may be used as a case study of brain function, as I write it in the order that I think of the concepts. I claim that humans as a whole are much less complex/intelligent than we (as first-person viewers) like to believe. We already have evidence of this in the information/complexity realm with modern computers and detectors: calculations and measurements easily produce megabytes of data, but the human must reduce this to some text characters (perhaps kilobytes), and in the case of scientific data the normal approach is to then make a scatter plot and fit a line to the data - reducing all those megabytes to the two numbers defining a line. Computers have no problem handling the full dataset {so it is not surprising that machine learning algorithms are now more accurate in making a medical diagnosis than a doctor with access to the same data}, but the brain must filter the data to an extreme level to make use of it. It must be the case that such a simplification ignores some phenomena (or in fact most phenomena) of the full dataset, but the power of the brain's operation (and the reason why supercomputers are not "smarter" than the simplest humans) lies in extremely precise and efficient filtering of data into only the significant aspects - the aforementioned line through the data points is all that we really need to know - as evidenced by the success of this scientific-data approach. Humans intuitively select the "important" parts of patterns and can learn to do so more efficiently and for new types of data (this is what happens in classrooms - students learn to find patterns in new data, guided either by the teacher or by their pre-existing pattern knowledge and guesswork). If the above is true then our brains must be extremely sensitive to patterns, and since they store data as patterns (as opposed to bit-by-bit like computers - why we don't readily catch typos) our lives must be characterized by patterns, as well as our thoughts.
There is a way to test how pattern-dependent our lives really are (the same applies to our thoughts), carried over from complexity and information-entropy concepts in computer science - in other words, how well some type of compression algorithm would be able to "compress" a person's life and experiences [f5]. If every day is the same, such an algorithm could store many physical days of living as only one day of experience - something that happens in reality, as evidenced by sayings like "wow, the whole year seemed to go by in a day!", of course explainable by the exceptional pattern-matching of the brain. Applying this analysis to my life reveals an embarrassingly small repertoire of activities, but I know from computer models that finding an optimal solution requires extensive searching of the space - anything less will just lead to a local minimum (a less optimal life); there is comfort in familiarity, but too much repetition leads to a stifled/cabin-fever feeling.
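This compression test can be sketched directly with a standard compressor (the "experience logs" below are of course fabricated stand-ins for real experience): a routine year compresses to almost nothing, while a year of distinct events resists compression.

```python
import zlib

# Hypothetical experience logs: a routine year vs a year of distinct events.
routine_year = ("wake, commute, work, lunch, work, commute, dinner, sleep. " * 365).encode()
varied_year = " ".join(f"day {i}: event-{(i * 7919) % 1000}" for i in range(365)).encode()

# Compressed size / original size: lower means more redundancy (more pattern).
routine_ratio = len(zlib.compress(routine_year)) / len(routine_year)
varied_ratio = len(zlib.compress(varied_year)) / len(varied_year)
```

The routine year's log collapses to a few percent of its original size - in this information-theoretic sense most of its days carried no new experience, which is exactly the "the year went by in a day" effect.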

Experimentally we can probe the brain's operation quite easily - the hardest part is accepting the outcomes as real. We have evidence of the brain's function in culture: art (styles, subjects, movements - existence of and progression), language (similarities across cultures, how emotions are conveyed, grammar as well as written/literary conventions), music (effects of rhythm, split into quantized notes, repeating phrases/refrain, rhyming words, power of imagery from lyrics with song), literature (the Hero's/Odyssey story, poetic language and rhyming (iambic pentameter), metaphor and simile, theme and tone, types of events and imagery discussed and left out, skew of forms towards certain elements ie more dramatic dialog than action dialog than setting description), engineering and science (evolution of ideas, which things are considered important, classifications and sub-classifications and more ways to keep many items orderly), history (how people rise to power/lead, what people believe in, what people study/record and how they reinterpret it), religion (people's most private views of punishment and salvation and sacrifice/martyrdom) and politics/media (influencing people (cults?), winning arguments, bribes and ethics, what people like to read/watch on the news (celebrity crap)). Games also give a unique insight - they are intellectual activities designed by a brain so that undertaking them feels like a pleasant way for a brain to spend time. So the brain likes some competition, some challenge, clear rules, and the ability to make choices/affect outcome, at least from a rudimentary view of board games. Surely more impressive insights into brain function could be obtained with a more careful analysis, ie the level of complexity that the brain likes to handle (chess was made to be a certain level of complexity - why not more or less?), and the things it finds challenging/interesting/enjoyable/rewarding. 
The patterns are out there, and it is only a matter of realizing they exist (Google and other 'big data' companies *already have* this data, thus are in a position to be *extremely* and *undetectably* manipulative of the general public. [f6]). We have evidence of the brain's function in self-analysis and others' reports of self-analysis: memory (forgetting things, remembering incorrectly and swearing one was correct, 'knowing' the object but not being able to recall its exact nature - "I would know the name if you said it but can't think of it now", taking longer to remember some things than others, memorizing only some aspects of the data, skew in data representation - perhaps I remember that "it was a foggy day" but not much more), thinking (progression of thoughts, depth and time of focus on single topic, recurring thoughts, 'single-track' as opposed to 'multithreaded' thoughts, pauses and gaps in thoughts (seen by self and also not seen by self but seen by others), time of completing a thought, ideation and its stages and its dependence on earlier memories/thoughts), music ('ear worm' tunes that 'play on loop' in the brain - what's so special about them?, lyrics and melodies that are emotionally moving, instrumentals and their effect on emotion and thought, which sounds in ambient environment one finds pleasing vs not), visual (visualization of 3D shapes - but not overly complex ones (what *we* call simple/complex in the first place), preference for certain works of art or natural settings or visual patterns, what gets captured and emphasized in drawings, how our eyes move across a scene), emotional (what makes us feel a certain emotion, also what effect different emotions have on our behavior and "rational thinking", how emotions transition from one to another and on what timescale), and general mental (what things we find difficult, pleasant or unpleasant, intuitive or confusing, how long it takes us to think something out, what fallacies of argument and 
reasoning we have fallen prey to and why, how do we think (an internal "voice" or a running text or something else? can more than one "voice" be active at once?), how precisely do we think and how precisely *can* we practically think (solving a tough math problem for instance), what errors we make 'accidentally' (like a typo or misspoken word) and why we so readily ignore them - there are no accidents in a deterministic world so the errors also have a specific explanation). Finally we have evidence of the brain's function through interactions with other people: experimental studies (development of traits and abilities in children, memory extent and effects of environment, power of reinforcement, "group action" (ie changing one person's opinion)), conversations with others (what topics I choose, what topics they choose, how conversation progresses, how both participants are affected emotionally by the exchange), and activities with others (effect of physical activity on mental function, emotional closeness vs physical closeness, changes in thinking that take place in teamwork and with newly formed teams over time). The cultural and historical evidence is vast and requires no "probing" - the data just needs to be properly analyzed as it is already and easily available. Self-based evidence requires an acceptance of using oneself as a subject and willingness to honestly undertake and report results in an experiment - usually this is easiest with external (environmental) devices both for recording one's actions and for providing novel stimuli for studies (another justification of why there is no substitute for really experiencing something - living "vicariously" is *not enough* for this type of analysis). Others-based evidence can be found in psychological studies and can readily be obtained from everyday interactions as long as one knows what to look for (again, you cannot watch for a pattern that you don't know exists).

As alluded above, in terms of processing and data interpretation the mind is not too impressive compared to a computer. The real power of the mind comes in its extremely efficient and robust memory storage, access, and association structure that can often lead to solutions with minimal computation (the 'solving' in solving an algebraic equation means changing the form of the equation until the solution is obvious, rather than numerically computing the answer - similarly we change the mental form of a problem/its representation until the solution is obvious with minimal 'brute force' computation). Among the effects described in [Kahneman] are: halo effect (a single/first-in-time likeable property causes others to be interpreted favorably), associative memory (3-word groups: RAT - indicating that similar concepts get activated at once), context-dependent decision making (factors of what is on one's mind determine answers to same question, such as evaluation of parole or patient diagnosis, at different times based on earlier circumstances), and intuitive expert judgement (firefighter commanders, chess masters). The focus in [Kahneman] is on cognitive effects, but here I suggest an interpretation for how memory functions. A quote from [Herbert Simon] on how 'experts' like airline pilots can make decisions and operate complex (to those lacking pilot training) control panels: "The situation has provided a cue; the cue has given the expert access to information stored in memory, and the information provides the answer". The part that stands out to me is "the cue has given the expert access" to a particular memory. The insight here is that the memory would not have *been accessible* without the cue - the expert could not "will it" into his mind without a requisite cue or line of thought. Even our memories - something we like to think we have control over - are actually entirely outside our control and instead affected by external factors. 
We cannot "browse" our memory as we might browse a computer disk drive, knowing that eventually we might make our way through all memory and suddenly see old files which bring surprises. It is impossible for us to tell how much data is in our memory, and similarly impossible for us to stumble on a surprising memory without an external aid (such as an old photograph/souvenir - which gives such aids a special cultural prominence). We can only access memories related to what is on our mind at the present (or recent past) moment. Thus I believe that our memory has the overall operating principle of a hash table - where relevant properties of an object (its name, basic features) are hashed and act as a pointer to areas of memory where additional information is stored. I further claim (with less evidence, but support from the information-exchange principle) that the storage of information to memory takes place by transferring recent (short-term memory) and current experiences into the memory areas indicated by the hash value, ie there is always an information exchange between memory and world, rather than just access. This implies a few things: our thinking is context-dependent, because present feelings (hunger, tiredness, emotions) will affect not only processing but also the hash that lets us access memories, so a thirsty person may more easily remember episodes that include water. (After writing "emotions" I now automatically recall a song where that word is in the lyrics.) Next, our learning is also context-dependent - this explains how irrational fears arise and can be de-conditioned by repeated exposure to a stimulus (creating that state of mind) without the associated scary action - the lack of consequences, or their presence, gets stored in memory automatically associated with the hash of the stimulus.
There must be some time constant beyond which a consequence occurring at a long enough delay after the stimulus does not result in a coherent association, because by that point the state of mind has changed to another 'hash'; this may well be the case with emotional/chemical effects and leads to cycles that seem irrational but are nonetheless stable. When I get back a school assignment many days after completing it, to make any use of the feedback I must re-read and re-understand the problem (again entering that state of mind) to then interpret the feedback and modify my approach.
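As a sketch, the hash-table-with-exchange idea might look like the toy model below (a conceptual illustration, not a claim about neural implementation; the class and all names are invented): the storage key depends on the cue and the current internal state, and every access is an exchange rather than a read.

```python
class ContextMemory:
    """Toy model: memory keyed by a 'hash' of cue plus internal state."""

    def __init__(self):
        self.slots = {}  # key -> list of stored experience fragments

    def _key(self, cue, state):
        # The 'hash' mixes the external cue with internal state
        # (hunger, mood, ...), so recall is context-dependent.
        return (cue, frozenset(state))

    def access(self, cue, state, experience):
        """Return what the current context can reach, then merge the
        present experience into the same slot (exchange, not read-only)."""
        key = self._key(cue, state)
        recalled = list(self.slots.get(key, []))
        self.slots.setdefault(key, []).append(experience)
        return recalled

mem = ContextMemory()
mem.access("river", {"thirsty"}, "found cold water here")
calm_recall = mem.access("river", {"calm"}, "watched the current")       # -> []
thirsty_recall = mem.access("river", {"thirsty"}, "returned for water")  # -> water memory
```

The same cue fails to retrieve the memory under a different internal state, and only re-entering the matching state regains access - the "thirsty person more easily remembers episodes that include water" observation, and the reason feedback works only when the original state of mind is recreated.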

Limitations apply to memory - we cannot remember a train of thoughts or any other structure for long, because eventually external circumstances change our state of mind and make it unable to recall the memory (even though it is still stored, it is inaccessible without the hash - it may be suddenly remembered later on when external circumstances change yet again). This also happens when "I know the word but can't think of it" or "I've heard that melody before but I can't tell where" or "I can't remember it now but I'm sure I'll think of it sometime later" or "I could find his name on a list, but I can't bring it to mind". In this case, a fragment of information is found (perhaps from a similar hash - on the topic of collisions) but its associated hash is unknown, thus access to most of the associated information (word spelling/definition, song title, person's name) is impossible even though the mind can usually assume the information exists (sometimes the similarity ends up being an illusion - the melody I was sure I'd heard before might actually be new). The original hash may come to mind if another fragment of information is accessed (perhaps there is a "hash recovery" algorithm) or seemingly at random {indicating the role of external circumstances - emotions and feelings - in the hash generation}, but once it is available, all the previously unknown information suddenly "comes to mind" and is now clearly accessible. The person achieves a frame of mind that can access that earlier information. Logically, if the person could not access that memory for recall without a proper state of mind/hash, it is difficult to argue that learning or storage into that "memory slot" [f7] could be possible without that same state of mind/hash.

A hash may give access to associated ideas - either multiple hashes are tried, or more likely (since the brain can operate in highly parallel states on the unconscious level to achieve processing efficiency) actuation of a hash also activates "nearby" memories - a process called "creativity" or "imagination". If the hash algorithm were strictly highest-entropy, we would expect associations to be largely random - yet more often than not there is a logical connection (in other cases the connection may still be circumstantial - but not random). Thus more likely the hash is a "logical" function of information - nearby types of information will have nearby hashes and nearby storage in memory, allowing associative activation when only one concept is directly referenced. Then "creativity" in a person is the ability to explore larger regions of memory surrounding the central "prompt" topic, coming up with "creative solutions" based on more possible alternatives than people who only retrieve limited items from memory (because of brain function differences or because they don't have many similar memories/experience in the field) - down to the limiting case of a person who can only think of the immediate problem posed with no idea for a solution available from memory: the least creative person in our sense, the student who looks blankly at a problem and doesn't attempt to go any further.
People can be creative in different areas, suggesting differences in either brain operation (unlikely - brains are complex organs that evolved over millions of years and can operate robustly even with injuries/repair injuries - brain operation must be similar in all humans - stories and languages of all cultures have similarity) or early (infant/toddler) circumstantial learning and teaching {and "creative" as well as "non-creative" memory types, both present in people}, forming preferences that can lead to run-away processes (liking something leads to learning it more, leads to having it on the mind more, leads to liking it more). In this sense the structure of language, the "arbitrary" words used to name different objects and actions, serves as a way for a brain to organize its information and build its creative/associative abilities - people thinking in different languages would have different creative results (surely seen in literature).
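The contrast between a maximal-entropy hash and a "logical" hash can be sketched as follows (the concepts, feature sets, and bucket count are all invented for illustration): when the bucket is computed from content features, concepts that share features land together, so activating one surfaces its neighbors - the associative activation described above.

```python
import hashlib

def feature_code(feature, n=97):
    # Stable numeric code for a feature (md5 used only as a fixed mapping).
    return int(hashlib.md5(feature.encode()).hexdigest(), 16) % n

# Invented concepts described by feature sets.
concepts = {
    "fork":      {"utensil", "metal", "kitchen"},
    "spoon":     {"utensil", "metal", "kitchen"},
    "lightning": {"sky", "loud", "sudden"},
    "thunder":   {"sky", "loud", "sudden"},
}

def logical_bucket(features, n=97):
    # Bucket is a function of content: shared features -> shared bucket,
    # unlike a cryptographic hash, which scatters similar inputs.
    return sum(feature_code(f, n) for f in features) % n

buckets = {word: logical_bucket(feats) for word, feats in concepts.items()}

def associates(probe):
    """Activating one concept also activates everything stored in its bucket."""
    return sorted(w for w, b in buckets.items() if b == buckets[probe] and w != probe)
```

Here probing "fork" also surfaces "spoon", and "lightning" surfaces "thunder" - retrieval by content locality rather than exact key, which is the proposed substrate for imagination.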

A general problem with the hash table approach is collision - a new thought with a hash similar to a previous one will tend to overwrite the old contents. This is not an issue with an identical hash, as in that case the old memory will be accessed beforehand, allowing evaluation of the exchange (the exchange will still happen automatically), and in any case with a good hashing algorithm an identical hash means the memories are all compatible/relating to the same topic. But with a "close enough" hash, full recall of old information may not occur while it is overwritten with new information. Thus old memories are subject to corruption over time - not outright deletion or disappearance but slow partial changes that eventually leave no remnants of the original memory behind - especially if they are similar to everyday experiences or are otherwise accessed often (since they are subject to overwriting any time they are accessed, by association with the situation at hand). {This may be another reason we place undue emphasis on unique/out-of-the-ordinary but rare events vs the commonplace but consistent events.} For instance, people may recall the wrong color or attribute of some childhood item (claiming they were sure all along!) if they've had many similar items in their lifetime; having told a certain story about an event in their life for long enough, people may claim different things happened or to a different extent, or even take on the point of view of another observer in the story (if some part of the story comes from/involves that observer's view). Additionally, since similar experiences will have a similar "hash", prolonged or repeated experiences will not be remembered/recalled for their real full length but only for the smallest memorable associated hash - a built-in compression algorithm that makes us underestimate the power of duration [f8].
Thus one may hear "the year flew by in a week" - every day was similar enough to the others (except for weekends) that at the end of the year there is only the memory of a generic day rather than 365 unique days. This also implies that the unique features of each day are overwritten and forgotten with each subsequent passing day, as implied by the above mechanism of hash access.
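A minimal sketch of this overwriting (the day records and coarse "hash" are invented for illustration): when the storage key captures only a day's salient features, 365 similar days collide into one slot, each new day replacing the previous one's details.

```python
# Toy overwrite model: a day is filed under a coarse 'hash' of its salient
# features, so similar days collide and only the latest details survive.
memory = {}

def coarse_hash(day):
    return frozenset(day["salient"])  # only standout features enter the key

def live_through(day):
    memory[coarse_hash(day)] = day["details"]  # collision means overwrite

for i in range(365):
    live_through({"salient": {"work", "commute"}, "details": f"day {i}'s small moments"})
live_through({"salient": {"wedding", "friends"}, "details": "the one unique day"})
```

After a year, the store holds only two entries: a single generic workday (whose details belong to the most recent one) and the one unique day - "the year flew by in a week".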

Finally we get to the topic of learning - the desired storage of knowledge into memory for future use. What are ways to make learning efficient and useful? The hash table model suggests: maintain a relevant context for learning (avoid rapid shifting of subjects; repeat background ideas to establish a context (intro/background) before presenting new information in said context; mention associated concepts as practical to help students build effective associative networks (conclusion and future work)); repetition within said context, along with consistency, will help learning; coherent sets of ideas which can all be linked together in similar areas of the hash table will be remembered much more effectively than disparate and arbitrary ideas, so form a narrative/curriculum that highlights how all presented ideas establish each other and stand in relation to each other; avoid teaching on subjects that are easily overwritten - pedantic or everyday situations - and make the context unique to the subject (the physics of spaceships is somehow more memorable even though Earth is a 'spaceship' in its own right). Have a single coherent story or authority to avoid overwriting and/or cognitive strain (people have a drive to find - not the truth but - a convincing story/explanation for an event, to maintain the illusion of control in a purpose-built world - claim that you know why a random event occurred with enough charisma and you will be respected/trusted). The above recommendations apply for learning "intuitive solutions" - approaches to problems.
By these criteria existing schools aren't too great: 'information-dump' lectures can't lead to learning, as the brain is not a memory card - it will only remember what's relevant to its mental state; explanations of why and how, as well as practice, are conducive to learning - in this sense I felt most of my learning in grade school was from reading a textbook while doing homework, not from sitting through lectures. The constant shift from one subject to another in a haphazard manner throughout the day, and in the expected homework assignments, also didn't help much. My biggest criticism, though, was the total lack of rapid feedback - graded assignments (with hard-to-interpret comments, or none at all) would be returned weeks after completion, by which point both I and the grader had forgotten the mindset necessary for that assignment, so a productive discussion of why I made some error or what I should have done instead was impractical (and even more so when also keeping up with the new assignments handed out since that time).

Beyond just "learning", a purported role of schooling is to build "problem-solving skills" - the application of the learned material in an appropriate manner to approach a new problem. This involves intuitive processing rather than pure memory [f9], and has a different set of requirements - much more stringent, since processing performance is not very high. Namely, the processing operational memory (short-term memory) is very low - a few concepts or digits at a time [f10] - and the results cannot be held in memory for long without a very convincing context to store that much data in hash memory. Thus we get a different set of recommendations for "intuitive problem-solving skills": make problems manageable - set up an algorithm to simplify complex problems and introduce algorithms with the simplest/fewest-variables cases for best understanding; provide immediate feedback on the computation result (or algorithmic approach, such as finding a proof), including not just where something went wrong but why and how, and also explain alternatives to the "correct" solution to encourage associative analytical thinking rather than overconfidence or ignorance of alternative approaches; to the greatest extent possible let students solve associated problems so they get practice with problem-solving (this is more work for the teacher, because often students will solve problems "their way", which is a great sign of learning and creative understanding but makes it harder to correct or justify, so it is usually squashed in favor of a standardized approach); provide many practice problems with immediate feedback so the student learns when different solutions are appropriate (by a mechanism similar to PID - when learning to ride a bike it helps to practice a lot and gain exposure to different road conditions; failure is a necessary part of optimization (which is why companies that make things that "can't fail" regularly set up test situations to achieve failure and understand it)) - it will be faster if the
examples concentrate on one aspect/parameter at a time. Again the school system isn't great - with feedback being late (days/weeks instead of seconds) and ineffective [f11] (grades - causing an emotional reaction and failure recorded as a permanent negative, instead of direct feedback without judgement and encouraging exploration), and judging ability by tests - something on which students have no feedback and is in a different context from both class instruction and homework. The increasing class sizes and actual (vs purported) position of school make it impractical to implement the above suggestions, so individuals should take steps to get any desired knowledge+training on their own (and children do, through video games and arts hobbies and sports and social interactions where they get instant feedback). Ultimately all learning must be self-motivated, as it involves the student committing concepts to memory this will only happen when the student chooses to do so because it benefits him - either because he likes the knowledge or because he wants good grades; a student who has no personal interest in learning, an internal brain reward and urge for it, will not learn with all of the best teachers' efforts. [f12]

A tangentially related memory/learning occurrence is in the classification of items. There seems to be a predominance of statements like "in this bookstore, it seems either you love the book or you hate it" or "classes here are either impossibly difficult or a breeze" - classifying into two explicitly extreme/opposite groups. This is even when, purely by statistical considerations, a nice "medium" would be much more likely (see also regression to the mean). There may be three factors at play here: first, extreme cases (a particularly good book or difficult class) are more memorable, as argued above from the structure of memory; second, we have an evolutionary incentive to learn only what is necessary - if everything is according to expectations we know enough to predict the next step and stay in control, so no further learning is necessary, but if we are surprised by an experience that does not match our expectations then we had better remember it so as not to be caught off-guard again; third, thinking and evaluating in terms of extremes (especially 2 opposites) seems more intuitively obvious and accessible than a more complex scale system - so we have the 2-party system, team-vs-team games (why not 3 or more teams?), good vs bad, pain vs pleasure, and other extremes to describe rather complex multivariate underlying phenomena. For learning, then, important concepts should be presented in a memorable - surprising - way. And we must consciously uphold the mental discipline to avoid rash classification into extremes, while seriously questioning such classifications made by others.

We are capable of remembering situations as images - 3D settings rather than 2D projections (even situations in a comic book are "re-interpreted" as 3D scenes for memory). The 3D scenes contain simple geometric objects (perhaps vague textures also - recognition of textures when seen is much more robust than mental "generation" of textures). The scene memory also involves an idea of the identities of included objects, perhaps names and properties. Beyond this, from memory we can only generate a rough outline of the "picture we see", but the details we do recall are usually particularly useful/important in a physical sense, such as which objects are touching and which ones are larger/smaller. This scene memory is also present when looking at the world in real time, and our ability to identify objects comes from accessing this memory (it is created automatically from real-world inputs by visual processing circuits in the eyes/brain), yet if we find some feature that is particularly interesting while we still see the scene with our eyes, we can choose to concentrate on specific details. This latter, high-information aspect is unavailable once the visual stimulus is removed. So we have a conceptual memory, which remains without the stimulus and can be used for conscious analysis, and a "visual buffer" memory, which contains the whole field of view of our eyes at the present time and can be used to get close details but changes as soon as we look at something else - somewhat like CPU registers vs a video card buffer.
This is why re-watching movies or even static art can be rewarding: without the visual stimulus we cannot get the same qualia experience from memory (as a computer could - once it has a movie in memory, it can play the file back again and again); what we store after seeing a movie is only high-level plotlines, and on a second viewing we might well find there are details we missed; just because one's video/music collection takes up hundreds of GBs does not mean their brain's memory capacity is similar. We can simulate basic "ragdoll physics" but this requires concentrated effort - memories are static by default. We do not remember "video recordings" of events, but only specific fragments and a vague idea of transitions from one fragment to the next. One of my artistic attempts involved showing a static photograph with an associated sound recording taken at the time of the photograph - this single visual snapshot is enough to give a sense of a video-like progression when a sound snapshot is also present. "Video" can only be visualized by a full simulation effort - usually only an illusion of "video" occurs, which is actually a series of discrete jumps. Imagine catching a ball that was thrown towards you as it is getting closer and closer - this probably involves one or two static scenes - a ball far away, and a ball in your hands. An attempt to mentally trace and visualize the ball's path between these states, moment to moment, as if watching a slow-motion replay, will prove very difficult {you may notice that at some point there is a discrete "step" from the approaching-ball image to the holding-ball image}, although without an explicit focus you would have the sense of a realistic interactive scene with the ball flying towards you just from the static images - the ball has a "property" of flying but is not simulated by default.
This lack of "video" memory [f13] is why slow-motion replays are popular/enjoyable - if we could do this mentally, there would be no need for an explicit slowing down by machine. Video is also not remembered by default - due to the large memory required - but the skill of catching a ball works, and thus must rely on limited snapshots of the ball's trajectory: conceptual recognition of the ball and its location by conscious processes (accessing the conceptual memory), combined with automatic hand positioning based on visual inputs (accessing the much more physically accurate/information-rich visual buffer, or even a stage before it which is unconscious but very fast) guided by the conceptual findings. Our consciousness deals with concepts from highly filtered and abstractly interpreted memory, at a correspondingly slow rate of ~Hz, yet it can fluidly guide automatic/unconscious systems which use the full real-time sense input buffers at a more impressive ~kHz, because the interpreted memory also keeps pointers back to where in the sense buffers it came from (ie this thing that I have interpreted as a ball is in this area of the visual buffer, so the automatic/intuitive algorithm that will position my hands in the right spot to catch the ball is told to "look there! that's what you're tracking/catching!") - I experience this as being able to look at the world, feel the vivid sensations of color/shape/appearance, and at the same time tell that "what is represented by this part of my visual sensation is called a ball" - the two levels of memory are transparently overlaid.

A controller (human or PLC) needs the ability to impact a process that it wishes to control towards a desired outcome. For instance, maintaining a constant temperature (thermostat) or heading (bicycle) despite external perturbations. To describe the effectiveness of control, given that all the desired degrees of freedom can be independently controlled, one may consider how quickly the system responds to the controller's decision, a concept I would call responsiveness. Trying to stop a car by opening a window to increase air drag has low responsiveness, while using the brakes has high responsiveness. While the physics of the car stopping involves force magnitudes, responsiveness focuses solely on the time aspect - how quickly the car stops vs how quickly the controller wants it to stop. Quantitatively then we may define responsiveness = bandwidth of system / bandwidth of controller. A responsiveness of 1 (100%) indicates a 'temporal match' between controller and system - the system changes slowly enough for the controller to respond and correct it, and quickly enough that the controller cannot overcompensate. A responsiveness approaching 0 indicates a poorly controllable system - sluggish where the controller may induce instabilities by overcompensation (such as steering large ships). A responsiveness approaching infinity indicates a poorly controlled system - instabilities may set in that the controller cannot affect even though response to the controller is very fast from the controller's view (particle colliders, for instance). Learning (of control skills), where the controller determines a proper way to control inputs to achieve desired outputs, happens most effectively in a high responsiveness setup. In such a case, the controller gets rapid feedback on the effects of its actions and can adjust course in time. 
A responsiveness above 1 implies that sporadic outcomes can occur but the controller can still improve on average (this is the case in high-speed sports like baseball batting, dart throwing, driving). A responsiveness of 1 provides the opportunity for consistent skill development/optimization up to the capabilities of the controller (this is the case for self-paced activities like drawing/writing/puzzles, thoughts, leisurely walking). A responsiveness below 1 makes an environment ill-suited to learning, as it is unclear what actions cause what outcomes because of the long delay [f14] (this is the case in sports like bowling, most large-scale projects, and the vast majority of the educational system). Large-scale projects stand to benefit greatly from improved responsiveness - with a Gantt chart [f15] where items may be re-arranged in real time, project leaders can achieve significant efficiency gains by controlling a high-responsiveness simulated equivalent of the real low-responsiveness system. The Gantt chart in effect provides a temporal match between the human controller and the slow physical system/project. The educational system also stands to gain by increasing responsiveness for students - providing instant feedback on student performance ("instant" feedback, or unity responsiveness, requires the least mental effort for learning - only the present state of the system matters, not past or predicted future states). A learner will get the most out of a class by applying concepts in a way that attains immediate feedback, often through practice problems/exercises for which the answer is already known or expected. We control high-responsiveness systems either by relying on chance and loose tolerances (driving, sport) or by designing a mechanical controller that serves as a temporal match between a slow human controller and a fast physical system (computer-stabilized flight).
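The role of delay can be sketched numerically. In this toy simulation (all parameters invented for illustration), a proportional controller nudges a system toward a setpoint of zero, but only sees measurements that are several steps stale - with immediate feedback the system settles, while with delayed feedback the controller overcompensates and oscillations grow, the signature of a responsiveness far from 1:

```python
# Hypothetical proportional controller acting on stale measurements.
def run_controller(delay, steps=60, gain=0.5):
    state = 10.0                          # start far from the setpoint 0
    history = [state] * (delay + 1)
    for _ in range(steps):
        observed = history[-(delay + 1)]  # measurement `delay` steps old
        state -= gain * observed          # proportional correction
        history.append(state)
    return max(abs(x) for x in history[-10:])  # recent amplitude

print(run_controller(delay=0))   # tiny: the system has settled
print(run_controller(delay=8))   # large: overcompensation-driven growth
```

The same gain that settles the undelayed system destabilizes the delayed one - the controller keeps correcting an error that its own earlier corrections have already removed.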

Secure communication over an insecure channel - this is the ideal to which encryption technologies aspire. Is it even possible? If an adversary can manipulate the channel, there isn't truly a way to know whether the communication is really from/to the desired recipient - even with a cryptographic handshake, perhaps the adversary knows all the keys and can fully impersonate the recipient, making the "insecure channel" tautological for our purposes. So here I will assume an eavesdropping adversary only, and assume (as we all do) that if the adversary can manipulate the channel, they will not have knowledge of the necessary keys (not even from their eavesdropping), and thus any manipulation will almost certainly be detected and ignored. At the minimum, the sender and recipient know each other's name/contact info (or at least one of them knows the other's) - otherwise no communication can take place. But any adversary will also know these names, as they are public knowledge. If the only thing I know about the recipient is their name, ie I have no expectation of what will be sent back or whether it will even be comprehensible, then an adversary can substitute anything they want and I would never be able to tell. And yet such an "unknown" state/full entropy (I have no idea what info will be transmitted/received next) is characteristic of the most efficient/highest-bandwidth communication. Any part of the message which I know and expect may as well not be transmitted, but such a situation makes it impossible to apply any protection to the information transferred. There is another concern here - human comprehensibility. For communication to take place, whatever the letter/information is, it must eventually take on a human-readable and comprehensible/legible form - with words that are known, arranged in logical phrases and sentences that form a coherent meaning.
[f16] Thus we must refer to already known concepts, greatly reducing the actual entropic possibilities of a high-bandwidth channel. A book could contain any of infinitely many combinations of symbols and drawings, but we expect words in paragraphs and sentences which all make sense - this expectation must be met, but it carries no communication-relevant information other than to confirm that this is indeed a valid book. Similarly, secure communication cannot use all available bandwidth for pure information transfer, as this could never be verified or made coherent use of; instead some bandwidth is spent on information that both sender and receiver can expect/predict, and this information provides assurance that the communication is taking place as intended. Encryption makes it difficult for an adversary to modify the pure information content by mixing the entropic and non-entropic content, such that a naive modification will be detectable by its effects on the non-entropic (known/expected) content. Encryption thus requires privately shared knowledge, ie things known/expected by both sender and receiver, but not known or easily knowable to any adversary. As I mentioned earlier, ultimately a letter/message must appeal to previously known concepts in order to constitute communication; the words and sentences must be legible and make sense, and this shared knowledge of the language constitutes a signature of human communication - if I receive a valid letter/message, I know it was written by another language-knowing being. But then how can any learning take place? To a baby, a properly written book is as cryptic as the most difficult cipher, because the letters and words do not evoke any ultimate meaning.
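This mixing of expected and unexpected content is essentially what a message authentication code does, and a minimal sketch can be given with Python's standard hmac module (the key and messages here are made up):

```python
import hmac, hashlib

KEY = b"privately-shared-knowledge"   # known to sender and receiver only

def seal(message: bytes) -> bytes:
    # append a tag both sides can predict (given the key) to the
    # unpredictable payload
    tag = hmac.new(KEY, message, hashlib.sha256).digest()
    return message + tag

def unseal(packet: bytes) -> bytes:
    message, tag = packet[:-32], packet[-32:]
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("expected content violated - tampering detected")
    return message

packet = seal(b"meet at noon")
print(unseal(packet))                      # the payload comes through
tampered = b"meet at dusk" + packet[-32:]  # naive modification
# unseal(tampered) raises ValueError: without the key, a forger cannot
# produce the content the receiver expects
```

The tag is the non-entropic part: the receiver can predict exactly what it should be, and an eavesdropper without the privately shared key cannot forge it.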

At some point a baby must learn its first word, not even knowing what words are, much less what they mean. I believe such learning takes place via powerful coincidence-finding algorithms that are hard-wired in the brain, along with other hard-wired communication features (such as the use of silence to separate words, and the use of relatively short words - neither strictly necessary, but "natural" to us because of hard-wired speech-processing algorithms). If I've never heard the word "rocket", but someone puts down a photo and says "rocket", I can be quite sure they are referring to the rocket in the photo, just because it stands out. When they put down a second photograph and say "rocket", I can be quite confident I have found what this "rocket" looks like - even though there are countless other objects/people/features in the photos, and even though "rocket" may refer to the color of the sky, or to the rectangular shape of the photograph, or to the emotional state of the person showing me the photos, or to any number of other things. Such rapid learning is based on intuition - since the photographs were human-taken, they hold the rocket in a prominent/focused/optically contrasting/emphasized position, so when I see the photos I can tell that one object is emphasized over the others (this is probably common hard-wiring in humans, similar to face recognition - ie if I frame something as a 'subject', others will also recognize it as a subject, since we share the same hardware, a deeper level than the conscious learning which is built up as a result of that hardware). I can assume the object is what "rocket" refers to - though at some level I remember all the associations and then discard most of them as inconsistent when I see the second photo, leaving me with the highest confidence in the mental construct that "rocket" means the thing that is in both photos.
Thus learning requires multiple stimuli (in the case of complex stimuli they must be close temporally, so as not to fade from short-term memory) for the learner to make sense of presented information - not in the sense of sight and hearing as separate channels (though such intuitive separation is doubtless a key feature of our actual learning, since sight and sound are often correlated; strictly, there is no need to consider them separate senses any more than different audible frequencies could be considered different senses - it is still possible to learn even if only one human sense is available, for example through the time coincidence of two audible tones, or through pattern-finding in tones played over time, the simplest to us being finding vocal sounds punctuated by silence while mechanical noise is continuous), but rather in the sense of already-known and unknown data, where the already-known is used to make sense of and understand the unknown, to place it in context. This is obvious in the traditional, "dictionary-reading" based learning: a rocket is a long pointy metal object that flies vertically up. If I've never heard of a rocket before but know all the other words, reading this definition brings to my mind images/thoughts/percepts of "long", "pointy", "metal", "flying", "vertical", and just about every other word (except for the grammar structure, which is the non-entropic component and confirms that I am reading a valid phrase). Thus I build up a meaning of "rocket" out of a new association of previously known words. [f17] Yet such learning is rather dry and limited - the stimuli don't have to be so boring and obvious.
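The intersect-and-discard judgement described above can be caricatured in a few lines - a crude sketch of cross-situational word learning, with invented scenes and feature names:

```python
# Each exposure pairs a heard word with every salient feature of the
# scene; intersecting candidate meanings across exposures discards the
# inconsistent associations, leaving the one that always co-occurs.
def learn(exposures):
    candidates = {}
    for word, scene in exposures:
        if word in candidates:
            candidates[word] &= scene   # keep only consistent associations
        else:
            candidates[word] = set(scene)
    return candidates

photos = [
    ("rocket", {"rocket", "blue sky", "launch pad", "smiling person"}),
    ("rocket", {"rocket", "night sky", "ocean", "crowd"}),
]
print(learn(photos)["rocket"])   # {'rocket'}
```

Real learning must also weigh salience and tolerate noise (a feature missed in one scene should not erase the hypothesis), but the skeleton - remember all associations, discard the inconsistent ones - is the same.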

Consider that one of the "already known" stimuli in the above definition was the notion of a definition, as well as of a sentence, of words and letters, and of general language expectations, such as reading left to right and top to bottom. None of this is learned by a definition; rather it is intuited and confirmed to "certainty" throughout our life experience - a long and difficult process where the brain works 24/7 for many years to find patterns and make sense of what it is presented. And what the learning brain chooses to focus on is a uniquely human, hard-wired trait that *is* the "pre-known" data in all humans, even newborns. This is, in a vague sense, the way that people process the world and the types of patterns they can find and remember (if you don't expect a pattern, you cannot find it). With these "hardware" capabilities, a mother can tell that a child is focused on a toy/object and name/describe it to the child, in human-created (and thus comprehensible) language, allowing coincidence-based learning to take place. Even the notion of focusing on an object is uniquely human "pre-known" data that is essential in human-human communication, something arising naturally in all cultures because it is part of what humans *are* and how they work. Coincidence-based learning itself uses the "pre-known" key of coincidence as a signal for learning to take place - really an arbitrary assumption, but one that happens to be effective in our world. This allows the brain to focus on the most relevant data, quickly discarding the plethora of other incoming data as noise if none of the "hardware" or "software" signatures of "pre-known" data markers are found (ie some pattern in time, a loud/startling sound that gets us to pay attention (hard-wired), ignoring a gradient background, letters and words that make sense).
The constant hum of the humidifier in the background is felt as noise - I'm aware of it only as an indication that the humidifier is still on, even though the sound itself carries far more information about the voltage level on the grid, the amount of water in the tank, the cleanliness of the boiling surface - I don't have the right hardware to discern these patterns. Yet if the hum suddenly stopped or changed the nature of its frequency distribution, I would quickly become alert - this *is* within my hardware's ability of pattern detection. And all *this* constitutes "pre-known" data used in communication and learning, in a basic sense.

Our brain is structured to make sense of the world using symbols like words and phrases (children will readily repeat phrases they don’t really understand, just because they see an adult saying a phrase and associate it with a given reaction/feeling), making arbitrary networks and associations. For instance, a human body and an animal head – unphysical, but easily accepted as a mental construct. This works in a hierarchical manner, so we find it convenient to use folders in a computer file storage system, or different orders of approximation in science theories – ie the ‘tree’ is a part of our brain functioning. The arbitrary associations also apply to the understanding of cause and effect: Jesus died for our sins (why does Jesus dying have anything to do with our sins?), and similarly with threats like "I will do this if you do that". From this I would argue for a relational model of human thought - our thoughts are composed of relations between different concepts.

Symbols - abstract representations of actual physical objects - are powerful concepts in human knowledge, and are key in our society's establishment and functioning. What gives symbols this special role? The key is in the compactness of numbers. For instance, I could write down the total number of combinations of a computer screen's pixels, even if it would take quite a bit of space. This number could also be seen as a single instance of a screenshot - with all numbers under it representing other instances. In the same way, I can write down the number of atoms in the universe, if only approximately. So, using a tiny subset of atoms, I can represent the total number of atoms, because I am taking advantage of a vastly larger space of atom combinations; just as the number of pixels is smaller than the number of pixel combinations, so I can quite easily write the product width*height in digits on any given monitor. The number of combinations, on the other hand, can get arbitrarily large depending on what is considered a combination (with the monitor, I arbitrarily chose a single screen of pixels as a combination), and cannot be expressed within the system, as the system is defined by the combination that it actually takes on, thus by definition only being able to take on one state at a time (the buffer on my video card that gets used to generate the screen image is precisely large enough to store a number representing how many possible screen combinations there are). Because a system is defined by an arrangement of objects, any particular object can be easily represented by a small subset of the available arrangements - by an arrangement of a limited subset of other objects. Thus my writing "chair" with the atoms on this piece of paper is an arrangement which refers to an actual physical chair far away. If I had to describe the chair fully - its arrangement of atoms - I would have to use the chair itself, or a vastly larger set of symbols in the rest of the world.
So the use of symbols requires discarding most of the information about a described object. "Chair" is generic, arbitrary, by itself meaningless. Supplemented with detailed construction manuals and the rules required to follow them (and the rules to understand *those* rules, and so on to the "pre-knowledge" of above), we get closer to an actual physical "chair", though still unimaginably far from the actual atomic-scale representation of the physical chair. This is key - the symbol itself is meaningless without a system/organism to consistently make use of that symbol. "Chair" by itself means nothing - it is yet another arrangement of atoms. But coupled with a human's consciousness, it can be used to affect behavior and allow for communication - altering another human's behavior. [f18] A computer can similarly process information, for instance classifying whether a picture contains a chair or not. Such knowledge, of the coincidence between object and symbol - or more precisely, between sensory input to a system and symbol - constitutes the "ultimate" shared knowledge in communication and delineates meaningful from meaningless statements - a signature of conscious processing, as alluded to above. The connections are arbitrary and must be learned through countless coincidence and inference judgements, made during childhood and even during sleep. The arbitrariness is interesting - a symbol cannot exist without an information-processing being to make use of it, and the arbitrariness means the symbol can come to represent whatever it is meant to describe without imparting any inherent qualities/qualia of its own. Perhaps, then, any being that can learn/use symbols must be a conscious one - acting out real physical behavior based on unrelated, abstract symbols: coupling an arbitrary set of symbols to physical action, a sort of entanglement between sender and receiver. Might conscious qualia come from the conversion of a specific sensory input into a non-specific abstract form (or vice versa)?
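The parenthetical claim about the video buffer can be checked with a bit of arithmetic (the 1920x1080 screen at 24 bits per pixel is an assumed example): a buffer of N bits supports 2^N distinct images, and the count 2^N itself takes N+1 bits to write down - so the buffer is just barely large enough to hold a number counting its own possible states, while the decimal expansion of that count would not even fit on the screen it describes:

```python
import math

# assumed example: a 1920x1080 screen at 24 bits per pixel
bits = 1920 * 1080 * 24            # frame buffer size in bits
combos = 2 ** bits                 # number of distinct screen images

print(combos.bit_length())         # bits + 1: the count of all states
                                   # just barely exceeds the buffer itself
print(round(bits * math.log10(2))) # ~15 million decimal digits, vs only
print(1920 * 1080)                 # ~2 million pixels to draw them on
```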

We see the world not in its full complexity but only as well-processed symbols, so in a real sense we only see what we can comprehend. So there are things which matter to our perception, and things that don't. For instance, different fonts convey the same meaning although they may barely look alike in physical terms. Now, as I'm learning Chinese writing, I don't know which things/strokes matter, and can often make mistakes obvious to a native speaker but difficult for me to see - same with pronunciation: at first I would recast their spoken words into an English representation, losing tones and some sounds, and only with extensive practice/feedback/immersion am I able to start processing the sounds as they "are" and to make such sounds myself; I am now more or less able to discern different tones due to practice and being taught to pay attention to them. So we learn what to see as important and what to ignore, based on our upbringing - I grew up learning one language, and to learn another I have to be once again taught which things to pay attention to (and which things don't matter, even though based on my native language I would think they do). So a repairman might see a problem where a layperson thinks everything is fine. What led me to this view is that in cartoons and drawings all sorts of different shapes of characters and faces are used, and yet we automatically know what's going on. Consider the case of glasses: taking pixel-level data that shows 'glasses' in different cartoon drawings will show barely any similarities, yet I instantly recognize them as glasses and find where they are located on a character's face/head. There must be some complex topology/visual-processing rules that map the full visual complexity into the single idea of 'glasses'. [f19] Are the following glasses? What features matter? The 'hooks'? The line between circles? The lengths of lines?

[Glasses figure]

I wrote earlier on how some words seem to serve as an adequate explanation to a "why" question. I mentioned that a satisfying stopping point is one of random/uncontrolled/unpredicted events: "Why do you like sports? Because I played a lot when I was a kid. Why did you play when you were a kid? That's just what kids do!". Recently I noticed that words can also serve as an explanatory stopping point. There is "because god" or "because I said so" or "because we were in the right place at the right time". But there are also labels: "because I am depressed" or "because I am disabled" or "because I have OCD". Such labels summarize and generalize, but if not used in a rigorous manner they can mean different things to different people (like "right" and "appropriate"). Even if used in a rigorous and consistent way, they perversely have *no explanatory* power, but only *associative* power - associating to other words in a "thought cloud". So "I stayed in bed because I was depressed" glosses over tremendous detail - the person's upbringing, character, mood, diet, other interests/activities, family relations - conveying basically none of it. Just like saying "that chair" above gives practically no details on what the chair looks like or is made out of - just a label. And "because I was depressed" is no explanation when the real cause is myriad physical/chemical reactions we can't hope to track in that person's brain. [f20] We choose to stop there, for convenience. Why are you depressed? How did your last meal contribute to this? These and any other deeper questions are more or less discounted, ignored. Physics/science is another such word, of course, but at least its claims are testable/verifiable experimentally - perhaps the only empirical truth we have. But still it leads to blunders in the energy field, like "green energy". Putting piezoelectric generators under sidewalks to get energy from people walking - sounds good! Why?
Because "energy" - and seemingly "free energy" - is enough to stop thinking further, even for wholly competent adults (can't really blame them, as I still have no idea what energy is). Yet looking just a smidge deeper, it is clear that this energy comes from human digestion - possibly one of the most expensive and inefficient energy sources one could find. Just hook it up to the electric grid for fuck's sake. Same with solar panels on roads - "solar panels" is a good word in its own right, so apparently people don't see the craziness of replacing cheap-as-shit rocks with expensive-as-shit fragile semiconductors that will mostly be covered by cars during the day. An even bigger-scale fundamental blunder is with all of "renewable energy", for there is no such thing! Whatever energy we extract, from wind or from solar or from tides or from salinity gradients, is energy that otherwise would have been used towards some natural process - perhaps one useless to us (like erosion or material damage), but then again perhaps not. We can *never* get free energy without altering our environment, without sapping it from some other system. That big picture of interconnectedness is what we miss by stopping at words like "renewable energy" (and the flimsy argument of "it's just a little tiny bit" = "it's nothing" is a thought-stopping concept in its own right).

Systems which are set up "precariously", in an entropically unlikely state (too perfect/clean/sterile/symmetric), will be susceptible to perturbations that create some new addition in the system and break the symmetry. Such susceptibility increases the more "pure" a system is, so ideal or perfect things remain a mathematical abstraction, at least as long as any (thermal) perturbations are present. So, vacancies form in "pure" crystals, and we cannot have perfect crystals. So, in a utopia without crime, criminals will start appearing as the benefits are great and punishment is non-existent. So, a baby raised in a sterile room will get all sorts of illnesses when exposed to the outside environment. So, a supercooled liquid will spontaneously solidify when a good enough nucleation site is found. So, a conscious experience will arise when a brain's connections are set up to support such a pattern and a perturbation of a sensory qualia comes along. This is in line with [Dehaene's] phase-change argument for defining consciousness, as well as [IIT]. Consider what this means: when the brain is set up in a given way, poised for an avalanche discharge as it were, like a photomultiplier but in a loop, an outside perturbation will induce a consciousness, which will quickly access the brain's memory and realize a sense of self, giving me the illusion that the "I" of now is the same person as the "I" of a few months before, even if my conscious experience disappears/reappears along the way. Multiple transistors set up in a loop are undefined and can "collapse" to one configuration or another; perhaps such an arrangement can house a type of "extremely large quantum particle", or perhaps it can be fully explained classically, but nonetheless the ensuing evolution would seem strongly to lead to conscious experience.
Consider the ZVS oscillator circuit, which has such an arrangement: two transistors, each connected to the other's base, so the initial state is strictly undefined when power is applied. Yet soon enough an oscillation is established, as one pattern of spontaneous oscillation is amplified and its opposite is suppressed by the amplification itself. So perhaps here we have the simplest conscious experience - an optimization of energy dissipation under the constraints I imposed on this system by wiring it so that both transistors can't simply be on at the same time; each time the potential shape shifts from one transistor to another, there is a 'phase transition' in qualia space, giving rise to a felt qualia. What it feels like I have no idea, though I can imagine this qualia has no concept of space or time or self, much less the rest of the world, but is a pure abstracted feeling of change/oscillation.
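The symmetry-breaking at power-up can be sketched numerically. Below is a toy model, not a real circuit simulation: two cross-coupled stages, each driven on only to the extent that the other is off, with a high-gain transfer curve standing in for transistor amplification; all names and constants (gain, noise level, time step) are illustrative assumptions.

```python
import math
import random

def sigmoid(x, gain=8.0):
    # high-gain transfer curve standing in for transistor amplification
    return 1.0 / (1.0 + math.exp(-gain * (x - 0.5)))

def power_up(steps=5000, dt=0.01, noise=1e-4, seed=None):
    """Toy model of two cross-coupled stages: each stage is driven on
    only to the extent the other stage is off. Not a SPICE simulation."""
    rng = random.Random(seed)
    a = b = 0.5  # the "strictly undefined" start: perfectly symmetric
    for _ in range(steps):
        da = sigmoid(1.0 - b) - a + rng.gauss(0.0, noise)
        db = sigmoid(1.0 - a) - b + rng.gauss(0.0, noise)
        a += da * dt
        b += db * dt
    return a, b

a, b = power_up(seed=1)
# the symmetric state is unstable: thermal noise picks a winner,
# and one stage ends up on while the other is suppressed
print(round(a, 2), round(b, 2))
```

This captures only the collapse of the undefined initial state; the full ZVS circuit adds a resonant tank so the "winner" keeps alternating, producing a sustained oscillation rather than a latch.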

As mentioned earlier, one cannot find a pattern unless one knows to look for it - a sort of circularity, where even though the pattern itself can be arbitrary, we must have a set of rules/algorithms for finding its presence in the (arbitrary) data. I believe that the brain has unconscious pattern-finding abilities that are well suited to solving survival problems, and these abilities are also used to learn language, solve math problems, make scientific discoveries, and make sense of the world as in philosophy. These processes look for common elements, but obviously there must be more complex abilities, perhaps even learned ones, as seen by mathematical "insights" reported after waking up or after a long refractory "unconscious work" period. During sleep, the brain replays the day's events and carries out such processes to find patterns - dreams may give some indication of the methods used to do this (though they should not be taken as the "whole story" because they also alter short-term memory and our time perception/reasoning ability, so introspection becomes shaky). Then we find that, with the powerful pattern-finding machinery of the brain, consciousness is largely an artificial construct of short-term memory access - we are not 'continuous' beings but effectively "die" and "return" every night or under anesthesia (or maybe even every few seconds as 'new' thoughts are created), when conscious operation shuts down. Our brain's memory gives the morning's new conscious emergence a working state of mind that creates the illusion of continuity and a constant sense of self - even when passing out unexpectedly we may lose continuity but not the sense of self - when waking up I still feel like "myself" from earlier, with the same wants and preferences and physical abilities. But this means our qualia are also illusions of the moment - not permanent, nor carrying a greater meaning.
In essence my life might as well end; it won't make a difference to "me" because there won't be a "me" when it ends. The world may go on as before but I don't care - I'll be dead! So feelings of fear, guilt, hesitation are all illusions that don't matter to my survival in this world. From this point of view, because life involves suffering and has no overarching absolute goal or meaning, the most rational step is to commit suicide as early as practical. At any time the infrastructure may crumble, the water and oil may run out, the food may not grow - and then people will become vicious and I will find myself trapped in a cage with lions - except there is no cage and the lions are free. Why live in fear of catastrophe, making preparations for survival, if survival doesn't matter - when there is no light at the end of the tunnel, because the light and the tunnel are our misguided illusions about "us" being actual beings and not just elaborate patterns in a brain circuit? Do we assign meaning to the lives of bacteria and fish and trees? For these are also living things, chemical reactions preceding us, established by entropic principles (just like defects in crystals - no pure crystals, no pure earth) with the excess of entropy/energy available from the sun. We must live out our lives to increase entropy; that is the closest thing we have to a purpose. There is no need for "I".

[f1] the presence of monsters in stories is scary as it is a clear harking back to the baby's interaction with a wholly powerful parental figure, and implies the baby often lives in primal fear of the big "monsters" around it

[f2] The brain's automatic filter of [matching previous beliefs=correct] makes it hard to convince people to change their beliefs; this process leads to strong mental discomfort and anger/stress/frustration before re-learning can take place, a true cognitive dissonance.

[f3] some recent discoveries of killer whale hunting behaviors show that behaviors can be learned and then spread in a group of mammals, i.e. it is not all "genetic programming" (and I would argue, there is very little genetic programming even possible - the programming is of the brain structure; the learning and interactions with the real world transmit the bulk of the brain configuration information)

[f4] the epic tale is repeated in form across cultures+time and is certainly still active; buildup and resolution also happen in non-story elements like songs and dance and firework shows, and of course sex. Perhaps it is a reflection on the brain's deepest programming to stalk/hunt (build appetite)/devour

[f5] to be complete, the absolute amount of information in the algorithm + compressed data is compared. A simple algorithm + easily compressed data is low information content, a complex algorithm + minimally compressed data is high information content. Once a physically relevant way to specify algorithms is given, the resulting information content should be constant regardless of algorithm choice.
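This trade-off can be crudely illustrated with an off-the-shelf compressor standing in for the (uncomputable) ideal one: data whose pattern the algorithm captures compresses to almost nothing, while patternless data must be carried at nearly full cost. The exact byte counts are whatever zlib happens to produce, not fundamental quantities.

```python
import os
import zlib

# zlib as a crude stand-in for an ideal compressor: the "algorithm" part
# absorbs the pattern, so patterned data needs few bytes of "data" part,
# while random bytes remain close to their raw size.
patterned = b"abcd" * 2500        # 10,000 bytes with a trivial pattern
random_ish = os.urandom(10_000)   # 10,000 bytes with (almost) no pattern

print(len(zlib.compress(patterned, 9)))   # tiny compared to 10,000
print(len(zlib.compress(random_ish, 9)))  # close to 10,000
```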

[f6] If you don't know a pattern exists you cannot look for it! Computers can afford to look for any realistic pattern (to a practical extent) and find it - this is the real advantage held by companies with AI/machine learning and serious 'big data'.

[f7] to be sure, it is not really a slot as that implies each memory is independent, but a subset of nodes in an interconnected network. These nodes could be shared between many different concepts/memories, and could be gradually overwritten as new memories are formed

[f8] as in [Kahneman], the reported unpleasantness of a painful stimulus wasn't based on the mathematical integral over time; rather, the peak intensity of the stimulus was strongly taken into account

[f9] though the two cannot be separated - the type of memory structures dictates the type of processing that can be done on it, and vice versa. The same applies in physics theories/models - the structures that models use to describe the world are interlinked with the types of computations/predictions the models are capable of

[f10] just because we can only keep a few digits in mind this does not mean our memory is only a few bytes. More directly, we can only keep a few concepts in mind - and digits are particularly limited and boring concepts so they carry relatively little information; we could remember a lot more byte-level information by referring to richer concepts

[f11] this lack of feedback sets up significant cognitive strain (which the students are told by peer pressure to ignore and eventually accept) - of the same sort as anticipation of election outcomes, a seeking of explanations/order/logic - ironically this most valuable (to the schools' purported goals) personal-level thirst for understanding is thus destroyed by the schools

[f12] and in this way, the US system of judging teachers' performance by how well their students score on standardized tests, just doesn't make sense, as it puts the priorities on all the wrong things - tests, lack of individuality/creativity, and expectation for the teacher to "manufacture learning" rather than for the student to learn. I am not claiming this is bad in an absolute sense (there is no good or bad in an absolute sense) - but that it is not in line with what is openly claimed as the outcomes/goals for such a system. This is emotionally hard-hitting for me because I always took words at face value, but really such disconnect between claimed and actual purposes and effects shouldn't be surprising - as alluded earlier, words are cheap and derive their meanings from their effects on people, not from what's in the dictionary. And the disconnect need not be conscious or purposeful, just stable

[f13] or even photographic memory - photographs become reduced to common elements, color/lighting corrected, and "hashed" for robust recognition
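The "hashing" idea can be made concrete with the simplest perceptual hash, the average hash: downscale an image, threshold each pixel against the mean brightness, and keep the resulting bit string. The 2x2 "image" below is an illustrative toy, not a claim about how the brain does it, but it shows why a global lighting shift leaves the hash untouched - only relative brightness survives.

```python
def average_hash(pixels):
    # threshold each pixel against the mean brightness; the bit string
    # encodes only relative brightness, a "common elements" reduction
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

img = [[10, 200], [220, 30]]
brighter = [[p + 20 for p in row] for row in img]   # global lighting shift
print(average_hash(img))                            # "0110"
print(average_hash(img) == average_hash(brighter))  # True: hash unchanged
```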

[f14] similarly, systems with too-high responsiveness behave effectively randomly, like the "electronic dice", which is just a fast counter that stops when one presses a button
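A sketch of why such a counter behaves as fair dice: it cycles through the six faces millions of times faster than human timing precision, so the press instant (modeled here, as an assumption, by a uniformly random stop tick) selects an effectively uniform face even though the counter itself is fully deterministic.

```python
import collections
import random

def roll(stop_tick):
    # the "electronic dice": a free-running counter mod 6, frozen at press time
    return (stop_tick % 6) + 1

# model the unpredictable human press as a random tick count
presses = [random.randrange(10_000_000) for _ in range(60_000)]
counts = collections.Counter(roll(t) for t in presses)
print(counts)  # roughly 10,000 per face
```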

[f15] more generally, formulas and experience (look-up tables) are used by trained operators such as in ship steering and artillery positioning, to serve as the temporal match

[f16] True secure encryption will make this last step into an algorithm that takes place inside the recipient's mind - this way we avoid the trope of using 2048-bit keys and the latest hardware protections just to have an adversary read the secret message by looking over the recipient's shoulder - this is also why old-school spying will exist as long as society does, as ultimately it is humans who must act on the secret message

[f17] that I can do this, and that I am content with such a definition, is a very interesting reflection on memory structure. For example, to make a new file on a computer, I would have to supply all the file information. But to learn a new word, all I need is other words that "describe it", or that "are it" in mental space. All the "files" in our brains are actually just links to other "files" - in fact there are no "files", just links. The types and shapes of networks formed by these links, and which parts of them become activated and how they interact, is what gives rise to different concepts and sensations/qualia. The computer CPU operates on bits in memory, and surely there are structures in the brain which can also store/manipulate bit-level information over time, but what we experience is not the bit-level data but the links between the storage 'boxes', as guided by the brain's wiring and with activation affected by brain chemistry. What we experience are transitions in data, the dynamic information couplings within the brain. This continues to be in line with my interpretation of conscious experience as field/potential shapes that cause dissipation along an 'optimal' route.

[f18] in this sense, communication (using mutually understood symbols) can be said to create a larger conscious system - setting up dissipation-optimizing networks that have capacities far beyond the individual human's

[f19] computers are catching up now, with object recognition ability, and the way they are programmed to do this may give interesting insight into how our brains do it

[f20] yet, information theory tells us that any information output must be some function of information input into a closed system, so even though the brain may be a "black box" we can still analyze it from outside and make models of its actions. This is a basic principle used in all of science.
