Intro to Extralogical Reasoning Part 3: Understanding Self-Ignorance: A Primer for Understanding Yourself and a World You Weren't Designed to Comprehend
A sentient being is a specimen that can contemplate its existence and plan for manifold futures. Humans easily qualify as sentient and, therefore, are called self-aware. The common, and rather questionable, implication of awareness here is UNDERSTANDING. In part 2, I showed that knowing and understanding are by no means necessarily the same. To go from awareness to understanding is a rather big jump. But given the nature of evolution, it’s easy to infer that self-awareness, or ANY form of awareness, couldn’t arise in such a process without a species being preprogrammed to vastly overestimate it; and if that species were designed for any degree of self or general UNDERSTANDING at all, it would only be in the very narrow sense as pertains to SURVIVAL in their evolutionary environment--and ONLY in their evolutionary environment.
The feeling or belief in understanding and rationality is often as important as the real thing, both at the group and individual levels. Confusion and uncertainty, especially in the wild, are almost as harmful as understanding is beneficial. It's been my experience that in life, it's usually better to have a solution, outlook, strategy, etc. that's merely good enough but that you're comfortable with and feel you understand than one that's markedly better in theory but lacks those two qualities. Evolution is not unaware of this, and simplifying models and perceptions of reality to these ends fits well with a process that follows the path of least resistance like natural selection. The inclination to rationalize, especially after the fact, is powerful enough that it exists even in the absence of morality and apparent social objectives. As extensive research shows, rationalization is actually more of a COGNITIVE phenomenon than an emotional one.
While humans can theorize about their evolutionary environment, they can barely speculate about actual reality. The human sensorium selectively responds to phenomena in the physical Universe that are processed by the subconscious, then filtered by other portions of the subconscious to create a perceived environment that's good enough for the organ and its appended body to survive. It is only here that the conscious mind, a mere ten percent of the thinking organ and a much smaller percentage of the overall body, begins the process of attempting to understand itself.
Unlike pure logic machines, sentient thinking organs can “cheat” to compensate for the constant paucity of information (and intelligence) by thinking holistically and engaging in various forms of qualitative reasoning. This allows sentient thinking organs to ascertain right answers that logic machines can't, but in gaining this capacity, they necessarily gain the concomitant capacity to be WRONG. While PROGRAMMERS of logic machines can make incorrect assumptions that lead to wrong answers from the programmers' point of view, PURE logic machines themselves (that is, those without appended referential libraries) don't even know there's SUCH A THING as wrongness, only non-answers (i.e. non-computable inputs).
Now, not only can sentient thinking organs be wrong by accident; since they have a subconscious, they can be wrong ON PURPOSE--that is to say, they can cheat to delude themselves, as well. And you can't get around this simply by adding more raw intelligence. The more powerful the "understanding machinery," the MORE powerful its inextricably entangled delusionary counterpart: All else being close to equal, the smarter a specimen, the better they are at apprehending truths, but also the better they are at creating packages of beliefs to protect themselves from truths they'd prefer to be oblivious to.
As mentioned in part one, ninety percent of the human thinking organ is primitive, unconscious, and self-inconsistent; to some extent, you can think of it as the combination of the thinking organs of humanity's many primitive ancestors. Just as the understanding and delusionary machinery are inextricably entangled, so too are the logical and the emotional/primitive. Evolution doesn't integrate; it assimilates. As discussed by Jonathan Haidt in The Righteous Mind, it only adds features to enhance or complement what's already there. Thinking, at least to some extent, depends on the involvement of the emotions, along with all their pitfalls. The lack of certain emotions is believed to be the cause of the poor judgment observed in clinical narcissists, sociopaths, and psychopaths--conditions that have no correlation with IQ. At the same time, even slightly mismanaged emotions can obviously lead to their own set of problems.
Finally, the thinking organ almost never, if ever, has access to a viable means of truly proving something; it's limited to a SIMULATION of it: CONVINCING itself and other flawed thinking organs. Jonathan Haidt and others have shown that contrary to popular belief, people aren't designed to make decisions rationally so much as to CONVINCE themselves they make decisions rationally. Proving to yourself you understand something is usually a lot easier than convincing others, which, in turn, is usually easier than ACTUALLY proving it (though there are many exceptions, like the near-impossibility of convincing humans they're horribly wrong about their understanding of themselves). Thus, the convincing is always done by something with a preferred conclusion, placing the thinking organ in an inescapable conflict of interest.
Disregarding the many impurities and ambiguities of "natural intuition," many falsely believe it is something akin to the common thinking machinery used to UNDERSTAND THEIR CURRENT ENVIRONMENT. ER asserts that natural intuition is closer to the common thinking machinery used to MAKE SENSE OF A MUCH SIMPLER ENVIRONMENT FOR SURVIVAL (see the works of Duncan Watts). This necessitates a puzzle of life (POL): a thinking scheme designed to compensate for the disadvantages of the common thinking machinery. ER is my solution to the POL.
This post, the concluding portion of the intro to extralogical reasoning, will elaborate on the weaknesses and limitations of the human thinking organ and its many difficulties in understanding its present and highly complex non-evolutionary environment. But in doing so, it will show that even in a thinking organ so destined for ignorance, there is hope for understanding—that is, in UNDERSTANDING one’s ignorance.
As shown by a wealth of the author's experience and by scientific research:
People are overly prone to project their qualities onto others; they tend to over-rely on prior experience to understand similar circumstances; they too readily think their reality is ALL reality; they don't actively distinguish between WHAT they observe and how they INTERPRET what they observe, meaning they blur the two together; they have preferred beliefs that support their motivation, ego, self-esteem, and other beliefs; they are predisposed to apply their beliefs in order to reinforce them; they're notorious for applying one set of motives and thinking to their decision-making processes, then unwittingly applying a completely different set to explain them; they're essentially programmed to reflexively commit certain fallacies, such as the converse, inverse, and base-rate fallacies; their memories of events tend to center on how they interpret them (both before AND afterwards) rather than what actually happened; their psyches are more social and emotional than intellectual; they hate randomness and uncertainty and are inclined to deny their existence whenever they feasibly can; and there's a long list of identified universal biases--e.g., the confirmation bias, the availability heuristic bias, and the survivor's bias.
ER defines confusion as Dissonance in thought space—a math-like space that comprises all there is to think about (thought space comes from Bishko's life engineering). Resonance, the opposite, results from thinking organs reflexively simplifying reality by twisting related events into causational relationships, helping fit reality into a "harmonious narrative." Consequently, mammals see reality through an artificial lens of cause and effect, leading to causational fallacies and the tendency to blur observations and interpretations. I call this "lens" artificial causality. It and artificial rationality are the primary components of Resonance.
While most of Resonance is (/would be) attributable to traits left over from pre-sentient ancestors, as opposed to specifically selected for in proto-sentients, the thinking organ remains unerringly designed to turn "the what," "the how," "the related," and "the might have been" into "WHY." Ultimately, this allows social animals to create beliefs and models that avoid confusion, maximize their motivation, and conform to their pack’s mentality. This may not sound terribly dignified or philosophically sound, but ultimately, evolution only cares about whether specimens pass on their genes.
Through the mid-twentieth century, scientists believed that the flaws in human cognition were restricted to the need for animals to "act and react" or "fight or flee" while avoiding confusion and distractions in dangerous environments. Such programming does exist, and still has applicability in today's world; but this was only part of the picture. In the 1970s, Daniel Kahneman and Amos Tversky identified thinking and decision-making mechanisms more related to the newer (conscious and unconscious) portions of the thinking organ, including those involved in analyzing statistics. These mechanisms became known as heuristics and cognitive biases: innate rules of thumb and the biases that often result from them. Like the other attributes mentioned above, heuristics and the like are essential for doing virtually everything. But in the end, they're mostly a means of performing guesswork, are very easily misused, and tend to apply more to decisions, which are mandatory, than to OPTIONAL opinions (see part 2).
Extralogical reasoning provides analysis of many known biases, as well as its own biases and phallacies (extralogical reasoning fallacies), but this post will center on those relating to artificial causality and reductionistic reasoning. These are especially problematic or misapplied in today's world, where there are countless relevant variables, many of which are unknown, that interact in intangible and unpredictable ways. Survival in humanity's ancestral environments didn't require accounting for anywhere near as many factors as are necessary for understanding the modern world; nor did those environments require specimens to FILTER THROUGH even a tiny fraction of as much information. In fact, in some ways, for reasons already discussed, taking account of more variables or information in evolutionary environments may have DECREASED specimens' overall fitness. And as the famous economist Thomas Sowell says, the bulk of the variables that affect modern society are unknown, many of which can't even be ARTICULATED.
Knowing the wrong combination of important pieces of information about something can be highly misleading. The availability heuristic bias is the natural human tendency to overestimate the importance of the information one has about something and vastly underestimate the potential importance of the information one DOESN'T have--in other words, how easy it is to mistake a piece of knowledge for the whole picture.
Artificial causality gives rise to the causal defects: the tendency to underestimate the number of variables/factors involved in causation, their intangibility, the complexity of their interactions, and the power of self-organization, or the proverbial "invisible hand." Most of the Universe--from the formation of galaxies to planets, to ecosystems to the human brain, to industries and economies—comes from things organizing themselves. As has been said, "In a complex system, man INFLUENCES almost everything but CONTROLS almost nothing." The causal defects, with help from the others, lead to the vast overestimation of the world's predictability and how much it's directed by people in power. This is reinforced by the ease of "explaining" occurrences after the fact. More facts are known in hindsight, and there are enough things going on in a complex system to find numerous plausible explanations for every correct one, making complex systems RETROSPECTIVELY "predictable" and creating the illusion of PROSPECTIVE (true) predictability. Since it's much easier to HARM a complex system than to aid it, treating complex systems as more than guessable is dangerous, which explains the persistent failures of central planning and similar governmental intrusions.
Another consequence of artificial causality is what Nassim Taleb calls the narrative fallacy: the near inability to follow something without it being put into a causal narrative (Taleb said the narrative "fallacy" would be better described as the narrative "fraud"). Complex information is difficult to convey without an established relationship between facts and their causes. Taleb says that writers and the media are obliged to force an "arrow of relationship" into stories that binds facts to explanations, giving them an artificial simplicity that's misleading or even flat-out wrong. The narrative fallacy, the causation bias and causal defects, and knowledge idolatry are heavily fostered by schools at all levels. To fit topics into classroom curricula, educators must contort Nature into a "reality" that's well-behaved enough to have ascertainable answers and explanations, which is psychologically reinforced by the idolatry of the methods used to reach them. This cultivates an oversimplified sense of the real World, where ascertainable answers, explanations, and predictions are typically the exception. The liberal intelligentsia's self-anointed role as society's third-party surrogate decision-makers is rationalized, in large part, by a causation-biased sense of "logic," which has not been without catastrophic consequences (see the works of Thomas Sowell).
A phenomenology is something readily observed but not adequately explained by known theories/explanations. The human and natural worlds are filled with phenomenologies. The phenomenological phallacy occurs when someone dismisses data or its default conclusion because it can't be explained in familiar or satisfactory terms. Since the relevant (or potentially relevant) factors/variables involved in causation are numerous, intangible, and interactive, many things can't be known without looking at DATA. Unfortunately, speculation is the more natural and common means of inquiry, and the scientific method is poorly applied to life (and applied to science less often than most think). Hypotheses about people's potential suitability for activities, environments, relationships, etc., for example, are often based too heavily on strong attributes/resources, not enough on weaknesses, and almost not at all on how they fit in with EACH OTHER. Treating the factors/variables appropriately won't necessarily yield a more accurate or satisfying answer, but you don't necessarily need one: Having good judgment has less to do with ascertaining truths than with avoiding consequential misbeliefs. And if ascertaining a definitive answer becomes necessary, get data, or at least make sure you don't suffer from the lack of it.
The survivor's bias is the tendency to look too much at why the winners/survivors win and not enough at why the losers lose—made all the worse by the fact that the exploits of the winners are more likely to be reported. For example, when a few people become rich and famous with a certain business strategy, many rush to try it without looking to see how many used it and FAILED—only to discover, just after meeting the same fate themselves, that hundreds MORE went broke.
The confirmation bias is the universal tendency to confirm what you already believe by having a better memory and greater desire for confirmatory evidence, something heavily exploited by the media and, especially, by the countless schemers spreading false information on the Internet. Predictions/prognoses have a confirmation/survivor’s bias. People are constantly making them, and the ones that come true are more likely to be remembered, reinforcing misconceptions about the predictability and simplicity of the modern World.
People have an inherent lack of appreciation and intuition for statistics—which, of course, is essential for understanding the modern world. This includes many mathematicians, as was demonstrated by Kahneman and Tversky's research and the now-famous "Monty Hall problem." Extralogical reasoning defines probability as the study of how conditions (or factors) affect the frequency and likelihood of occurrences and statistics as the analysis of probabilities. Though it's often an indispensable MEANS of understanding causation, statistics performs analyses characterized by randomness that only establish CORRELATION (regardless of the nature of the actual processes themselves). Reinforced by the frequent confusion between a means to an end and the end itself, the causality-fixated thinking organ twists correlation and randomness into causation, all but ensuring misinterpretations.
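For readers who want to see how badly intuition can miss, here is a minimal sketch (mine, not part of the original ER material) that simulates the Monty Hall problem: switching doors wins roughly two-thirds of the time, a result most people, including trained mathematicians, initially find counterintuitive.

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of the Monty Hall game; returns True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d not in (pick, car)])
    if switch:
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == car

trials = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"stay wins:   {stay:.3f}")   # roughly 0.333
print(f"switch wins: {swap:.3f}")   # roughly 0.667
```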
Statistics and probabilities are highly sensitive to conditions; change them even slightly and you can get very different answers. Proficiency requires taking into account or making sure you don’t NEED TO take into account numerous factors. Making matters more complicated, the same relevant conditions can be conveyed in very different ways, and people are highly sensitive to wording. Therefore, understanding the question or scenario becomes more difficult, and certain "associative reflexes" lead to impulsive interpretations, such as replacing the question with a similar one with a different answer.
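As a concrete illustration of how sensitive probabilities are to how conditions are stated (my example, not from the original post), consider the classic two-children puzzle: "at least one child is a boy" and "the older child is a boy" sound nearly identical, yet they give different probabilities that both children are boys.

```python
from itertools import product

# All equally likely two-child families, ordered (older, younger).
families = list(product(["boy", "girl"], repeat=2))
both_boys = ("boy", "boy")

# Condition 1: "at least one child is a boy"
cond1 = [f for f in families if "boy" in f]
# Condition 2: "the older child is a boy"
cond2 = [f for f in families if f[0] == "boy"]

print(cond1.count(both_boys) / len(cond1))  # 1/3
print(cond2.count(both_boys) / len(cond2))  # 1/2
```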
To look at statistics without looking at conditions is to not look at statistics at all. But disregarding or cherry-picking conditions allows schemers to manipulate “condition oblivious” people. And it’s a double whammy because the schemers themselves are ALSO prone to misinterpret statistics, making it easier for them to be willfully ignorant of their own machinations.
The anecdotal bias refers to the tendency to overestimate the frequency of an event because you’ve heard a lot of stories about it, especially if they’re sensationalized. For instance, some people actually think it’s more likely that you’ll be killed by a cop or terrorist than in a car accident.
The representative heuristic bias is best explained by example. Suppose you described someone who sounded like a stereotypical mathematician--nerdy glasses and dress, disheveled hair, etc.--then asked, "What's more likely, that the person just described is a mathematician or a musician?" Most people would say mathematician based on how REPRESENTATIVE the description is of a mathematician--but they would be wrong. What they probably wouldn't have taken into account, or tried to find out, is that there are far more musicians than mathematicians. They focus too much on representativeness and not enough on general frequency. The representative bias leads to the base rate fallacy, as it did above.
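To make the base-rate point concrete, here is a small sketch using Bayes' rule with made-up illustrative numbers (the 20:1 ratio and the percentages below are my assumptions, not figures from the post): even when the description fits a mathematician far better, the sheer number of musicians can leave "musician" the more likely answer.

```python
# Illustrative assumptions: musicians outnumber mathematicians 20 to 1,
# and the description fits 60% of mathematicians but only 10% of musicians.
p_math = 1 / 21
p_musician = 20 / 21
p_desc_given_math = 0.60
p_desc_given_musician = 0.10

# Bayes' rule: P(mathematician | description)
p_desc = p_desc_given_math * p_math + p_desc_given_musician * p_musician
p_math_given_desc = p_desc_given_math * p_math / p_desc
print(f"P(mathematician | description) = {p_math_given_desc:.2f}")  # about 0.23
```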
Extralogical reasoning calls the tendency to confuse quantities with percentages the quantity heuristic bias. A small percentage of a sufficiently large quantity is still a large quantity, but if something exists in high quantity, people often assume it happens more frequently or a higher percentage of the time. For example, if only seven percent of government officers in the U.S.—including law enforcement officers and government officials at all levels of government—were corrupt, the total would be comparable to the population of an entire city. This, combined with the anecdotal bias and the underestimation of how well things (e.g., a scheme) can organize themselves (the causation bias), can give people an exaggerated view of the calculated corruption in government.
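A back-of-the-envelope sketch of that arithmetic, using an assumed headcount (the 20 million figure below is illustrative, not an official statistic):

```python
# Assumed, illustrative headcount of government officers at all levels.
total_officers = 20_000_000
corrupt_fraction = 0.07  # "only" seven percent

corrupt_count = int(total_officers * corrupt_fraction)
print(f"{corrupt_count:,} corrupt officers")  # 1,400,000 -- roughly a large city's worth of people
```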
Research has shown that the above cognitive weaknesses have minimal correlation with a person's overall intelligence; nor is there much correlation between intelligence and the power of emotions, which, if mismanaged, can have such a profound effect on the quality of someone's thinking that it requires no explanation. Many of these cognitive errors may be attributable to an aforementioned "associative reflex," a primal impulse that makes connections and acts on them impulsively. In the case of the representative heuristic bias, people associate the image of a nerd with a scientist or mathematician and jump to that conclusion; with the converse fallacy, people assume, for example, that because Big Bang cosmology requires a certain type of microwave background, the presence of that background automatically implies the Big Bang, which it does not. As also mentioned, all else being close to equal, smart people may be better at apprehending correct answers, but they're also better at creating packages of beliefs that protect themselves against undesirable truths—which is also something studied and confirmed in the field of cognitive psychology.
One should not underestimate how essential social and intellectual supervision and correction are in ensuring the quality of human understanding. Take science and engineering, for instance. In addition to the fact that people's psychologies are less directly involved than in life-related matters and that practitioners don't have to account for as many unknown variables, these fields are socially validated areas with lots of social structure and competition (which people tend to be hopelessly over-reliant on for their discipline and motivation), and there are very precise ways of measuring their understandings. These factors make science and engineering very self-corrective, effective at correcting their own mistakes.
In fact, human irrationality has ways of benefiting these fields.
Bishko once said, "People don't want to be successful so much as to be more successful than OTHER PEOPLE." This is called competitiveness. Many things in this universe benefit from competition—the evolution of animals and ecosystems; sports and arts; as well as industries, businesses, and economies. Academic fields are no exception. Ninety-nine percent of the true history of science is the story of geniuses who spent their entire lives developing theories that ultimately proved to be ninety-nine percent wrong. And this is not just in light of subsequent discoveries or hindsight; even at the time, these theories were questionable. But in order for the right (or closer-to-right) schools of thought to have the right competition and catalysts, you need smart people believing in questionable ideas. All this, too, makes science and engineering self-corrective. Don't let the survivor's-biased nature of popular history fool you: Science rose to glory on the backs of its members' failures far more than by standing on the shoulders of giants.
Many have attributed the East's failure to keep pace with the West in scientific progress to a lack of myopia and competition—and not just because of the shortage of competition among the Asian nations. The collectivist mentality of the East is believed to have inhibited myopia and curbed the competition between scientific schools. Understanding things from different points of view speaks well of the wisdom of the individual, but it turns out that it's probably better for a field, long-term, to delegate specific viewpoints to separate schools, each myopically fixated on its own theories.
Moreover, scientific discovery requires a myopic doggedness that wouldn’t be possible without the capacity for irrationality.
Does wisdom hinder scientific progress? Not necessarily, but at minimum, progress and proficiency require the ability of practitioners to at least temporarily deviate from what would otherwise be considered wise and rational thinking.
Counterintuitive though it may seem, in many respects, people are better designed to understand math and physics than themselves and the world they live in. These subjects are "hard" because one's understanding can be so rigorously put to the test. This is made possible by fewer known and unknown variables that undergo comparatively straightforward interactions. You might not think of sociology, psychology, and economics as "rocket science"--and you shouldn't: Humanity is a lot better at rocket science (as remarked by Duncan Watts). Scientists and engineers can put unmanned rockets in orbit around Mars and predict the magnetic moment of a tiny electron to roughly one part in a trillion; they certainly can't do the equivalent for mental illness, urban planning, or forecasting the stock market and economy.
Clearly, people aren't entirely logical by nature; they have something extra—a psychology. That something EXTRA, however, allows the HTO (human thinking organ) to fully comprehend the rules of arithmetic, perform all known forms of qualitative reasoning, and think holistically about complex scenarios despite limited information (even if "scientifically holistic thinking," as I call it, isn't entirely natural to them). And most of all, as shown, the HTO can gain meaningful and useful (even if limited) knowledge of its own ignorance and weaknesses—and use it.
Therefore, LOGIC requires something extra. Extralogical reasoning combines standard inductive and deductive reasoning with PASSIVE management of weaknesses by familiarizing students with many common mistakes; it ACTIVELY manages with belief policies or pragmatic preferences (discussed in previous posts); and it fosters the “scientifically holistic” thinking essential for understanding the HTO and the many complex systems of the modern World.
Thus, THINKING requires extralogic.