The Holism of Life Management and Learning
The thesis of The Epistemology of Irrationality was that insanity/irrationality is a lack of integrated understanding of one’s overall body of beliefs, necessitating holistic management of one’s thinking and psychology. I posited that it’s wise to believe that the definition of rationality is simply not being irrational, making the post a treatise on both. The Holism of Life Management and Learning complements that treatise. It reintroduces the puzzle of life (POL) as a complex system of variables, explaining why it can only be understood through holistic analysis. Since ER is my solution to the POL, this will also serve as an informal introduction to extralogical reasoning.
First, one must understand the origins of the need for a POL.
In the third part of the ER intro, I asserted that self-delusion was an inevitable consequence of naturally selected self-awareness. In other words, even ignoring the total impossibility of a self-organized process creating a perfect thinking organ, the idea that evolution would or could produce a sentient species that was completely realistic about its understanding of itself and its environment is preposterous. The feeling or belief of rationality and understanding is often almost as important as the real thing, on both the group and individual levels. Confusion, uncertainty, and distractions can be as harmful as understanding is beneficial. In life, it’s usually better to have a solution, outlook, model of reality, etc. that’s just right or good enough than one that’s markedly better in theory but that you’re not confident in; and the selection for the tendency to develop such solutions fits well with a path-of-least-resistance process like evolution.
Being able to prove your answers is a rarity, if it exists at all. Thinking organs are limited to a SIMULATION of proof: CONVINCING themselves. This puts them in a perpetual conflict of interest. The human thinking organ (HTO) isn’t designed to make decisions rationally so much as to CONVINCE itself it makes decisions rationally. As Jonathan Haidt and others have shown, rationalization is more cognitive than emotional; it’s something the HTO does reflexively. Intelligence can’t get around it, either: making the overall machinery smarter necessarily makes the machinery used for delusion smarter. Nor can you disentangle the thinking machinery from the emotional. Evolution doesn’t integrate; it assimilates. It only adds functions to the extent that they enhance or complement what’s already there; and what was already there, prior to the rise of sentience, was mostly primitive.
ER defines confusion as Dissonance in thought space—a math-like space that encompasses all there is to think about. Resonance, its opposite, is important enough that evolution selected for cognitive properties that build models of reality on ARTIFICIAL Resonance. Animal thinking organs twist related events into causal relationships, helping fit reality into a "harmonious narrative." Humans consequently see reality through an artificial lens of cause and effect, making them prone to confuse correlation with causation and observation with interpretation. I call this "lens" artificial causality. Artificial Resonance comprises artificial causality and artificial rationality. While most of Resonance is probably attributable to traits left over from pre-sentient ancestors, as opposed to ones specifically selected for in proto-sentients, much of what people think of as reality is the result of artificial Resonance.
Disregarding the many ambiguities and impurities of the notion of "natural intuition," many falsely believe their natural intuition is something akin to the common thinking machinery used to UNDERSTAND THEIR CURRENT ENVIRONMENT. ER posits that it's closer to the common thinking machinery used to MAKE SENSE OF A MUCH SIMPLER ENVIRONMENT TO SURVIVE (see the works of Duncan Watts). Such a flawed design imperative necessitates an antidote in the form of a POL: a thinking scheme designed to compensate for the disadvantages of the common thinking machinery. ER is my solution to the POL, which I share with readers of my blog.
My co-blogger, Q, commented on my first ER post that I wasn’t giving the HTO enough credit. Even disregarding all the wondrous things it has achieved in science, engineering, and language, it carries out a staggeringly large number of operations to perform even the most mundane tasks. When roboticists started coding prototypical robots, they were blown away by the complexity of the most basic sensorimotor programming (involved in sensory perception and motion). This relates to Moravec’s paradox: the observation that sensorimotor tasks require far more computation than high-level reasoning. Just making a quick trip to the grocery store requires hundreds of unconscious actions. This is true and very important to know.
That said, while I’ve met a few people who underestimate their abilities or potential at specific well-defined tasks, or their intelligence generally, hubris is still much more common, and I’ve never met a human being who underestimates their self-awareness or their understanding of their environment. It probably isn’t even POSSIBLE. People do sometimes discern truths harmful to motivation, yes, but the last thing a person needs to worry about is being too self-aware in general. When you hear, “I don’t know,” it’s usually because someone doesn’t want to give or explain the answer, or the question is subjective or hypothetical—or some combination of these things. Only in rare cases are “I don’t know” or “I don’t have an opinion” the result of genuine wisdom.
In short, everyone overestimates their understanding of themselves and their environment; consequently, everyone should view their thinking organ as foundationally flawed and in need of a POL. This POL must reflect the need for holistic analysis and management, and because the requisite thinking isn’t entirely natural to people, it demands even more rigorous management.
As I hope you can see, these flaws oversimplify causality and, in doing so, encourage reductionistic reasoning: reasoning that assumes (usually but not always falsely) that the whole, or a collection of factors/variables, can be understood just by analyzing the parts, or individual factors/variables. It analyzes the PARTS. This is the opposite of holistic thinking: analysis that assumes both parts and whole can only be truly understood by looking at how the parts FIT IN WITH EACH OTHER. It analyzes the WHOLE. It also takes into account unknown and unidentified variables. This, in turn, requires intelligent humility and informed ignorance; that is, knowing and managing what you DON’T KNOW.
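To make the contrast concrete, here’s a minimal toy sketch (my own illustration, not part of ER’s formal apparatus; the variable names “stress” and “workload” are invented): a system whose outcome depends entirely on how two variables fit together. Examining either part in isolation suggests it has no effect at all; only the holistic view reveals the structure.

```python
import random

# Toy system: the outcome depends on the INTERACTION of two variables,
# not on either variable by itself (an XOR-like relationship).
def outcome(stress, workload):
    # The outcome is high only when the two factors are mismatched.
    return 1.0 if (stress > 0.5) != (workload > 0.5) else 0.0

def mean(xs):
    return sum(xs) / len(xs)

random.seed(0)
samples = [(random.random(), random.random()) for _ in range(10_000)]

# Reductionistic analysis: examine each PART separately.
for name, idx in [("stress", 0), ("workload", 1)]:
    high = [outcome(*s) for s in samples if s[idx] > 0.5]
    low = [outcome(*s) for s in samples if s[idx] <= 0.5]
    print(f"{name}: mean outcome when high={mean(high):.2f}, when low={mean(low):.2f}")
    # Both lines print roughly 0.50 vs 0.50: each part, alone, looks irrelevant.

# Holistic analysis: look at how the parts FIT IN WITH EACH OTHER.
matched = [outcome(*s) for s in samples if (s[0] > 0.5) == (s[1] > 0.5)]
mismatched = [outcome(*s) for s in samples if (s[0] > 0.5) != (s[1] > 0.5)]
print(f"matched: {mean(matched):.2f}, mismatched: {mean(mismatched):.2f}")
# Prints 0.00 vs 1.00: the structure only appears at the level of the whole.
```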
Now I will elucidate the need for holistic thinking and many of its applications.
The modern World is more complex than humanity’s ancestral environments. It involves more variables/factors, including unidentifiable ones, that undergo more complex interactions. Life is a complex system, and you can’t understand a complex system without holistic thinking. Some things may be more important than others and that may vary from person to person; but in the end, management must be applied to all aspects of life.
Nothing in life exists in an isolated universe: beliefs, attitudes, decisions, attributes, resources, information, goals, models of reality, etc., all exist and interact with themselves and each other in complex and dynamic ways. This includes those from the future, which can only be guessed. A person who understands their life knows how these things fit together—and can intelligently compensate when and where they don’t. Thus, there is no such thing as a good decision or conclusion without knowing the circumstances for which it’s made. It’s at least wise to believe the same is true of all the other things listed above, irrespective of how “good” or “bad” they may be in principle.
Bishko was famous for saying that beliefs come in packages that reinforce each other—how could they not? Nothing in life exists alone; it all interacts. And if you count unconscious beliefs, your beliefs are never logically consistent. How could they be? As I’ve said ad nauseam, ninety percent of the human thinking organ is primitive, unconscious, and self-inconsistent; to some extent, you can think of it as the combination of the thinking organs of humanity’s many primitive ancestors. But beliefs can be made a lot more, or less, consistent. Insanity is an extreme lack of integration among a person’s body of beliefs. This is evidenced by pathological inconsistencies between someone’s intellectual understanding of issues relating to their rationality and their WORKING understanding of them.
Delusions, bad instincts, bad attitudes, and misconceptions are, or are related to, beliefs. Everyone has them in one form or another and to one degree or another—and would have more if they weren’t doing certain things right. Many of these things are subtle; they live in the unconscious beliefs and the biosap: evolutionary precursors to beliefs that aren’t tangible enough to be called right or wrong but are used to unconsciously model reality. Despite what people think, including Steve Bishko the man, the HTO is a biosap-dominant thinking organ. A person who’s good at learning from their mistakes instinctively understands this, and they know that every time they make one, there’s a good chance it will offer them a glimpse of what some of these misconceptions are. Such a person can connect individual flaws and mistakes to greater problems with their thinking and the relevant understandings. They don’t just learn the face-value lessons from their major mistakes; they can also draw larger lessons from mistakes that are in themselves trivial and/or excusable. If the definition of a person who’s irrevocably insane is someone who can’t learn from their mistakes, then someone with a tendency to view all their mistakes at complete face value is likely insane. They’ll tend to think: “Oh, my mistake doesn’t matter in this case; therefore, I don’t need to worry about it,” or, “I’m already aware I have this weakness; therefore, I don’t need to worry about it.”
Throughout life, everyone will identify tangible flaws in their thinking, attitudes, and beliefs. However, just because a tangible misconception goes away doesn’t mean all the beliefs and, especially, the biosap that supported it go away with it. ER calls these vestigial beliefs. Vestigial beliefs make it easy to mistake progress for cure and/or improvement for competence. When you’re trying to correct certain types of problems in your thinking and psychology, it’s almost always much easier to know when you’re making PROGRESS in correcting or lessening the problem than it is to know whether you’ve been cured or have become competent in ways you weren’t previously. You’ve only been you, and you’ve only been one species and only one member of that species; it’s hard, if not in some cases impossible, to have perspective. Improving at something and becoming good at it aren’t necessarily the same. Similarly, it’s easy to mistake knowledge for understanding and an intellectual understanding for a good working understanding.
ER asserts that it’s wise to assume that wrong beliefs are more dangerous than correct beliefs are beneficial. Artificial causality, however, will try to convince you otherwise. ER’s focus on pragmatic preferences—or belief policies—notwithstanding, it should be assumed that the more a belief or model of reality deviates from the truth, the more the beliefs that support it will deviate from it as well. This explains, for example, the pernicious effects that occur when one allows their aggrandizements to become delusions of grandeur. A slavish devotion to absolute truth is ill-advised, as the thinking organ has a limited ability to comprehend the world and is fundamentally designed for SURVIVAL, not philosophy; but one still needs to know the risks involved with wrongness. When adopting any belief or model of reality, you need to look not just at its utility in itself, but at how it works with OTHER beliefs and models, including future ones, which can only be guessed. Make no mistake: there are lots of advantages to having right answers, but they aren’t always strictly necessary or ascertainable—and diagnosis isn’t always a prerequisite for treatment. But once again, artificial causality will try to convince you otherwise. This often results in circular reasoning that leads to wrong, or less than entirely truthful, beliefs.
On the one hand, to suggest one should never look for right answers would be tantamount to saying you should never try to solve problems; on the other, it remains awfully naïve to think you’ll always be able to find them, or that you’ll always need to. Don’t let yourself pay a price for a lack of answers. “Forced” mental health diagnoses are an example of this, and sadly, I see little evidence that anyone looks at this stuff at anything other than complete face value.
For these reasons, one ought to make a habit of not being wrong and stupid about the things going on around them. This task is made much easier when you realize that beliefs are required far less often than your thinking organ and society have led you to believe. As I’ve said in almost every post: a design flaw of the HTO is that it doesn’t actively distinguish between WHAT it observes and how it INTERPRETS it; left to its own devices, it will blur the two together, making people prone to jump to conclusions and creating the illusion that opinions are required. Jumping to conclusions is what the HTO does BY DEFAULT. Like rationalization, it’s primarily a cognitive phenomenon; for the most part, it’s only EXACERBATED by hubris. Being particular about the opinions you choose to take on can lessen the tendency to jump to conclusions, improve your ability to identify the ones you do make, and eliminate many wrong beliefs. There are more than enough things to be opinionated about; you can afford to be picky.
However, decisions are different: decisions are mandatory; understanding isn’t always possible; and there could be serious consequences if you’re wrong. There are times in life when everyone must make a decision while almost entirely unsure what to do; dogmas and cheap rules of thumb may be wonderful relative to the alternatives. These things aren’t “bad” in themselves; they get a bad rap because people misuse them—including applying them to opinions, where they have little place. To quote previous articles again, when people fail while ostensibly using certain methods, their critics are too quick to blame the methods without taking a sufficient look at HOW THEY WERE USED (or if they were used at all, for that matter). Anything can be misused. And to remind you: HAVING AN OPINION ON THE RELEVANT ISSUES WILL PROBABLY REMAIN OPTIONAL. Put another way, you can take someone’s advice and remain skeptical about the beliefs that influenced it. Don’t let your ego get in the way.
Avoiding stupidity and wrongness is easier still if you have an epistemic POL like ER: When you put your pride and faith (not blind faith) in your beliefs ABOUT beliefs, your knowledge OF knowledge, your understanding OF understanding, and, ultimately, your WISDOM—it becomes easier to be objective about everything else.
Both decisions and observations get confused with opinions/inferences. The causation-fixated thinking organ and the collection of them known as society are effectively doing everything in their power to maintain the illusion. Predictions and prognoses are one of humanity’s most time-honored forms of social masturbation. A consequence of artificial causality is the causation bias: the natural human tendency to be far too quick to assume causality will always be ascertainable and satisfying; and the causal defects: the tendency to underestimate the number of variables involved in cause and effect, the complexity of their interactions, and the power of self-organization, or the proverbial invisible hand. Humans evolved in an environment with fewer variables undergoing simpler interactions and could, therefore, afford to have a more reductive view of causality. Complex systems like economies and societal trends might be GUESSABLE, but they’re far less predictable than most think—and such misconceptions illustrate the incompatibilities of the common thinking machinery with the modern World (see the complexity article and the third part of the intro).
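As a rough illustration of the “guessable but not predictable” point (my own toy sketch, not something taken from the complexity article), consider the logistic map: a one-variable system with a trivially simple deterministic rule whose long-run behavior is easy to characterize, yet whose individual trajectories diverge wildly from almost identical starting points.

```python
# Logistic map: x_next = r * x * (1 - x). A deterministic rule simple enough
# to write in one line, yet in the chaotic regime (r = 4) a tiny difference
# in the starting point swamps any attempt at long-range point prediction.
def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000, 50)
b = trajectory(0.200001, 50)  # differs only in the sixth decimal place

for t in (0, 10, 20, 30, 40, 50):
    print(f"step {t:2d}: a={a[t]:.6f}  b={b[t]:.6f}  gap={abs(a[t] - b[t]):.6f}")
# The early steps look nearly identical; within a few dozen steps the two runs
# are unrelated. The long-run statistics are "guessable"; the value at step 50 is not.
```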
More on the Misconceptions About Beliefs and Holistic Management
There’s no such thing as maturity, only varying degrees of immaturity. When someone says, “I’m very mature for my age,” my response is, “There’s a better chance I would have believed you if you hadn’t said that.” Becoming non-immature has less to do with knowing more things than with knowing what you DON’T know—including areas where you need not have an opinion one way or another. One of the most important lessons to learn in life is that you should always be wary of thinking you understand the advantages and disadvantages of being able to view the World from a vantage point you’ve never had the luxury of possessing—including, once again, areas where you need not have an opinion either way. The need to have an opinion is where such people reveal their lack of non-immaturity. If someone asked me whether I was mature for my age, my response would be, “I’ve never been that age; I’ve never been another person; and I don’t need to have an opinion one way or another.”
When I was younger, I frequently said, “I don’t take the experts especially seriously”—and I still don’t. But what I should have said is, “I don’t take PEOPLE seriously, including myself.” Reality is harder to understand than most think, and the correlation between the CAPACITY to be logical and the INCLINATION to be so in the absence of social incentives is far weaker than most think. ER is an antidote to the flaws in the common thinking machinery, but it’s not a remedy; viewing it as the latter jeopardizes its efficacy as the former.
Skepticism is healthy—but that includes SELF-skepticism. Pseudo-open-minded people lack skepticism of others, and pseudo-skeptics lack self-skepticism. The former confuse open-mindedness with apathy, the latter critical thinking with arrogance. The insistence on opinions is reinforced by the desire to impress others and to reassure oneself, which, of course, is especially pronounced in younger people. Few appreciate how insidious these opinions can be, and the potential BENEFITS of the emotions responsible only complicate matters.
Examples of Failures to Understand the Nature of Beliefs and the Holism of Life
Example one: Premature goal setting.
To paraphrase Sherlock Holmes, “It’s unwise to speculate about conclusions before you have sufficient information, because it can prematurely and unnecessarily bias you, making you prone to twist facts to suit explanations rather than vice versa.” A similar twisting of facts and explanations can occur if you set goals and make plans prematurely, without sufficient information. People like to reinforce their beliefs, and they like to reinforce their goals and plans with their beliefs. This, in turn, can lead to distortions in people’s models of reality and impair rather than enhance their ability to make decisions, which is what goal setting is supposed to do. Among other things, the setting of goals is much more an EFFECT of motivation than a CAUSE. Yes, a person who’s highly motivated does have a strong tendency to set specific, official goals, but they set goals because they’re motivated; very little, if any, of their motivation comes from the goals themselves. Confusion between cause and effect is a common example of confusion between correlation and causation, one of the commonest fallacies.
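Here’s a toy simulation of the goal-setting example (a sketch of my own; the model and numbers are invented purely for illustration): motivation drives both formal goal-setting and achievement, so the two correlate strongly even though, in this model, the goals themselves cause nothing.

```python
import random

random.seed(1)

# Hypothetical model: motivation is the real driver. Setting a formal goal and
# achieving something are BOTH effects of motivation; in this model, the goal
# itself contributes nothing to achievement.
people = []
for _ in range(5_000):
    motivation = random.random()
    sets_goal = random.random() < motivation           # an effect of motivation
    achievement = motivation + random.gauss(0, 0.1)    # also an effect of motivation
    people.append((sets_goal, achievement))

def mean(xs):
    return sum(xs) / len(xs)

with_goal = [a for g, a in people if g]
without_goal = [a for g, a in people if not g]
print(f"mean achievement with a goal:    {mean(with_goal):.2f}")
print(f"mean achievement without a goal: {mean(without_goal):.2f}")
# Goal-setters achieve noticeably more (about 0.67 vs 0.33 here), yet removing
# the goal would change nothing in this model: the correlation is inherited
# from the common cause, motivation.
```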
Example two: Sociopolitical opinions.
School, especially at the pre-university level, and the voting system treat having certain opinions as more important than knowledge and understanding—an intellectual perversion. However inclined people may be to have opinions before they have knowledge and understanding, even in the best of cases, it’s still not something to be encouraged. As said, nothing in life exists in an isolated universe—everything happens in a large, complex, and dynamic world, which, in turn, has a large, complex, and dynamic history. Naturally, if one wants to understand large-scale societal events, it’s wise to assume that in addition to knowing the baseline facts, one should also have other prerequisites: an understanding of economics, government, the legal system, the relevant histories, etc. Encouraging people to have opinions without them fosters reductionistic, assumptive, and hubristic thinking. Debate amongst students might catalyze learning, but critical thinking is more important than knowledge, and hubris is more dangerous than ignorance.
The voting system would rather have you vote knowing next to nothing about what’s going on than know considerably more and say with genuine humility, “I honestly don’t know what’s in the best interest of society.” Its defenders THINK “participation” increases self-correction and catalyzes learning, much as competition between different viewpoints has in science and industry. Except in those areas, people actually TRY, rather than talking out of their asses. The overwhelming majority of mainstream society still doesn’t have an intuitive understanding of complexity theory—which means they don’t understand themselves, human beings, or any body of them, in spite of the endless anthropomorphic babble. Even if people knew what was in their best interests, they often act more socially than selfishly. Research shows investments and voting correlate with social trends and affiliations as much as or more than with personal interests. Society has dumbed down sociopolitical issues to recruit (even more) ignorant participants; this, in turn, has done little more than dumb down the thinking of the country’s so-called leaders while failing to make anyone’s “understanding” any better.
Example three: Gaining information and knowledge without regard to understanding and wisdom.
Knowing means knowing the facts; understanding means knowing how the facts fit in with each other; proficiency means knowing what to do with them, which often requires knowing how those facts fit in with other facts; and wisdom means knowing what all these things are and how they, in turn, fit together. Well, if you know facts but you don’t have proportional knowledge of how they fit in with each other, proportional knowledge of what to do with them, or knowledge of what these things are in general, you have a self-inconsistent understanding that’s naturally prone to misuse, especially if knowing the facts gives you an artificial sense of confidence.
Pragmatic Preferences and Avoiding Wrongness
A question may remain. I keep talking about avoiding wrongness while simultaneously trying to slam home the idea that correctness is often overrated. Well, if it’s wise to believe that wrong beliefs are more dangerous than correct beliefs are beneficial, it follows that being correct is less “good” than being wrong is “bad.” Naturally, it’s right to wonder whether a belief is correct and useful, but too infrequently do people also ask, “Is there HARM in adopting this belief/belief policy or model of reality?” Kids are often encouraged to make plans based on databases’ worth of missing information; people make “forced” formal and informal psychological diagnoses and the like without wondering whether their wrongness could adversely affect other areas of the patients’ and doctors’ thinking and models of reality.
Fortunately, there is a middle ground, and this is where pragmatic “unwrongness” and “it’s wise to believe” come in.
Pragmatic preferences are belief policies with MINIMAL WRONGNESS that are knowingly adopted to suit pragmatic ends when the full truth is less useful or unascertainable; that is, WISE UNWRONG beliefs. Since people have more control over some things than others and since certain things tend to be underappreciated, pragmatic preferences often reflect what requires most emphasis rather than what’s ostensibly most important.
ER says it’s wise to believe that wisdom is more important than “raw intellect”—or all OTHER forms of knowledge combined. On the one hand, people have different goals, strengths, and weaknesses, and the World has evolved to accommodate the rarity of wisdom—in some ways it even directly or indirectly discriminates against it. On the other hand, intelligence is something you either have or don’t; wisdom is underappreciated; without wisdom, your ability to USE knowledge is limited; and only rarely (Steve Bishko) does an intellectual person devalue other forms of knowledge.
ER asserts that most of the time, it’s wise to believe people are more different than similar. Everyone has the same foundational qualities, but they vary a fair amount in terms of how they manifest themselves. Which is more important, I honestly don’t know. This pragmatic preference, however, encourages extralogical thinking; the opposite belief discourages it. A person’s life is a complex system. You can’t understand it merely by looking at the parts. Just understanding the foundational attributes tells you little about how the variables fit in with each other—it may even obfuscate the fact that they VARY altogether.
In the two cases above, I might see questionable correctness, but I also see limited wrongness, and certainly none likely to be harmful. And because they’re identified as pragmatic preferences, rather than general beliefs, whatever lack of applicability they have to exceptional circumstances can easily be discerned, allowing the thinker to adjust or replace them with a different policy.
Most of the foregoing remains misunderstood because people refuse to take a holistic view of reality. They do not comprehend that nothing in life exists in an isolated universe; it all interacts in complex and dynamic ways. They call knowledge, opinions, plans, goals, and diagnoses “good” and act as if they could never cause harm. Whether they’re “good” should be determined by how they fit in with everything else. This is the essence of extralogical thinking.