Extralogical Reasoning: Pragmatic Preferences
One of ER’s central axioms is that ascertaining correct answers is harder and less necessary than most think—but that wrongness is more dangerous. Extralogical reasoners avoid wrong answers by concerted efforts to minimize unnecessary and premature beliefs, while working harder on the beliefs they do have. They otherwise prefer to rely on data or utilize belief POLICIES that I’ll come to call pragmatic preferences.
Reality is simplified by what ER calls “artificial Resonance” (see the third part of the intro or the last post). Confusion and distractions are often almost as detrimental to an animal’s dealings with reality as understanding is BENEFICIAL. To minimize confusion, animal thinking organs reflexively simplify reality by twisting related events into causal relationships, helping fit reality into a “harmonious narrative.” In other words, humans see reality through an artificial lens of cause and effect. In artificially creating a clearer picture, however, illogical tendencies arise, such as confusing correlation with causation and observation with interpretation. This gives rise to the causation bias: the natural tendency to be far too quick to assume that the relationship between cause and effect will be ascertainable and satisfying.
Jumping to conclusions is a consequence of the blurring of observation and interpretation inherent to artificial Resonance. Jumping to conclusions is not merely a mistake people make: It’s what the thinking organ is DESIGNED to do BY DEFAULT. This creates the illusion that having an opinion on any given issue is mandatory—when, in fact, it’s almost always optional. This and the illusory simplicity of causality are heavily reinforced by a democratic society with a free press that panders to people’s natural fixation on causality (this is not anti-democratic propaganda, just a statement of fact).
Nothing in life exists by itself: Beliefs, decisions, models of reality, resources, attributes, pieces of knowledge, etc. all interact with themselves and each other in complex and dynamic ways. Hence another ER axiom: Because beliefs are always reinforced (however imperfectly) by both the conscious and the unconscious in mutually supporting packages, it should be assumed that the more a belief or model of reality deviates from the truth, the more the beliefs and models of reality that come to support it will also deviate from it, including future ones, which can only be guessed at. Wrong beliefs have a way of metastasizing into other areas of your thinking, impairing your models of reality and your ability to make effective decisions. Additionally, if you understand the nature of beliefs, it’s easy to infer that premature goals and plans are effectively the same thing as premature beliefs and decisions, which are biasing and easily lead to wrong answers.
When problem-solving and modeling reality, you must have a familiarity with common and individual mistakes and consider how much control you have over relevant factors. In practical matters, what requires most emphasis, therefore, may or may not be what’s ostensibly most true, important, and/or beneficial. Add to this the fact that correctness is not of much interest to ER (though it may be to the author in theoretical inquiries), and its axioms aren’t beliefs so much as belief policies called pragmatic preferences—which may or may not be entirely truthful. Many, if not most, of ER’s maxims are pragmatic preferences. They can’t be wrong beliefs for the simple reason that they aren’t beliefs at all.
However, even if I did fully believe them and failed to recognize them for what they are, I don’t think anyone would call me delusional for subscribing to them. And after reading this article, you will find that I very much do recognize and treat them for what they are. This is important because wrongness needs to be avoided; all circumstances are different; and failing to recognize and treat things for what they are impairs your ability to adjust. An analysis of the pros and cons of various ER axioms is an exercise in such recognition and treatment, including for whatever individualized ones you will construct yourself.
Wisdom is more important than raw intelligence
Problems: In addition to intelligence’s own set of obvious benefits, people have different goals and strengths and weaknesses, and the World has evolved to accommodate people’s comparative lack of wisdom—which often means discriminating against the common personality traits of the very few who have it. While I’m hardly trying to excuse it, if people don’t have certain types of problems—e.g., mental illness, eccentricities, learning disabilities—they might be able to succeed despite being totally devoid of wisdom.
Practical benefits: Wisdom, though rarer, is something you have a choice in—you have more CONTROL (I’ve talked too much about the benefits of wisdom already, but see Current Thoughts on Knowledge and Society if you want to know more). People often overestimate how guaranteed the benefits of intelligence are—a common mistake that needs to be accounted for.
People are more different than similar
Problems: Everyone has the same foundational attributes, even if they vary a great deal in how they manifest themselves; understanding these shared traits makes it easier to understand people. In addition, some people may feel that emphasizing similarity better connects them with others (as for myself, I feel no connection at all).
Practical benefits: People are programmed to over-rely on preconceived ideas; this policy encourages context-based thinking and decision-making—in doing so, accounting for a common mistake. It encourages asking questions, and taking an interest in people’s lives can gain you credibility. Finally, thinking of people as more different in some areas and using the opposite policy in others is easily achieved, so you don’t have to truly choose between them.
Hubris is more dangerous than ignorance
Problems: In addition to the problems of straightforward ignorance (lack of general knowledge) and the fact that some situations may require more knowledge and/or skill than judgment, hubris could be called a type of ignorance, or at least something closely related. Hubris, especially youthful hubris, isn’t so much overestimating oneself relative to others as overestimating human capability IN GENERAL: thinking one can do, know, or understand things that no one with their level of knowledge and experience could know or understand, if anyone can, period. The problem with the ignorant is not their lack of straightforward knowledge in itself; it’s that because they’ve never transitioned into being knowledgeable, they don’t comprehend the disadvantages of NOT KNOWING. A truly learned person understands that, as a general rule, it’s wise to believe they lack more important information than they have (another pragmatic preference explained shortly). But is not knowing this hubris, or is it ignorance?
The sentences in the above paragraph BY THEMSELVES may make sense, but the paragraph doesn’t have a clear message other than that there is an important but ambiguous relationship between hubris and ignorance. Does it matter? No.
Practical benefits: In life, the majority of the time, I’d rather rely on someone who’s just smart and knowledgeable enough but who knows their limitations and is good at avoiding mistakes than someone much smarter and more knowledgeable who’s cocky and arrogant. Judgment is largely the ability to deal with a LACK of knowledge; failing to take that into account is a blatant mismodeling of the situation. The more common failure is overestimation of information, not underestimation, which is rare. In addition, a lack of GENERAL knowledge is not something cured in any short amount of time; you probably can’t rely on acquiring it in time to be of much help in the situation you’re currently dealing with.
No matter how good your resources and nominal plans, objectives, intentions, etc., there’s no such thing as a guaranteed benefit
Problems: Naturally, if the right things are in place, many things might be guaranteed, but it’s sentences like this that make IF one of the best words in the English language. Some people could argue that this policy might decrease confidence. More on this below.
Practical benefits: Believing in guarantees so easily leads to delinquency, negligence, indolence, etc. that even if there’s only a tiny chance you’re wrong, it’s not worth it. And if the benefit is so guaranteed, you should have nothing to worry about. If pursued poorly, dependency and reliance can be disempowering, especially dependency on other people and things. Of course, at various points in your life you will have to depend on resources and other people, but you must be careful not to hinder your own agency.
Some things are falsely believed to be guaranteed because people don’t understand the thing in question. Knowledge may be a requirement for understanding, proficiency, and wisdom, but these things don’t follow from it as readily as people think, because people don’t know what any of these things are. Just because A always precedes B doesn’t necessarily mean B automatically follows from A (what I call the precursor fallacy) (see my article Current Thoughts on Knowledge and Society for an explanation of what knowledge is and why people don’t understand it). One major problem is that people don’t appreciate how great a role APPRECIATION plays in someone’s ability to benefit from something like learning. You don’t need to appreciate a topic to get an “A” in the class, but appreciation plays a greater role in whether the topic affects other areas of your thinking and whether you remember it after you learn it. Something tends to be a more effective means to an end when it’s also an end in itself. A guarantee depends on certain things being in place, but if you don’t know what something is, you won’t know what needs to be in place.
With regard to confidence, anyone who understands it won’t take that argument seriously. Guarantees are for conmen and juveniles.
There’s no such thing as a cure for ANY mental, attitudinal, or epistemic problem, only dormancy
Problems: Obviously, one could argue that some things can be “cured.” An entitled, spoiled fourteen-year-old could, perhaps, become “un-spoiled” by the time he reaches adulthood. When I was in my late teens, I often accused people of “over-reacting” to my ridiculous behavior. Having adopted the pragmatic preference that a person is not capable of over-reacting to a mistake I make, I haven’t said that in almost a quarter century. Some might say “I’m cured.” But my other pragmatic preference says no.
Practical benefits: For one, to some extent, a cure means a guarantee. The greatest concern is the pernicious effects of vestigial beliefs. Beliefs come in packages, and that includes bad instincts, bad attitudes, misconceptions, etc. Most of these are subtle; they reside among the unconscious beliefs. Just because an articulable belief goes away doesn’t necessarily mean all the beliefs that supported it go away with it; these can linger for quite some time. These are vestigial beliefs. Vestigial beliefs make it easy to mistake progress for cure and/or improvement for competence. Similarly, it’s easy to mistake an intellectual understanding of something for a good working understanding, and knowledge of a subject for understanding of it. All these things curb rehabilitation and learning and cultivate hubris.
And I don’t see any practical cost to simply thinking a problem is dormant.
There’s no such thing as an “all-applicable” piece of knowledge (one applicable to every situation it’s HYPOTHETICALLY applicable to) & every piece of knowledge has more misapplications than applications
Problems: As is the case with guarantees, whether something is applicable depends on what’s in place and what you know for sure. Some may say that certain pieces of knowledge are underappreciated and that this pragmatic preference cultivates that underappreciation.
Practical benefits: People over-rely on preconceived ideas and their favorite pieces of knowledge, or “pet explanations.” This might not seem to be the case, because pieces of knowledge that SHOULD be appreciated often aren’t. Many people “learn” important things in school that they never use. But exposure to the lessons doesn’t guarantee they’re learned, appreciated, or remembered. Beliefs unsupported by emotions and other beliefs don’t tend to have much effect on people’s decisions or general perspectives.
As a general rule (and not necessarily in particular instances), you should assume you lack more relevant information than you have
Problems: First, there’s the factual problem: Clearly, in some cases, this may not be true. Although I would argue that this policy should encourage you to get more information, some would argue that it might discourage you from gaining information and from exploiting the information you do have. Underestimating your information and abilities is a perfectly possible mistake, even if far less common.
Practical benefits: The availability bias is the natural tendency to overestimate the significance of the information you do have about something and, even more so, to underestimate the significance of the information you don’t have. In other words, people are quick to mistake a piece of knowledge for the whole picture. This is especially problematic because knowing the wrong combination of important pieces of information about something can be highly misleading. Having a complete thought process isn’t just about how well you analyze the information you have, but whether you analyze all the information you NEED. You could spend ten years brilliantly analyzing the information you have, but if it’s only seventy-five percent of the information you need, it could easily lead to wrong answers. Since overestimation is the more common problem, efforts need to be made to compensate for it.
There’s no such thing as commonsense
Problems: This is ER’s most controversial pragmatic preference. Even if I succeed in justifying it, its implications might outweigh the benefits, which could oblige me to discard it.
Notwithstanding the ambiguities and impurities of commonsense, people’s genetics play a significant role in how they reason, and many people have (or are supposed to have) these traits in common. Although it would be preposterous to say they’re infallible, it would also be foolish to ignore your gut feelings and intuition, especially when making decisions. Even when forming opinions, you can’t make optimal use of your intuition if you completely discard your feelings.
It’s not clear if or to what extent common “wisdom” and “knowledge” are included in commonsense (which is a problem), but despite their many lamentable attributes, they’re not necessarily worthless, either. As I said in the first part of the ER intro, the problem with common wisdom is not so much WHAT it says as HOW it’s said and what it DOESN’T say and, therefore, what it IMPLIES. The very fact that it’s so far removed from a systematic, context-based way of thinking and decision-making at best implies these things don’t matter and at worst that they’re a mistake. But again, they’re not all entirely wrong, and while much of common knowledge consists of misleading factoids, it certainly isn’t always false, either.
Moreover, the very fact that a belief caught on has to mean SOMETHING. It may not mean what people think, as beliefs have many attractive properties, but it still gives you SOME useful information. For example, the fact that the proverb “You can do anything you put your mind to” became popular means humans are silly, sentimental children—but this is useful information! If you’re fortunate enough to have learned this, you likely learned it FROM this proverb.
Practical benefits: As I’ve touched upon above, there’s a great deal of ambiguity and impurity inherent to commonsense. People build their primary models of reality and many of their primary intuitions at a very early age, when they’re far too young to have any substantial logical reasoning skills, and the resultant flaws impair their thinking forever after. The influence of one’s environment goes well beyond attempts to brainwash you into believing certain things (in fact, sometimes it results in someone believing exactly the opposite). These influences include laymen’s beliefs about science, history, etc., which tend to be wrong or misleading. But the fact that they often aren’t entirely wrong or illogical can make them more pernicious: Accepting a statement that’s factually true and important but that carries multiple false implications can impair your understanding more than enhance it.
You can see the profound effects of environmental influences if you study what philosophers call explanatory relativism. Changes in scientific theories don’t just result in paradigm shifts in beliefs; they shift the very criteria for what constitutes a logical explanation. Most people would be quite surprised at what would pass for an explanation in years past, even amongst the best scholars.
Artificial Resonance creates all kinds of problems with people’s intuitions. Resonance creates an artificial sense of causality as well as RATIONALITY, for BELIEF in rationality, like understanding, enhances confidence and reduces confusion. The human thinking organ rarely if ever has a way to prove right answers; it only has a SIMULATION of one: CONVINCING itself. Given that the convincing is being executed by the very thing that proposed the idea, this puts the thinking organ in a perpetual conflict of interest. The human thinking organ isn’t designed to make decisions rationally so much as to CONVINCE ITSELF it makes decisions rationally. Most people falsely believe their innate intuition is something akin to thinking machinery used to understand the World as it is; in reality, it’s much closer to thinking machinery used to make SENSE OF a much simpler environment for SURVIVAL.
Despite the costs, artificial Resonance and other components of mammalian thinking machinery have been wildly successful, and trying to use the thinking organ in a way entirely contrary to its nature would put it in yet another inescapable conflict of interest. But the thinking organ remains far from ideal for the modern World and philosophy. In the end, overestimation of one’s intuition is far more common than underestimation. And hubris is more dangerous than ignorance.
As I hope I’ve shown, issues concerning control and common mistakes often outweigh truth. I also hope I’ve shown that I recognize and treat pragmatic preferences for what they are: useful belief policies that aren’t to be treated as immutably correct or applicable. Suggestions for additional pragmatic preferences are welcome; I’m hoping to find more specific and practical examples from those with a more pragmatic bent and more “real life” experience.