Why Judging Talent Is Harder and Less Necessary Than Most Think (and in Some Cases May Even Be Harmful)
To paraphrase many who came before me, “I know a hundred times as much as I did when I was young, and now I THINK I know one one-hundredth as much.”
Oftentimes, the most fervent believers are those who once believed the opposite. When I was younger, I thought I was talented at many things, including JUDGING talent. I now believe neither. Among the many misbeliefs and delusions I’ve let go of over the years, those two have been among the most influential.
Judgment and competence are often related but not the same. Judgment is the ability to make decisions with limited information, especially in areas where one is not trained or experienced. Competence usually pertains to areas where one's understanding and proficiency have been given ample opportunity for unambiguous corrective feedback. For reasons expounded at length in other posts (especially in part three of the ER intro), ER has a low opinion of the reliability of human judgment. It teaches that (it's wise to believe) any good judgment a person possesses comes from expertise in why their judgment is prone to be bad and from reengineering their thinking accordingly.
Thus, the reliability of a field and its members’ beliefs is based on the subject’s self-correctivity: the ability to correct its own mistakes. This, in turn, is based on how accurately beliefs can be tested and whether the field has sufficient social incentives, such as supervision from authorities and institutions, competition, professional responsibilities, and monetary rewards. In most areas, the lack of opportunity for rigorous testing (owing to unknown and unidentifiable factors interacting in complex ways) is problematic enough without having to deal with the universal flaws in human thinking, which are only peripherally related to intelligence.
Self-correctivity in science and engineering, however, is unusually high, and this has allowed them to become humanity’s most successful fields. The rigors of scientific testing don’t just make it possible to support correct ideas; they make it possible to disprove wrong ones. Science and engineering triumphed in their pursuit of truth by exposing falsity and adjusting accordingly (often despite great individual opposition). As I've said, don’t let the survivorship-biased nature of popular history fool you: Science rose to glory on the backs of its members’ failures far more than by standing on the shoulders of giants. As a result, aerospace engineering can put unmanned rockets into orbit around Mars; quantum mechanics can predict the magnetic moment of the electron to about one part in a trillion; cell phones and Bluetooth let people talk to someone halfway around the world while jogging; and automobile manufacturers pump out cars by the thousands, day after day.
The “easier” social sciences are much less self-corrective, and, as a result, humanity has had much less success at treating mental illness and predicting wars and the economy. Psychologist Philip Tetlock, for example, gathered thousands of predictions from hundreds of political experts over a twenty-year period (1984-2004) and found the experts not only barely did better than random guessing but were outperformed by the simplest statistical methods.
The social sciences and pedestrian endeavors with less self-correctivity are only “easier” because of the relative lack of rigorous testing, which readily allows for an exaggerated sense of understanding. Putting it crudely, there’s more room for bullshit. Extralogical reasoning can’t remedy the disadvantages of low self-correctivity, but it is a strong antidote. Essentially, this is what ER is: an epistemic system designed to compensate for the disadvantages, or potential disadvantages, of a lack of social and intellectual supervision and correction (something every endeavor suffers from to at least some degree). Unfortunately, the correlation between the CAPACITY to be logical and the INCLINATION to be so in the absence of incentives, such as those mentioned above, is far weaker than most think. Thus, when self-correctivity approaches one hundred percent, the difference between a smart and a dumb person approaches infinity; when self-correctivity approaches zero, so too does the difference between smart and dumb.
Evaluating talent, especially in areas outside of one’s specialty, is not an area with high self-correctivity. JUDGING talent is, thus, far more a matter of JUDGMENT than of competence. One should never underestimate how much tinkering an idea, ability, or understanding (of a topic) may need to undergo before it can reach fruition, and anyone who lacks an appreciation for such tinkering should never be trusted—for hubris is more dangerous than ignorance (or at least it’s wise to believe so).
If one doesn’t sufficiently manage their thinking, exposure to a field or subject without direct participation (and, thus, without adequate correction) can easily lead to an artificial sense of confidence, causing the additional KNOWLEDGE to result in a lesser UNDERSTANDING. In other words, additional exposure without a corresponding increase in correction is potentially dangerous. This is especially applicable to assessments of talent since people almost always have access to at least some useful information.
The availability heuristic bias is the natural human tendency to overestimate the significance of the information one has about something and, even more, to underestimate the significance of the information one doesn’t. People readily mistake a piece of knowledge for the whole picture, and the tendency to do so bears little relationship to intelligence. This is due to the tendency to jump to conclusions (which the human thinking organ does BY DEFAULT) as well as people’s vulnerability to hubris. However, as ER often says, since hubris isn’t so much overestimating oneself relative to other people as it is overestimating human capability in general, a person prone to overestimating themselves is often, likewise, prone to overestimating others, and vice versa.
The worst way to avoid a trap is to think you’re impervious to it. Thus, in matters of judgment, it’s almost always better to rely on someone who’s just smart and knowledgeable enough but who knows their limitations and is good at avoiding mistakes than on someone who’s much smarter and more knowledgeable but cocky, arrogant, and prone to overestimating themselves.
And in the case of judging talent, the dangers of the availability heuristic bias are made all the worse because the differences between good and great are often very nuanced—and parity increases along with ability. Sometimes, the differences are so nuanced the EXPERTS don’t fully know why certain participants are more successful than others.
Included among the things that often go unaccounted for, especially by non-experts, are what I call potential weak links: areas where one need not be strong, but where weakness is a serious problem. After all, succeeding at something isn’t just about what you do well, but about what you don’t do poorly. This is doubly problematic when assessments are made by hubristic thinkers, who tend to focus overly on strong attributes.
Matters are complicated further by what the famous cognitive psychologist Daniel Kahneman calls competition neglect. People are too quick to think that if something is good, it should be successful. Year after year, for example, high-budget films are released at peak times along with high-budget competitors—to their ruin. When asked the reason for this repeated mistake, an executive answered, “Hubris. Hubris….” Similarly, many people would be quick to think they could predict the success of a song, album, or band. A 2004 study took dozens of unpublished songs and had thousands of teenagers and adults rate them to see how predictable success really was (see Everything Is Obvious by Duncan J. Watts). The researchers found that while the highest-rated songs weren’t random, they weren’t terribly predictable, either. Some of the same songs succeeded repeatedly, but their degree of success varied markedly. Moreover, the participants who could see the ratings of other participants gave very different results than those who didn’t, with the adults being just as influenced as the teens.
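To make that unpredictability concrete, below is a minimal Python sketch of a cumulative-advantage ("rich get richer") model loosely inspired by that kind of experiment. Everything in it is a made-up assumption for illustration (the song qualities, the social-influence weight, the number of listeners, and the eight parallel "worlds"), so treat it as a toy sketch, not a reproduction of the study. Each listener picks a song with probability proportional to its intrinsic quality plus, in the social-influence condition, its current popularity, so early luck compounds.

```python
import random

def run_world(quality, n_listeners, social_weight, rng):
    """Simulate one 'world': listeners pick songs one at a time.
    A song's chance of being picked is proportional to its intrinsic
    quality plus social_weight times its current pick count (a simple
    cumulative-advantage rule), so early luck can snowball."""
    counts = [0] * len(quality)
    for _ in range(n_listeners):
        weights = [q + social_weight * c for q, c in zip(quality, counts)]
        song = rng.choices(range(len(quality)), weights=weights)[0]
        counts[song] += 1
    return counts

def rank_of(song, counts):
    """Final chart position of a song (0 = most picked)."""
    order = sorted(range(len(counts)), key=lambda i: -counts[i])
    return order.index(song)

rng = random.Random(42)
n_songs = 48                                                # hypothetical catalog size
quality = [rng.uniform(0.5, 1.5) for _ in range(n_songs)]   # hidden "true" appeal
best = max(range(n_songs), key=lambda i: quality[i])        # the objectively best song

for label, w in (("independent listeners", 0.0), ("social influence", 0.05)):
    ranks = [rank_of(best, run_world(quality, 2000, w, rng)) for _ in range(8)]
    print(f"{label:22}  final rank of the best song across 8 worlds: {ranks}")
```

In runs of this toy model, the best song tends to land near the top in every independent world, while under social influence its final rank bounces around from world to world. That is roughly the pattern Watts describes: quality still matters, but which good song "wins" depends heavily on which one got lucky early.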
In addition to overconfidence and the availability heuristic themselves, there are several reasons (or potential reasons) for competition neglect:
One, the underestimation of the role of luck, due to optimism, the availability heuristic, and the causation bias—the natural tendency to be too quick to assume the relationship between cause and effect will always be ascertainable and satisfying. Two, optimism itself. Three, the above-average effect: people are too quick to assume that they, and those they are partial toward, are better than average at various things, general and specific. Sometimes people substitute a harder question with an easier one, answering “are you/they above average?” with “are you/they good?” The above-average effect might also be attributable to the negativity bias: the tendency to be more likely to remember negative cases or examples of something.
Ultimately, the above leads to what I call the guess-ability trap: the more guessable something is, the easier it gets to overestimate how PREDICTABLE it is.
Indicators of talent come in two forms: performance data (comprising what I’ll call standard performance data, supplemental performance data, and cross-performance data) and what I’ll call extra-data observations.
Standard performance data is simply one’s success at an activity relative to the effort put in at that point in time—e.g., how good an athlete is at his sport considering his age and experience level. Supplemental performance data is data that measures specific abilities: IQ tests, SAT scores, how fast someone can run, how much weight they can lift, etc. Cross-performance data is data from similar activities—the assessment of a novice’s potential at one sport, for instance, should consider his success at other sports.
But a person who prides themselves on judging talent should know that the true measure of natural ability is not how good someone is when they first start, but their ultimate potential if they put in a full effort. Oftentimes, the most talented are far from instant successes, for talent can be dormant, making the term “a natural” a questionable description of the talented. Those who claim to be good at recognizing talent put heavy stock in extra-data observations: indicators outside performance data, especially when there’s a comparative lack of success in the data or a shortage of performance data itself. Such indicators are found in the physics student who asks good questions and invents ways of solving problems despite comparatively poor performance on tests; the physically weak first-year wrestler who gets muscled around by other beginners but has the agility and instincts to eventually surpass them; or the person whose sharp logical reasoning in conversation suggests a talent for math or science. Those who appreciate such indicators know that the most successful at an activity are often distinguished not by skill at standard execution, but by innovation, improvisation, and creativity, things not as easily taught, if at all.
Such talents, however, may not correlate with standard natural abilities (e.g., raw intelligence or athleticism) as much as some think. Passion for an activity is another indicator of talent, especially of the relevant creativity, but it’s not necessarily a manifestation of talent itself. Just as you can have the symptoms of a disease without the disease itself, you can have symptoms of talent without talent (ER calls this the fallacy of indications). Twenty people could have multiple symptoms of a property for every one who has it. And while innovation and creativity may separate the good from the great, if one can’t become at least mediocre at standard execution, they may not matter much. Einstein, for example, notoriously struggled with math by the standards of mathematical physicists, but he was still good enough to become one in the first place; he still earned a PhD from a prestigious school and discovered general relativity. If a person can’t at least earn an undergraduate degree in physics, all the creativity in the world will be of little consequence (at least not in the modern era). In other words, a predilection for physics may gift you with an ability to ask good questions about the physical universe, but it matters little if you aren’t smart enough to answer some of them.
Moreover, however real dormant talent may be in the GENERAL sense, in individual cases such hypotheses are non-falsifiable, meaning they can only be proven right, never wrong. It could be claimed that any person who’s done badly at an activity, and at however many similar ones, has dormant talent, and the claim can never be disproven. And a theory that can’t even in principle be disproven should only be allowed to carry so much weight—especially if it becomes the justification for doing the same thing over and over again without getting what you want (the standard definition of insanity).
Both performance data and extra-data indicators should be taken seriously. However, it’s been my experience that people tend to rely too heavily on one or the other; few adequately embrace both while prudently suspending judgment. The more pride one takes in judging talent, the more they tend to be biased toward extra-data indicators, but in doing so, they expose themselves to the disadvantages and epistemic traps of low supervision and correction. People love making predictions and prognoses, and as I’ve said in other posts, these carry a confirmation/survivorship bias: the ones that prove true are more likely to be remembered, and those that don’t are often easily explained away.
The total rejection of the “wisdom” of the “experts” in favor of straight statistics resulted in one of baseball’s greatest recruiting efforts, allowing the Oakland A’s to soar through the MLB ranks with one of the league’s cheapest lineups. In his book Moneyball, based on Billy Beane’s management of the team and related research, Michael Lewis reported on how easily recruiters were deceived by athletes’ appearances, laymen’s beliefs, superstition, and simple fallacious reasoning. As someone who’s faked talent at many things, I can tell you that appearance and various forms of posturing can fool a lot of people.
Evaluations of talent are less necessary and more speculative than many think. As ER drills into the heads of its readers, a design flaw of the human thinking organ is that it doesn’t actively distinguish between WHAT it observes and how it INTERPRETS it; left to its own devices, without a thinker’s deliberate intervention, it will blur observation and interpretation together, making people prone to jump to conclusions and creating the illusion that opinions are always mandatory.
Such prognoses can even be harmful. People set goals based on beliefs. To paraphrase Sherlock Holmes, it’s unwise to speculate about conclusions before you have sufficient information, because doing so can prematurely and unnecessarily bias you, making you prone to twist facts to suit explanations rather than vice versa. A similar twisting of facts and explanations can occur when people set goals prematurely. People like to reinforce their beliefs, and they like to reinforce their goals with their beliefs. Setting goals prematurely can lead to distortions in people’s models of reality and impair their ability to make decisions—the opposite of what goal-setting is supposed to do.
ER only recommends setting goals when they affect decisions. And setting goals doesn’t motivate people as much as is usually believed. The setting of specific goals is much more an EFFECT of motivation than a CAUSE. In other words, someone who’s legitimately motivated does tend to set specific, official goals, but they set them mostly BECAUSE they’re motivated; little if any of their motivation comes from the goals themselves. Confusing cause and effect this way is a common instance of confusing correlation with causation, one of the most common fallacies. Furthermore, encouraging people to set unnecessary goals can condition them to believe that goal-setting is a social and psychological activity designed to show off and gratify oneself.