/r/criticalthinking


1. Search Philosophy StackExchange thoroughly before posting something new. Don't repeat existing questions or post anything Googleable.

2. Answer with reliable sources.

No unsourced opinions and no deprecated, potentially unreliable, or questionable sources. Cite Wikipedia only as a last resort. Don't play Devil's Advocate.

3. Professionalism and Uberrima fides.

Comply with Grice's Maxims. Behave courteously and respectfully. Write carefully: no infelicities, profanity, solecisms, ungrammaticalities, or typos!

4. Questions shall be substantively interesting and of public importance.

Boilerplate clauses

Moderators have sole and absolute discretion over everything. By submitting anything, you accept all the rules.

Posting minimums: 1 month account age, 100 post karma, 100 comment karma, 200 combined karma.

/r/criticalthinking

8,122 Subscribers

23

Can this be identified as a type of argument fallacy?

Person 1: “Racism is a problem, so we should take these actions to assist people of color.”

Person 2: “Taking these actions or even speaking like this focuses on their race, which is racist.”

I’m thinking of a common conservative (Person 2) argument against things like affirmative action and teaching critical race theory, and also of responding to “BLM” with “ALM” and claiming that to be less racist. It seems like they’re taking the argument and flipping it back onto the other person, but in an equivocal way?

EDIT: Could this be an inverse of the Pink Elephant Paradox?

31 Comments
2022/02/22
21:10 UTC

32

What Is a Sound Argument?

Have you ever wanted to disagree with someone’s argument, but you couldn’t find any flaw in it? It’s possible you were facing a sound argument.

An argument is a series of statements that try to prove a point. The statement that the arguer tries to prove is called the conclusion. The statements that try to prove the conclusion are called premises.

Here’s a sample argument:

Premise 1: If it is raining, then the street is wet.

Premise 2: It is raining.

Conclusion: Therefore, the street is wet.

The above series of statements counts as an argument since it has premises and a conclusion. That’s all it takes for something to be an argument: at least one premise and a conclusion.

A sound argument proves the arguer’s point by providing decisive evidence for the truth of their conclusion.

A sound argument has two features:

  1. The argument has a valid form, and
  2. All the premises are true.

I’m going to talk about these points in order. To understand what a valid form is, we need to understand the logical form of an argument and the logical form of a statement.

What Is an Argument’s Logical Form?

When we say an argument is valid, we are talking about the argument’s logical form. I wrote about valid arguments and logical forms in detail here: https://thinkbuthow.com/valid-argument/

Logical forms are like math formulas. Each comprises variables and operators. For example, the math formula “x + x = 2x” comprises a variable ‘x’ and an operator ‘+’. If we were to plug in the value 1 for x, then we would get “1+1 = 2.” Logical forms are similar. The difference is that instead of mathematical operators, logical forms use logical operators, and instead of variables that are filled in with numbers, the variables of logical forms are filled in with statements.

How do you get at the form of an argument? An argument is a series of statements, so to get at the form of an argument, you need to get at the form of the statements that compose it.

The Logical Form of a Statement

Here are a couple of examples of statements: “It is raining.”; “The street is wet.”

Statements can be combined using logical operators such as the following:

  • Not
  • Both… and…
  • Either… or…
  • If… then…
  • … if and only if…

When we combine two or more statements using logical operators, the result is a compound statement.

For example, the statements, “It is raining,” and, “The street is wet,” can be combined by the logical operator ‘and’ to make a compound statement as follows: “It is raining, and the street is wet.” Or they can be combined using ‘if…then…’ as follows: “If it is raining, then the street is wet.”

Here are more examples of statements formed with logical operators: “It is not raining,” “James is tall, or Adam is fast,” “Either you can go straight, or you can make a right,” “Shawn can win the race if and only if he enters it.”
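To make these operators concrete, here is a minimal Python sketch (my own illustration, not part of the original post; the statements and variable names are just examples) that treats each simple statement as a True/False value and builds compound statements with the operators listed above:

    # Treat each simple statement as a truth value (True or False).
    it_is_raining = True
    street_is_wet = True

    # "It is not raining."
    negation = not it_is_raining

    # "It is raining, and the street is wet."
    conjunction = it_is_raining and street_is_wet

    # "Either it is raining, or the street is wet."
    disjunction = it_is_raining or street_is_wet

    # "If it is raining, then the street is wet."
    # A conditional is false only when the "if" part is true and the "then" part is false.
    conditional = (not it_is_raining) or street_is_wet

    # "It is raining if and only if the street is wet."
    biconditional = it_is_raining == street_is_wet

    print(negation, conjunction, disjunction, conditional, biconditional)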

Now that we understand the logical form of a statement, let’s talk about the logical form of an argument. An argument is composed of statements. The premises and the conclusion of an argument are all statements. So if you want to know the logical form of an argument, you start by identifying the logical form of the statements composing it.

Here’s an example of an argument:

Premise 1: All mammals are animals.

Premise 2: All dogs are mammals.

Conclusion: Therefore, all dogs are animals.

Here’s the form of the argument:

All M are A

All D are M

Therefore, all D are A

Logicians have a name for this kind of argument: it is a categorical syllogism, and this particular form of it is deductively valid.

Now, an argument’s form is valid if and only if the truth of the argument’s premises guarantees the truth of its conclusion. If we plug in true premises, in other words, a valid form guarantees a true conclusion.

A valid form is similar to an accurate math formula. For example, in mathematics, if you want the area of a circle, you first get the formula for the area of a circle: A = πr². At this point, all you need to do is plug the circle’s radius r into the formula to get an accurate result. If you supply an accurate radius, then you are guaranteed an accurate area.

This categorical syllogism has a valid form because if the two premises are true, then the conclusion has to be true. In other words, if premises 1 and 2 are true, then the conclusion (all dogs are animals) has to be true; it’s impossible for it to be false.
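One way to see why this particular form is valid is to model each class as a set and read “All X are Y” as “X is a subset of Y.” The following Python sketch (my own illustration; the example sets are made up) checks the premises and the conclusion for one sample assignment. Because the subset relation is transitive, any assignment that makes both premises true also makes the conclusion true:

    # Model "All X are Y" as "the set X is a subset of the set Y".
    mammals = {"dog", "cat", "whale"}
    animals = {"dog", "cat", "whale", "sparrow"}
    dogs = {"dog"}

    premise_1 = mammals.issubset(animals)   # All mammals are animals.
    premise_2 = dogs.issubset(mammals)      # All dogs are mammals.
    conclusion = dogs.issubset(animals)     # Therefore, all dogs are animals.

    # With a valid form, true premises force a true conclusion.
    if premise_1 and premise_2:
        assert conclusion
    print(premise_1, premise_2, conclusion)  # True True True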

Now that we’ve talked about forms of statements and arguments, let’s talk about what it means for an argument to be a sound argument.

What makes a valid argument into a sound argument?

Now that we understand what a valid argument is, it is easier to understand a sound argument. An argument is sound if and only if it is a valid argument and all the premises are true. Examples of sound arguments include valid categorical syllogisms whose premises are all true.

In order to determine whether an argument is sound, you need to ask the following two questions.

1. Does this argument have a valid form?

2. Are all the premises true?

If the answer to both 1 and 2 is yes, then you know it’s a sound argument.

The following argument is another example of a categorical syllogism:

Premise 1: All men are mortal.

Premise 2: Socrates is a man.

Conclusion: Therefore, Socrates is mortal.

Let’s look at the above example with two questions in mind to determine whether this argument is sound.

Does this argument have a valid form? Yes. The above form is a categorical syllogism, and it is a valid form. Logicians have compiled a list of time-tested valid argument forms such as modus ponens, modus tollens, and disjunctive syllogism. This categorical syllogism is one of the most familiar valid forms; it is valid because if the two premises are true, then the conclusion has to be true.

Are all the premises true? Yes. Both of the premises above are true. Premises are statements. Statements can be either true or false. A statement is true when the world matches the statement. If I were to say, “2 plus 2 is 4,” then this statement is true since it matches how the world is. If I were to say, “2 plus 2 is 5,” then this statement is false since it doesn’t match how the world is.

If you can’t determine whether the premises are true or false, you can choose to withhold judgment. Withholding judgment means you don’t make a decision to accept or reject a claim. For example, suppose you don’t have decisive evidence for or against this claim: “There is life outside of the earth.” You don’t have to make a decision about whether or not the claim is true. You can withhold your judgment till you get more evidence for or against the claim.

If the answer to questions 1 and 2 is yes, then you know that the above argument is sound. You know that the argument actually proves its point. It actually proves that the conclusion is true.

However, if the answer to question 1 is yes, and you’re withholding judgment about question 2, then at least you know that the argument is a valid argument even if you don’t know whether the argument is sound.
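As a rough summary of the checklist above, here is a minimal Python sketch of my own (the inputs are made up, and the wording of the verdicts is mine) that combines the two questions and treats a premise you withhold judgment on as None:

    def assess(valid_form, premise_truths):
        """Combine the two questions: Is the form valid? Are all premises true?
        A premise marked None means judgment is withheld."""
        if not valid_form:
            return "invalid, so not sound"
        if all(t is True for t in premise_truths):
            return "sound"
        if any(t is False for t in premise_truths):
            return "valid but unsound (a premise is false)"
        return "valid; soundness unknown (judgment withheld on a premise)"

    # Socrates example: valid form, both premises judged true.
    print(assess(True, [True, True]))  # sound
    # Valid form, but judgment withheld on one premise.
    print(assess(True, [True, None]))  # valid; soundness unknown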

You can read the entire post here: https://thinkbuthow.com/sound-argument/

5 Comments
2022/02/11
19:35 UTC

14

“Electric cars do more harm than good to our climate”, “Vaccines cause autism”, “5G is deadly to humans” - Try our free AI-powered fact-checking tool!

Tired of your uncle making up claims during family dinner? Fact-check claims faster using AI!

At Factiverse we use AI, ML, and NLP to help researchers and journalists find the most reliable sources. We have just launched our demo, which gives you the option to check any claim or to paste your own text and check all the claims in it.

The AI is built on 12 years of research at the University of Stavanger in Norway. It’s trained on global fact-checking articles to identify traits and signs of credibility. We scan the entire web (not just Google) to find the most credible sources.

In contrast to other fact-checkers, we do not want to tell you what’s true or not, because if we want to combat the spread of fake news, we need to become better at identifying it and assessing sources on our own. We do believe AI and tech can make this a faster process and give you a quicker overview of a given subject, topic, or claim.

We are at an early stage but if you want to have a look and test our demo, you can find it here:

https://factiverse.github.io/ai-editor/

To use it:

  1. Select a claim or type your own to get an overview of the sources disputing, supporting, or conflicting with it.
  2. Copy your own text and easily fact-check claims to see how balanced your story is.

Our goal is to make it faster and easier for people to understand the information around given topics: How much is disputed? How much research is done on the subject? What are the most reliable sources on both sides of the claim?

What do you think? Is this a tool that could help promote critical thinking? We want to build more interactivity so that you get prompts about thinking twice, checking what sources there are, etc.

(Hope this is fine to post here, let me know if not and I'll delete it).

11 Comments
2022/01/28
11:50 UTC

6

What logical fallacy is this?

If being cool were illegal, I'd be a criminal. Not because I'm cool, but because I shot my wife.

  • repost by u/jezzer420 on r/twittercringe.

Is this a vacuous truth, for example? I think it's not, but that's the closest fallacy I can find. Or maybe it's a non sequitur?

10 Comments
2022/01/27
18:18 UTC

52

What Is Confirmation Bias?

Confirmation bias is the tendency to favor information that confirms what you already believe and to avoid information that challenges it. The American Psychological Association defines it this way:

“the tendency to gather evidence that confirms preexisting expectations, typically by emphasizing or pursuing supporting evidence while dismissing or failing to seek contradictory evidence.”

Numerous experimental studies in social psychology have shown confirmation bias to be a common psychological bias, like attribution bias or the Dunning-Kruger effect.

Here’s an example. Suppose you hear about inflation in the US economy. You notice that there are two positions about it:

  1. Inflation is transitory due to the COVID pandemic;
  2. Inflation is already here and ramping.

Suppose that you are going to believe one of the two claims. If you are like most people, you have a tendency to favor information that confirms what you already believe and that doesn’t challenge it.

One option is to seek pieces of information that confirm that inflation is transitory. You go to your browser and type, “Is inflation transitory due to the COVID pandemic?” and then you go on to read many sources that agree that inflation is transitory due to the COVID pandemic. You continue seeking more confirmation that inflation is transitory from your family, friends, and social network. And you avoid any sources that challenge this view.

Another option is to seek pieces of information that confirm that inflation is already here. You go to your browser and type, “Is inflation already here and ramping?” and then you go on to read every source that agrees that inflation is already here and ramping. You continue seeking more confirmation that inflation is already here from your family, friends, and social media. And you avoid any sources that challenge this view.

In both of the above cases, you are only seeking sources that confirm what you already believe. You don’t want to challenge your own belief about inflation. The tendency to look at only the sources that confirm your own beliefs is an example of confirmation bias.

Confirmation bias is often what drives people to accept even the most absurd conspiracy theories. For example, my friend Josh believes that the species Bigfoot exists. Bigfoot is supposed to be a kind of animal living in the woods that looks half-human and half-chimp. Josh’s reason for believing that Bigfoot exists is based on two anecdotes. One night his friend, Steve, was driving around the woods at 2 am, and Steve saw something that he believed was Bigfoot. Later when Josh and Steve went to ask the local Native Americans, the locals forbade them from saying the word ‘Bigfoot’ because, they said, people who utter the word ‘Bigfoot’ have Bigfoot appear in their lives. After those two events, Josh declared that Bigfoot is real.

My friend Josh didn’t consider any information that would counter the belief that Bigfoot exists. For example, Josh never considered that we’ve never found remains of Bigfoot even though we’ve found remains of dinosaurs from millions of years ago. In addition, if it were true that uttering the name ‘Bigfoot’ were to bring Bigfoot in your life, then Josh and Steve could have sat in the woods with their guns and uttered the word ‘Bigfoot’ over and over to prompt Bigfoot to appear. Both of these considerations would have countered Josh’s belief that Bigfoot exists. But Josh wanted to believe that Bigfoot is real, and he was focused only on sources of information that supported his belief. Josh was biased.

Something similar is true of people who believe that the earth is flat, that the moon landing was staged, or that the holocaust didn’t happen. These people step into an echo chamber to confirm their existing beliefs: when they say, “The earth is flat,” the sources they’ve surrounded themselves with echo back, “The earth is flat,” to further confirm what they believe. And the echo chamber never presents them with any sources that disconfirm their beliefs. In other words, people who endorse these views don’t want to see the world for what it is; they only want to see the world for what they want it to be.

Let’s look at another scenario. Let’s say you are looking to invest in a stock. You think that Amazon is a great investment, so you go online to find evidence to support your belief. You type into YouTube, “Amazon stock analysis,” and you watch many videos that echo back what you already believe about Amazon. YouTube is programmed to recognize viewer preferences. Now YouTube starts recommending more and more videos that further convince you that Amazon stock is going to soar. And later YouTube filters out any videos that are going to disconfirm your belief that Amazon stock is a great investment.

It takes discipline and intellectual curiosity to find opposing viewpoints that challenge what you already believe instead of just confirming them. If you truly wanted to see Amazon stock for what it is, you would need to collect some disconfirming evidence and evaluate that evidence.

How to Detect Confirmation Bias

There’s an easy way to determine whether someone is manifesting confirmation bias. Ask them, “What are the arguments against your claim?” If they can’t answer the question, then they’re biased. For example, when I asked Josh for arguments against Bigfoot’s existence, it was clear to me that he had not sought any counterarguments.

You can ask yourself the question, “What are the arguments against my claim?” to detect the bias in yourself. Suppose you were looking to invest in Amazon stock. Ask yourself, “What are the arguments against buying Amazon stock?” If you can’t answer the question, then you’re biased.

How to Disarm Confirmation Bias

To counteract confirmation bias, take these two steps.

Step 1: Withhold Judgment

Withholding judgment is when you don’t make a decision about accepting or rejecting a claim. For good decision-making, you need to understand arguments both for and against a claim. You don’t make a judgment until you have looked at both sides of the claim.

By withholding judgment, you are in a position to see the claim from all angles and strengthen your commitment to knowing and understanding what’s true.

Step 2: Look for Evidence against the Claim Not Just for It.

“I’m not entitled to have an opinion on this subject unless I can state the arguments against my position better than the people do who are supporting it.” –Charlie Munger

This quote from Munger brings up a key strategy for disarming confirmation bias: you need to understand the argument against your position to avoid confirmation bias.

When you endorse a claim, you need to understand the arguments against it. For example, if I believe that Bigfoot exists, then I need to explore the arguments against my belief. I can start by asking some questions like these: If Bigfoot exists, then where are the Bigfoot remains? How come so many people utter the word ‘Bigfoot’ but he doesn’t appear? Answering these questions can get me thinking about possible arguments against Bigfoot’s existence.

Disciplining yourself in the way Munger describes is a way to disarm confirmation bias. When you believe a claim, you need to look for contradictory evidence, i.e., evidence that disconfirms your pre-existing belief, not just evidence that confirms it. The impacts of confirmation bias on your day-to-day life are numerous because confirmation bias affects cognition and how you process information. Among other things, focusing only on confirmatory evidence can lead to overconfidence in the accuracy of your beliefs. It can also distort your memories: even when people explore evidence that contradicts their beliefs, they tend to recall only the information that confirms their beliefs. In addition, when people experience cognitive dissonance (that is, when they encounter evidence that contradicts their pre-existing beliefs), they tend to dismiss that evidence in favor of evidence that confirms those beliefs.

A freethinker is committed to knowing and understanding what’s true, and that involves managing cognitive biases because the latter distort our knowledge and understanding.

Full post here: https://thinkbuthow.com/confirmation-bias/

9 Comments
2022/01/03
18:58 UTC

31

The Real Reason Saying “I Don’t Know” is Hard

As a kid, I had a reputation for being a math whiz. Whatever problem the teacher put on the blackboard, I already knew how to solve it. I aced Geometry, Algebra 1, Algebra 2, and Trigonometry; math just seemed to come naturally to me.

Then, I hit Precalculus.

For the first time, I was getting many things wrong. But by then, I had come to accept the self-image of being a math whiz. I couldn’t imagine being someone who didn’t understand math. So, I began pretending that I knew what I was doing.

But you can’t do that in math. If you don’t get the right answer, you’re just wrong. That’s why math is humbling.

My lack of understanding showed up in my midterm grades, and I ended up barely passing Precalculus. Afterward, I was still pretending to know. I blamed teachers, my school workload, and my part-time job for my struggles. I refused to accept that I didn’t understand advanced math concepts.

I now know what was wrong. I was afraid to say three dreaded words: “I don’t know.”

Why don’t we say “I don’t know”?

I’m not the only one who has a hard time admitting that I don’t know. There are many people who don’t admit to what they don’t know. Why? What’s so scary about it?

It’s not that scary to be wrong about the subject matter—math, in my case. But it is scary to be wrong about yourself, to learn that you’re not the person you imagine yourself to be. In other words, it’s scary to admit that you don’t match your self-image.

A self-image is a description of what we think we are, want to be, or should be. Your self-image could be that you are a conservative—you wear a bow tie and think left-wing people are all out of touch with reality. Your self-image could be that you are a liberal—you wear a Bob Marley shirt and think right-wing people are out of touch with reality. In my case, my self-image was that I was a math whiz, and I wanted to appear knowledgeable.

When we decide on a self-image, we decide to start thinking, feeling, and acting in ways that match that image. We want to match our self-image in reality. But that can take a lot of work. For example, to match my self-image as a math whiz, I needed to work hard at learning advanced math concepts. But I didn’t do that.

If you aren’t able to match your self-image, you have two options:

  • You pretend to live the self-image.
  • You reject that self-image in favor of another.

Pretending to live the self-image

Most people who don’t match their self-image pretend. I used to do the same. I refused to admit that I was actually not a math whiz. As I advanced in the math courses, they got harder. I didn’t want to work hard at math; I wanted the advanced concepts to come easily to me.

Instead of learning and getting better at math, I began to focus on appearing smart in math. The pretender in me would memorize the advanced concepts, but I wouldn’t understand them. Later, my desire to look knowledgeable trickled into other aspects of my life. For example, when I showed up at my first job, I wanted to be known as the most knowledgeable person on the team. My self-image was that I was a knowledgeable employee, that I knew most of the things at work. So at my first job, I went around pretending to know things I barely understood.

My case isn’t unique. There are many people who pretend to be someone they’re not. A lot of people want to look knowledgeable as part of their self-image. That explains why they find it hard to say they don’t know: not knowing doesn’t align with their self-image. When they fail to match that self-image, they start pretending to live their self-image.

Rejecting your self-image for another

The alternative to pretending to look knowledgeable is to admit that you have a false self-image. Once you reject the false self-image, you can pick another self-image that’s more realistic. A realistic self-image allows you to actually acquire knowledge. Your new self-image could include that you don’t understand a lot of things, that you are okay with saying, “I don’t know.”

Self-image is a big driver of how we feel, think, and act. For example, if my self-image is of someone who doesn’t yet understand advanced math concepts, then it is easy for me to say I don’t understand advanced math. When we live into that new self-image, not knowing a lot of things is expected, so that self-image helps us admit the things we don’t understand.

In reality, I needed to let go of my self-image as the math whiz and create another one. I created a new image of myself not as a math whiz, but as a person who wanted to know and understand math. To know and understand math, I needed to recognize my limitations with advanced concepts, spend extra time understanding them, and practice them.

Understanding the difference between false self-images and real self-images helps us pick a self-image that enables us to know and understand what’s true.

Most people who pretend to know a lot of things are self-deceived. They don’t understand what it takes to be competent. They can’t wrap their minds around the amount of work it takes to be competent in any given field. Pretending is easy, but understanding takes a lot of work.

Full post here: https://thinkbuthow.com/i-dont-know/

6 Comments
2021/12/07
22:15 UTC

6

CT: Does experience play a part in critical thinking? The Rodgers/Rogan episode...

Making good decisions requires discovery of facts, analysis, evaluation, and action... and usually time is a factor as well. No one, it must be realized, will ever have 100% of the information... we can only grab hold of as many facts as we can and process them in the time we have. A lot is made of the biases we have - the lens through which we see the world around us - but critical thinking should reveal the inconsistencies in that lens, allowing us to make the best decision. We cannot eliminate heuristics, nor would we want to, but with critical thinking we can discover where they might need to be altered.

Our experiences will always play a part in our decision-making. Critically thinking about which experiences are important and valuable in any decision is what we want to improve upon. Mr. Rodgers, QB of the Green Bay Packers, gathered facts, decided which ones were important, and evaluated them. One of those gathered facts was a personal discussion with Mr. Rogan, who had experience with unpopular medications and their results. That is perfectly legitimate to do and to add to what he had already gathered. And he used everything gathered, then, to come to a personal decision that was consistent with all the data he'd obtained.

To agree or disagree with Mr. Rodgers' decision is fine, but to condemn the decision because he spoke to someone who had experience with the subject of the decision is not. Critically speaking, to agree or disagree with him, we would want to discover all the facts (or at least many of the facts) he used to make the decision, not just one particular experience.

Again, our experiences will always play a part in our critical thinking, as they are part of the fact-gathering process.

8 Comments
2021/11/10
00:43 UTC

88

Thinking Is a Skill

I had just turned 40, and for the very first time I heard the expression, “Thinking is a skill.”

I thought, “If thinking is a skill, then how come no one ever mentioned it to me?”

My parents never mentioned it. My high school never mentioned it. My college never mentioned it. My professional development seminars never mentioned it. The books I read never mentioned it.

Perhaps it was obvious to others that thinking is a skill, but it wasn’t obvious to me. I thought thinking was different from activities that were obviously skill-based, like sports. It was obvious to me that athletes like LeBron James had to learn and practice basketball skills in order to get better at basketball. But I’d always looked at intelligent people like Paul Graham and thought that they were born thinkers. It never occurred to me that they might have learned and practiced how to think. But when I heard thinking was a skill, I realized the people I admired had acquired thinking skills in much the same way I’d seen athletes acquire athletic skills.

I felt like an idiot. For years I’d never made the connection between thinking and other skills.

For the next few weeks, I began thinking this through. If thinking is a skill, I thought, then it must be analogous to other skills. Skills in general have these features:

  • You can acquire skills and get better at them over time.
  • You can be better or worse at skills.
  • You can find better and worse methods for executing a skill.
  • You can practice skills to get better.

So if thinking is a skill, then the following must be true:

  • You can acquire thinking skills and get better at thinking over time.
  • You can be better or worse at thinking.
  • You can find better and worse methods for thinking.
  • You can practice thinking to get better.

To understand these points, let’s start by defining what we mean by ‘thinking.’

What Is Thinking?

People use the word ‘thinking’ to describe activities like calculating, remembering, planning, imagining, and deciding. Among these activities, I’m only interested in the ones that aim at achieving an accurate result. Not all types of thinking aim at accuracy. Imagining or fantasizing counts as thinking, but these activities don’t necessarily aim at representing the way the world actually is. I’m not interested in these types of thinking, but only the ones that aim at accuracy.

The goal of thinking in this sense is to come to know or understand how the world really is. When you’re doing a task like calculating numbers, deciding on a diet to lower your cholesterol, or picking a stock, you are looking to get accurate results. For example, in elementary school, all of us learned a method for adding numbers. In order to add numbers like 18 and 18, you learned to add up numbers in the 1s column and carry any extra digits.

By contrast with this method, if you were just to guess the sum, then you would be inconsistent or inaccurate in your results.

Likewise, suppose your friend recommends that you buy a stock because he notices a long waitlist for the company’s products to arrive and he thinks that the stock price will soar. You buy a few shares based on his recommendation. Later the stock price plummets because the company doesn’t have the funds to run its production line overseas. If you use shallow thinking like this to make investment decisions, then your investment results are going to be inconsistent.

These examples illustrate the type of thinking that I’m talking about here: it’s thinking that aims at achieving an accurate result. Now that we are clear on what thinking is, let’s look at skill.

What Is Skill?

Skills are abilities. In particular, a skill is an acquired ability to do something. There are two types of abilities: abilities we are born with and abilities we acquire. Skills are abilities of this second sort. For example, you are born with the ability to see, but you acquire the skill to read; you are born with the ability to hear, but you acquire the skill to understand a language; you are born with the ability to taste, but you acquire the skill to be a cook or food critic.

If you look at LeBron James, he certainly has exceptional natural abilities like height, the ability to jump, and hand-eye coordination, but he wouldn’t be an exceptional basketball player if he hadn’t acquired and mastered basketball skills like shooting, passing, and defense.

Similarly, if you look at Paul Graham, he certainly has exceptional natural cognitive abilities, but he wouldn’t be an exceptional thinker if he hadn’t acquired and mastered thinking skills like understanding arguments.

In both cases, exceptional performers aren’t born that way; rather, they’ve become exceptional by practicing the right methods over and over.

How do you acquire a skill?

  • First, you find a teacher (a book or a person): somebody who knows how to do the thing you want to learn. The teacher gives you the method for acquiring the skill you want.
  • Second, you imitate the method the teacher shows you, and then you practice over and over until you become proficient in the skill.

Here’s an example. Warren Buffett attributes his success to finding the right teacher and imitating his methods. When Buffett went to Columbia Business School, he discovered that Benjamin Graham was teaching value investing. Buffett quickly took notice and started studying Graham’s methods.

Over time, Buffett would meet with Graham for investing advice, read his acclaimed book, The Intelligent Investor, imitate Graham’s methods, and practice them over and over. That’s how Buffett acquired the skill of investing.

Why Is It Important To Work on Thinking Skills?

There are degrees of mastering a skill. Those degrees correspond to better and worse ways of executing the skill. You can be a decent chef or master chef who runs a world-class restaurant; you can be a decent investor or be a professional investor who handles billions of dollars; you can be a recreational basketball player or a professional basketball player who plays at the highest level.

When it comes to outcomes, consistency is the mark of mastery. A novice can get a good outcome once by dumb luck, but can’t replicate the result over the long term. For example, a novice can cook a delicious meal once by luck, but it is the mark of a master chef to make an excellent meal most of the time. A novice can get lucky and make a great shot once, but it takes mastery to shoot well most of the time. A novice can get lucky and pick the right stock once, but it takes mastery to pick the right stock most of the time.

When it comes to thinking, consistently getting accurate results is important. For example, you can take a guess and find a flaw in someone’s argument. But if you understand the form of the argument and understand the common errors in reasoning, you are bound to consistently arrive at accurate results in your thinking.

I’ve described some of the things that thinking has in common with other skills. Let me highlight a difference: thinking is more general than other skills; it applies to more things.

Different skills have different ranges of application. Some skills are domain-specific; others are general in scope. A soccer goalie has some domain-specific skills to stop the other team from scoring, but running fast is a more general skill that can apply in many sports. For example, a fast runner is also valuable in track, football, and basketball.

Writing is even more general than running. Writing applies far more widely than domain-specific skills like brainstorming ideas, solving sudoku puzzles, or coding in C++.

Thinking is even more general than writing. Thinking is a meta-skill. You can apply thinking to many more things than even writing: understanding an argument, looking for scientific truth, communicating an idea, solving a problem, and coming up with creative ideas.

Not All Thinking Is Created Equal

If the objective of thinking is accuracy and consistency, then there are better and worse methods for thinking. Let’s look at some examples that show how some methods for thinking can be better or worse:

Calculating numbers

Better Method: You can calculate 2+2+2 by adding the numbers one by one, by multiplying 2 by 3, or by using a calculator. These methods will bring you correct results consistently.

Worse Method: You can guess at an answer. Guessing will bring you inconsistent results. Sometimes your guess will be accurate and sometimes your guess will be inaccurate. In fact, most of the time your guess will be inaccurate.
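As a toy illustration of the difference in consistency (my own sketch, not part of the original post), the following Python snippet compares the two methods over many trials: step-by-step addition is right every time, while guessing is only occasionally right.

    import random

    def add_method():
        # Better method: add the numbers one by one.
        return 2 + 2 + 2

    def guess_method():
        # Worse method: guess a number in a plausible range.
        return random.randint(1, 10)

    correct = 6
    trials = 1000
    add_hits = sum(add_method() == correct for _ in range(trials))
    guess_hits = sum(guess_method() == correct for _ in range(trials))

    print(f"Addition correct: {add_hits}/{trials}")    # always 1000/1000
    print(f"Guessing correct: {guess_hits}/{trials}")  # roughly 100/1000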

Baking Cake

Better Method: You can bake a cake by emulating a time-tested method. This method will bring you consistent results.

Worse Method: You can bake based on a hunch, without prior knowledge of baking principles like temperature, texture, and time. This method will bring you inconsistent results: the taste, shape, and texture of the cake will be different every time you bake. Sometimes your hunch will produce a delicious cake, but most of the time you will end up with a burnt, dry, and disgusting cake.

Picking Stocks

Better Method: You can pick stocks by looking at the company’s financials, forecasting future demand, and reviewing past trends. This method will bring you consistent results.

Worse Method: You can consult tarot cards to decide which stock to pick. This method will bring you gains and losses inconsistently. Sometimes you will just happen to pick the right stock, but most of the time you will do no better than throwing a dart at a list of stock tickers and picking whichever one it hits.

The method you use to calculate numbers, bake cakes, and pick stocks can yield results that are consistent or inconsistent. The same is true for thinking methods.

Evaluating a claim

There are better and worse methods for thinking, just as there are better and worse methods for doing other things.

When you’re evaluating claims, for instance, you can use your intuition to determine whether or not a claim is true. Intuition initially seems right to us, but it is vulnerable to cognitive biases. Cognitive biases are judgment shortcuts that help us make quick decisions. Often cognitive biases lead us to inconsistent and inaccurate results. Using your intuition to evaluate claims makes you vulnerable to accepting false claims.

By contrast, you can evaluate a claim by evaluating the reasons to believe that it’s true. Those reasons might come from ordinary people, or they might come from experts.

An expert could be a good starting point to understand the reasons that support a claim, but thinking skills go further than consulting expert opinion. You need to understand how to evaluate the expert’s opinion. You need to acquire thinking skills that enable you to evaluate a claim from an expert or novice.

Those skills include understanding what an argument is, knowing about cognitive biases, knowing about logical fallacies, knowing how and when to use experts, and understanding the need for withholding judgment to improve your decision-making. These methods will bring you consistent results.

Developing Thinking Skills

When it comes to the development of critical thinking skills, it’s not “anything goes”; there are better and worse ways to go about it. The people who acquire critical thinking abilities and practice them get better results: they are consistently more accurate than people who don’t. Thinking skills help you improve your critical thinking process by coming up with better possible solutions, improving your creative thinking, evaluating and synthesizing different points of view, and optimizing your problem-solving skills.

Some of those methods to improve your cognitive skills are logic, mathematics, probability, and the rules for using language—natural or artificial.

On the journey to develop strong critical thinking skills, you need to know and understand what’s true. They don’t teach these skills in high school. Many times they don’t teach them in college either. This is a shortcoming of higher education which I discuss here. Instead, adult learners have to pick up these skills on their own and practice applying them on their own in everyday life. That’s actually why I started Think, But How?—I wanted to provide adult learners with resources to improve their thought process and become strong critical thinkers.

One way to achieve this is by embarking on the free thinker journey. I write about my quest to see the world for what it is rather than what I want it to be. I hope by reading, learning, and imitating these methods you can optimize your own thinking skills.

You can read the entire post here: https://thinkbuthow.com/thinking-skills/

11 Comments
2021/11/09
17:25 UTC

27

Circular Reasoning: We All Saw Our Parents Doing This

Did you ever have a conversation with your parents like this:

Parent: “It’s time to go to bed.”

Child: “Why?”

Parent: “Because this is your bedtime.”

At the time, you might have felt unsatisfied with their response, but you didn’t know how to argue against them. Knowledge is power, and in this case you were powerless to resist your parents because you didn’t know about logical fallacies—errors in reasoning.

Here’s another example to illustrate the same kind of fallacy:

Circle: “Skydiving is dangerous.”

Me: “Why?”

Circle: “Because it’s unsafe.”

The fallacy Circle commits (the same one committed by the parent in the earlier example) is called circular reasoning or begging the question. Circular reasoning happens when the arguer assumes that the conclusion is true rather than proving that it’s true. To better understand what this means, let’s first go over what an argument is.

An argument is a series of statements that try to prove a point. The statement that the arguer tries to prove is called the conclusion. The statements that try to prove the conclusion are called premises.

In order to prove a conclusion, the premises can’t include the conclusion itself. That would be like trying to prove that what The New York Times says is true by quoting the Times itself. My friend Circle can’t prove that skydiving is dangerous simply by saying it’s dangerous. In other words, he can’t prove it’s dangerous by saying, “Skydiving is dangerous because it’s dangerous.” You can’t prove something true simply by saying it or repeating it.

Rather, to prove something true, you need to bring forward reasons that are independent of the conclusion—reasons that don’t already assume or presuppose that your conclusion is true. For example, to prove that skydiving is dangerous, Circle would need to cite some data about, say, how frequently people get injured or die while skydiving.

When you present an argument you’re supposed to be giving reasons to think that the conclusion is true. But when someone commits the fallacy of circular reasoning, they’re failing to provide any reasons to think the conclusion is true. Here’s another example that illustrates this point:

God exists.

Therefore, God exists.

You can see in this example that there is no reason to believe the conclusion of the argument. Instead, the arguer has simply restated the conclusion; they’re presupposing that their conclusion is true. More precisely, they’re using their conclusion as a premise or presupposition. That’s a way of defining circular reasoning: circular reasoning occurs when someone uses their conclusion as one of their premises.

Typically circular reasoning isn’t as obvious as this example. Usually, when people commit the fallacy they don’t restate the conclusion verbatim; they instead change the way it’s worded.

Think again about my conversation with Circle. Here’s Circle’s so-called argument:

Premise: Skydiving is unsafe.

Conclusion: Skydiving is dangerous.

Circle says skydiving is dangerous because it’s unsafe. The word ‘unsafe’ is different from the word ‘dangerous.’ The problem is, even though ‘unsafe’ and ‘dangerous’ are different words, they still mean the same thing. In reality, then, Circle is saying that skydiving is dangerous because it’s dangerous; he’s just using a different word for ‘dangerous.’

Changing the word creates the illusion that Circle is presenting real reasons to believe his conclusion that skydiving is dangerous, but in fact, he’s just restating the conclusion.

Here’s another example of this:

The word of God is true.

Therefore, the Bible is true.

Once again, the arguer isn’t providing any reasons to think the conclusion is true, but is simply replacing one expression for another: ‘word of God’ for ‘Bible.’
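As a rough, purely illustrative sketch of this pattern (a toy example of my own, not a serious natural-language tool; the synonym table is invented for the two examples above), you could flag an argument as circular when a premise is the conclusion itself or a listed synonym of it:

    # Toy circularity check: a premise that merely restates the conclusion
    # (verbatim or via a known synonym) provides no independent support.
    SYNONYMS = {
        "skydiving is unsafe": "skydiving is dangerous",
        "the word of god is true": "the bible is true",
    }

    def restates(premise, conclusion):
        p = premise.lower().strip().rstrip(".")
        c = conclusion.lower().strip().rstrip(".")
        return p == c or SYNONYMS.get(p) == c

    premises = ["Skydiving is unsafe."]
    conclusion = "Skydiving is dangerous."

    if any(restates(p, conclusion) for p in premises):
        print("Circular: a premise just restates the conclusion.")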

Here are a few more examples of circular reasoning:

Example #1

“The death penalty is justified because the government has good reason to put someone to death for serious offenses.”

Explanation: In this case, the arguer isn’t providing any reasons that prove the death penalty is justified. They are just restating their conclusion using different words. They’re using ‘has good reason to’ in place of ‘justified.’ The arguer doesn’t give any reasons to think that the conclusion is true.

Example #2

“The death penalty is never justified because taking a human life is always wrong.”

Explanation: In this case, the arguer isn’t providing any reasons that prove the death penalty is never justified. The arguer is just substituting ‘always wrong’ for ‘is never justified.’ The arguer doesn’t give any reasons to think that the conclusion is true, but is simply restating the conclusion using different words.

Example #3

“Smoking is bad because it has a negative impact on your health.”

Explanation: In this case, the arguer isn’t providing any reasons that prove that smoking is bad. The arguer is just replacing ‘bad’ with ‘negative impact.’ The arguer doesn’t give any reasons to think that the conclusion is true, but is simply restating the conclusion using different words.

Let’s go back to the parent example:

Parent: “It’s time to go to bed.”

Child: “Why?”

Parent: “Because this is your bedtime.”

Explanation: The parent is saying, “It’s time to go to bed because it’s time to go to bed.” There is no reason provided to support the conclusion. They’re just restating the conclusion using different words.

Here’s another example of a parent using circular reasoning:

Parent: “Brushing your teeth is healthy.”

Child: “Why?”

Parent: “Because it's good for your teeth.”

Explanation: The above is an example of how some parents explain to their kids why they should brush their teeth. In this example, ‘healthy’ and ‘good’ mean the same thing. The parent is not giving any reasons to believe that brushing your teeth is healthy for your teeth. They’re just restating the conclusion using different words.

How to Disarm Circular Reasoning

All fallacies are errors in reasoning. Circular reasoning in particular happens when the person making the argument assumes their conclusion is true instead of proving it’s true. It’s like a prosecutor making a case by saying, “Mr. Smith committed this felony because he did it.” The prosecutor isn’t giving any reasons to support his conclusion.

There are two steps for disarming circular reasoning:

  1. Get clear on the terms.
  2. Point out that the arguer is simply restating their conclusion, and not providing any reasons to accept it.

Suppose I commit a circular reasoning fallacy: “Circular reasoning is bad,” I say, “because it’s stupid.”

First, ask me to clarify my terms. For example, you can say, “What do you mean by ‘bad’ and ‘stupid’?” Asking for clarification will give you more clarity on what I am trying to say.

Second, you can point out that I’m just restating my conclusion. For example, you could say, “It sounds like ‘bad’ and ‘stupid’ mean exactly the same thing here. In that case, you are not giving me any reasons to believe your conclusion. You’re just restating your conclusion by using different words.”

Circular reasoning is a common fallacy because people simply want you to believe their conclusion without giving any support. The two-step process helps you ask for support for the conclusion, and it also helps you identify and avoid the fallacy.

Read the entire post here: https://thinkbuthow.com/circular-reasoning/

4 Comments
2021/11/02
01:04 UTC

13

Is there a certification test in critical thinking?

Is there a certification exam in critical thinking where I can pass it and earn some type of certification as a critical thinker?

What I've found:

I know that there are quite a few courses offered by universities and other institutions that claim to teach critical thinking and that award a certificate for surviving the class, but I'm looking for some sort of exam-based qualification where I can study on my own, go in, pass the test, and walk out with some sort of certification, degree, or other qualification in critical thinking, without having to sit through lectures or do weekly homework.

There are several exams out there that purport to test critical thinking, such as the California Critical Thinking Skills Test (CCTST) and the Collegiate Learning Assessment (CLA), but these seem to be marketed as diagnostic and/or institutional progress assessments rather than something for individuals to pass and earn a qualification. What I'm looking for is an exam that is qualification or certification-oriented, e.g. you pass the test and earn the title of Certified Critical Thinker Level 3 that you can then put on your resume or business card.

To be clear, I'm specifically not interested in formally enrolling in a critical thinking course or advanced critical thinking course. What I'm looking for is some type of certification or credential where anyone can go and take a test or challenge in critical thinking skills and anyone who passes gets certified.

9 Comments
2021/10/09
15:18 UTC

9

Cognitive Bias: A Feature, Not a Bug

Sometimes people think something is a design flaw, but it’s really a design feature.

Let’s pretend that you are new to Gmail. As a new user, you find yourself complaining about the five-second delay in sending email messages, and you call it a bug, a flaw in the software. However, after using Gmail for a while, you realize that you use the email recall feature several times a month. After using the recall feature, you think to yourself, “Wow! I love Gmail’s feature that allows me to recall my email!”

The Google engineers had an objective in mind: they wanted users to have the ability to recall incomplete and embarrassing emails, and they intentionally designed the app to have that feature.

The Gmail five-second delay might look like a design flaw, but it is a design feature.

Something analogous is true of cognitive biases.

Cognitive biases are judgment shortcuts that predispose us to belong to a group and to act decisively when we have gaps in our information. People think that they’re design flaws in human thought—and with good reason: cognitive biases can damage us and the people around us. They can lead to poor decisions like judging someone based on their religion, school, or background. But, in fact, cognitive biases aren’t design flaws but instead design features of human thought.

Think back to our Gmail example. We can easily get fooled into thinking that Gmail’s five-second delay is a design flaw, but in reality, it is a design feature to give us time to recall an email. We can easily think that cognitive biases are design flaws, but in reality, nature designed these features to enhance our chances of survival and reproduction.

To understand this idea, you need to understand how we evolved the cognitive biases we have.

The Evolution of Cognitive Biases

Imagine that nature is an engineer. For nature to make sure that each species thrives it has two objectives:

1. Members of the species must survive;

2. Members of the species must reproduce.

Because nature has these two objectives, natural selection equips organisms with adaptations that increase the likelihood of (1) and (2). Cognitive biases are among the adaptations that natural selection equipped humans with.

The species that adapt to their environment get to survive and reproduce. For example, when an asteroid hit the earth about 65 million years ago and conditions on the earth dramatically changed, a large share of animal species, including the dinosaurs, went extinct because they could not adapt to the new environment. The species that adapted got to survive and reproduce.

Cognitive biases are features that enhance humans’ ability to survive and reproduce. They can be understood in terms of two kinds of dispositions:

1. A disposition to live in groups;

2. A disposition to make snap judgments.

A disposition to live in groups is a feature designed by nature for us to stick together to survive and reproduce. This disposition is one of the reasons why people want to be part of racial, national, religious, sports, city, or hobby groups. A larger group gives us more eyes and ears to spot danger. Also, living in a larger group gives us more chances to find a mate since a larger group has a larger pool of mates.

The disposition to make snap judgments, on the other hand, is a feature designed by nature for us to fill in the blanks when we lack information. This feature enabled humans to survive in hostile environments, so they could act decisively in life-and-death situations with very little information. For example, if you were living in the savannah and you met someone outside your tribe, then with very little information, you labeled them friend or foe. Your whole tribe’s survival depended on your decision to label the outsider correctly without adequate information. Humans that learned to make snap judgments based on facial features, clothing, or other superficial attributes had a better chance of surviving and reproducing.

Since these two dispositions allow humans to survive and reproduce, these are design features. They can nevertheless look like flaws because there are many situations in which they lead to errors in judgment. For example, imagine you’re interviewing an overweight guy who’s poorly dressed. You judge him to be lazy and incompetent based on his looks. It turns out that this man is like a younger Winston Churchill, and you miss out on a great hire.

Managing Cognitive Biases

Many people go to great lengths to get rid of cognitive biases by reading books, going to seminars, or spending years in meditation. The reality is that we can never get rid of cognitive biases, because they’re features that we have as a result of natural selection.

So what do we do then? The best option for us is to accept these design features and learn to manage them. I wrote about 4 steps to manage cognitive biases here. I outline them briefly below.

Step 1: Look for snap judgment warning signs:

  • Do you feel very confident about a judgment you’ve just made?
  • Are you making any decisions because of your disposition to join a group?
  • Ask yourself your real reason for arriving at that judgment.

Step 2: Evaluate your reasons for making the judgment:

  • If your reason for making the judgment is X, then ask yourself, “Is it possible for X to be true, and yet for my judgment to be false? Can I imagine any circumstances in which X is true and my judgment is false?”
  • If the answer is yes, then ask yourself a further question: “Is it likely for X to be true and my judgment false?” If the answer is yes, then you need to withhold judgment until you get better evidence.

Step 3: Seek reasons that actually establish that your judgment is accurate or inaccurate:

  • In step 2, you made sure your initial reasons actually support your judgment.
  • If you find that those reasons don’t support your judgment, then you need to collect more evidence to make an accurate decision.

Step 4: Continue to withhold judgment until you get more evidence:

  • If the evidence that you have available is not sufficient to support the conclusion, default to withholding judgment.

You can read the entire post here: https://www.thinkbuthow.com/bias-feature/

8 Comments
2021/09/17
19:25 UTC

4

Are cultures based on lies more hierarchical or less hierarchical than others?

My first idea is that they are less hierarchical than other cultures, since subordinates can get away with stuff just by lying about it. However, if the culture is based on lies, everyone is encouraged to kiss the boss's ***, subordinates just follow instructions not knowing what else to do, and the boss will try to restore order by responding to the lies with factory-like discipline, so it could be that cultures based on lies are more hierarchical than other cultures.


26 Comments
2021/09/14
17:44 UTC

20

What Is a Fallacy?

Suppose I ask you to multiply two large numbers, say 12,653 and 65,321. How would you get the correct answer? You’d probably use a calculator or the good old multiplication algorithm you learned as a kid. One thing is clear: if you don’t use the correct method, then you’re not guaranteed to get the correct answer.

Suppose now that I ask you to defend some claim that you believe, that is, to give me reasons to believe that the claim is true. What’s true in the multiplication case is also true here: if your reasoning doesn’t follow a correct method, then you’re not guaranteed to reach a correct conclusion.

Reasoning, or argumentation, is the process of supporting a statement by appeal to other statements. The statement you’re trying to support is called the conclusion, and the statements that are supposed to support it are called premises. Reasoning can be correct or incorrect in just the way that mathematical calculation can. When reasoning is performed incorrectly, we say that it commits a fallacy.

A fallacy is an error in reasoning.

The telltale sign of a fallacy is this: even if your premises are true, they still tell you nothing about whether or not your conclusion is true. Let’s look at an example. Here are two arguments:

Fallacious Argument A

  1. If it’s 2021, then it’s the 21st Century. (Premise, true statement)
  2. It’s the 21st Century. (Premise, true statement)

Therefore, it’s 2021. (Conclusion, true statement)

Fallacious Argument B

  1. If it’s 2016, then it’s the 21st Century. (Premise, true statement)
  2. It’s the 21st Century. (Premise, true statement)

Therefore, it’s 2016. (Conclusion, false statement)

Argument A and Argument B have the same form. We can represent that form as follows:

Affirming the Consequent (Fallacy)

If P, then Q

Q

Therefore, P

Here ‘P’ and ‘Q’ are variables. In Argument A, the variable P has the value ‘it’s 2021’ and the variable Q has the value ‘it’s the 21st Century’. In Argument B, the variable P has the value ‘it’s 2016’ and the variable Q has the value ‘it’s the 21st Century’.

When we plug in these values for the variables, we end up with true premises in both of the arguments: it’s true that if it’s 2021, then it’s the 21st century; it’s true that if it’s 2016, then it’s the 21st century, and it’s true that it’s the 21st century.

Both arguments, then, have true premises. If we reason correctly from true premises, then we should arrive at a true conclusion every time. By analogy, if we correctly execute a multiplication algorithm then we should arrive at the correct product every time.

But notice what happens when we reason by affirming the consequent: sometimes true premises yield a true conclusion, and sometimes they don’t. This shows us that reasoning in this way is unreliable. Even if you have true premises, those premises still tell you nothing about whether or not the conclusion is true.

That’s why we call this form of reasoning a fallacy. It’s an example of incorrect reasoning: even if the premises are true, they still don’t give you any reason to accept the conclusion.

You can contrast affirming the consequent with a correct form of reasoning called modus ponens. Here’s an example:

Valid Argument (Modus Ponens)

  1. If it’s 2021, then it’s the 21st Century. (Premise)
  2. It’s 2021. (Premise)

Therefore, it’s the 21st Century. (Conclusion)

What makes this argument valid is that if its premises are true, then its conclusion is guaranteed to be true. What secures this guarantee is the argument’s form which we can represent as follows:

Modus Ponens (Valid)

If P, then Q (Premise)

P (Premise)

Therefore, Q (Conclusion)

If we fill in values for P and Q that make the premises of the argument true, then it is impossible for the conclusion to be false. That’s what makes an argument valid.

By contrast, we’ve seen that with a fallacy, even if the premises are true, it’s still possible for the conclusion to be false. That’s what makes fallacies unreliable forms of reasoning.
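
To make the contrast concrete, here is a minimal Python sketch (my own illustration; the post contains no code). It brute-forces every truth-value assignment for P and Q and looks for a counterexample, that is, a row where all the premises are true but the conclusion is false. Modus ponens has no counterexample; affirming the consequent does.

    from itertools import product

    def implies(a, b):
        # Material conditional: "if a then b" is false only when a is true and b is false.
        return (not a) or b

    def counterexamples(premises, conclusion):
        # Return every (P, Q) assignment where all premises are true but the conclusion is false.
        found = []
        for p, q in product([True, False], repeat=2):
            if all(premise(p, q) for premise in premises) and not conclusion(p, q):
                found.append((p, q))
        return found

    # Modus ponens: If P, then Q; P; therefore Q.
    print(counterexamples([lambda p, q: implies(p, q), lambda p, q: p],
                          lambda p, q: q))   # [] -> valid: no counterexample exists

    # Affirming the consequent: If P, then Q; Q; therefore P.
    print(counterexamples([lambda p, q: implies(p, q), lambda p, q: q],
                          lambda p, q: p))   # [(False, True)] -> invalid

The single counterexample, P false and Q true, is exactly the situation in Fallacious Argument B: it is the 21st century, but it is not 2016.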

Here are some common fallacies:

  • Appeal to Authority Fallacy: Appeal to authority arguments look to support a claim by appeal to the person who’s making the claim. For example, if I say that there is an afterlife because Aristotle believes in it, this is a fallacy called the appeal to authority.
  • Appeal to Popularity Fallacy: Appeal to popularity happens when someone makes a claim based on popular opinion or on a common belief among a specific group of people. For example, if I say that there is an afterlife because most people believe in it, this is a fallacy called the appeal to popularity. It is a fallacy because the claim is accepted merely because it is popular, not because there is any reason to believe it is true.
  • Ad Hominem Fallacy (also known as a personal attack): Ad hominem means “to the person” in Latin. Ad hominem arguments look to falsify an opponent’s argument by attacking the arguer. For example, “Since Hitler is evil, whatever he says is false.” A claim’s truth or falsity doesn’t depend on who’s making it. Hitler is a bad person, but that doesn’t mean that everything he says is false. (Conversely, just because people are good, that doesn’t mean everything they say is true. Even good people can be wrong.) Dismissing a claim simply because a bad person says it is an example of Ad hominem.
  • Hasty Generalization Fallacy: A generalization is stronger or weaker depending on the size of the initial sample. Hasty generalizations are weak generalizations. A generalization is hasty when we endorse a general claim without having observed a sample large enough to be confident that the claim is true. For example, if someone says, “All the parrots I’ve ever seen are yellow, so all parrots must be yellow,” then they are making a hasty generalization based on seeing a small sample.
  • Straw Man Fallacy: The straw man is a logical fallacy that replaces something (a person, a viewpoint, an argument) with a distorted version that blows the opponent’s position out of proportion to make it easier to attack. For example: Wife: “I’d rather go to a beach than New York City.” Husband: “Why do you hate New York City?” The wife never said that she hates New York City. The husband misrepresents what she says to make her preferences seem more extreme than they are.

You can read the entire post here: https://www.thinkbuthow.com/fallacy/

10 Comments
2021/09/10
23:27 UTC

12

Can a person think their way out of lack of self-discipline and will power, laziness, procrastination, instant gratification and bad thinking patterns?

And how does thinking it through by yourself compare to talking with a friend or a therapist, or reading a book?

12 Comments
2021/08/31
07:58 UTC

20

Critical thinking for kids

I have been thinking a bit about what I can leave for my two boys (ages 9 and 11). While I am not a Dad who will leave them woodworking skills or how to fish, I am a small business owner/chef and will leave them some of the intangibles of being around that environment.

However, what I have been pondering recently is that what I really would like to leave them, or imbue them with, is how to think critically, ask questions, be curious, and use first-principles thinking in their lives.

I think more than ever we need to plant this seed in our children, and so my question is: where would you begin with that for kids? Are there any books that you know of that would be good starting points? Any apps or even online courses?

I am going to need to map this out, and it is a long play, but I would like to start now.

thanks all!

11 Comments
2021/08/25
19:53 UTC

19

4 Steps To Manage Cognitive Bias

People always said I was a calm guy. I never got angry or raised my voice. But it all changed when my daughter turned 3.

Like many 3-year-olds, she was testing her limits. She would flat out ignore me when I told her to do something, or she would look me in the eye and start drawing on the furniture. I started getting angry at her. I’d shout at her, she’d cry, and I’d feel sick to my stomach. But I couldn’t stop myself.

“I better look for anger management books,” I thought.

Anger is an emotion, and emotions, in general, belong to a rapid response system that helps us respond to changes in our environment that could impact our survival or well-being. Anger, in particular, is a natural response to threats. It’s a rapid response that helps us defend ourselves from an attack. Emotions like anger evolved to enable us to survive. It’s natural for us to experience them, but they need to be managed correctly or else they can have a negative impact on us and the people around us.

What Cognitive Biases and Emotions Have in Common

Cognitive biases are similar to emotions. They belong to a rapid response system that helps us to fill in gaps in our information and act decisively despite those gaps. We all have them. We have evolved to have both anger and cognitive biases to survive and reproduce.

4 Steps to Managing Cognitive Biases

Psychologists have developed techniques for managing emotions. It’s possible to construct analogous techniques for managing cognitive biases. Here are four steps to manage cognitive bias, based on the anger management strategies outlined by the American Psychological Association:

Step 1: Look for fast-thinking warning signs: Do you feel very confident about a judgment you’ve just made? Relax and ask yourself your real reasons for making the judgment. What reasons have you considered to arrive at that judgment? If your goal is to know and understand what’s true, making a snap judgment is not going to help you achieve it. Snap judgments are, by definition, decisions made quickly on the basis of very little information.

Suppose, for instance, that you are interviewing someone for a job. If you make your decision to hire or not within the first two minutes of meeting the candidate, then you’ve made a snap judgment based on a superficial trait like the way they dress, or speak, or look. Know that your first impression probably reflects a mental shortcut that is not going to help you make an accurate judgment. This will be harmful to the candidate, to you, and to your company.

Step 2: Evaluate your reasons for making the judgment: If your reason for making the judgment is X, then ask yourself, “Is it possible for X to be true, and yet for my judgment to be false? Can I imagine any circumstances in which X is true and my judgment is false?” If the answer is yes, then ask yourself a further question, “Is it likely for X to be true and the judgment false?” If the answer to that is also yes, then you need to withhold judgment till you get better evidence.

For our interview example, ask yourself, “Why am I making this judgment about this candidate?” Is your hiring decision based on the candidate’s ability to get the job done, or is it based on some other factor such as their appearance?

Step 3: Seek reasons that actually establish that your judgment is accurate or inaccurate: In step 2, you are making sure your initial reasons actually support your judgment. If you find that those reasons don’t support your judgment, then you need to collect more evidence to make an accurate decision. This is an essential step for any worthwhile decision that has a long-term impact.

Thinking back to the interview example: To collect more evidence to make your hiring decision, you need to ask the candidate a series of questions or ask them to complete a series of tests that will reveal whether they have the needed qualifications. You might need to schedule a second interview or get feedback about them from your co-workers to form a more complete picture of their performance.

Step 4: Continue to withhold judgment until you get more evidence: A free thinker’s goal is to know and understand what’s true. If the evidence that you have available is not sufficient to support the conclusion, default to withholding judgment.

On this point, our interview example poses a challenge because you have to decide on a candidate. If you’re having trouble making a decision based on the information you’ve gotten so far, then maybe you have to schedule a second interview to get more. If you find that your interviews often leave you with inadequate evidence to make a decision, then you need to rethink your interview process. You need to re-engineer the process to secure better evidence in the future.

Read the full post here: https://www.thinkbuthow.com/manage-biases/

3 Comments
2021/08/23
19:36 UTC

23

Cognitive Bias: Our Craving to Judge

We all crave sugar.

Humans need sugar to live, so evolution gave us the craving for it. Sugar was relatively scarce in the environment in which humans evolved, so our metabolism adapted to convert excess calories from sugar quickly and efficiently into fat.

Fast forward 100,000 years: sugar is now abundant in our environment. But human metabolism hasn’t changed. It still quickly and efficiently converts sugar into fat. Since sugar is now abundant, people have gotten fatter and unhealthier.

Something analogous is true of human thought.

Humans need to survive and reproduce, so evolution gave us the desire for group membership. 100,000 years ago, we had many reasons to join a group: protecting ourselves from predators and competing tribes, finding food and other resources, finding potential mates, and so on. So our brains evolved tendencies to think, feel, and act in ways that enabled us to live successfully in groups. Those tendencies include filling in the blanks with guesses, making snap judgments, stereotyping people based on character traits, seeking approval from our tribe, searching for evidence that supports our existing beliefs, and wanting to be right.

Just like sugar cravings, we have these cognitive cravings because at one time they enabled members of our species to survive and successfully reproduce.

Fast forward 100,000 years: Our environment is safe, we have laws, we are protected from predators and other people, we have abundant information at our fingertips, and we have access to abundant food and potential mates. But human cognition hasn’t changed. We still have the same cognitive cravings.

There’s a more familiar term for these cognitive cravings. In psychology, they’re called “cognitive biases,” or “cognitive illusions,” or “cognitive distortions.”

Cognitive biases are judgment shortcuts. They help us to fill in gaps in our information and act decisively despite those gaps. That ability to act decisively helps us act quickly in life and death situations. But let’s be frank: for most of us, these situations are as rare as seeing a shooting star.

We all have cognitive biases. Here are some common cognitive biases:

  • Confirmation bias: A tendency to filter information in a way that confirms what we already think.
  • Optimism bias: A tendency to be over-optimistic, and to underestimate the probability of undesirable outcomes.
  • Self-serving bias: A tendency to claim more responsibility for successes than failures.
  • Availability bias: A tendency to overestimate the likelihood of events based on recent and emotional memories.
  • Anchoring bias: A tendency to rely too much on one trait or piece of information.

How Cognitive Biases Damage Us

We now recognize that these cognitive cravings can damage us and the people around us.

Imagine you just walked into a job interview. You might think that you can impress the interviewers with your experience and problem-solving skills, and you rehearse a number of things to say in response to the questions you expect from them. It turns out, however, that the interviewers make a snap judgment about you: they make their decision in less than a second based on your facial features–a scar on your face, your eye color, the distance between your eyes, and the shape of your mouth–features that reveal nothing about your qualifications.

Princeton University psychologist Alex Todorov did a study about trustworthy faces: we are wired to make a snap judgment about a person within 100 milliseconds of meeting them.

Snap judgments helped our prehistoric ancestors survive when they saw a new face on the savannah, but they don’t help us in the modern world. Today, the negative impact of snap judgments has spread across the world. People make snap judgments based on sex, race, accent, age–the list goes on.

For example, in a study by Cornell University’s Justin Gunnell, unattractive people tend to get longer and harsher prison sentences than attractive people–on average 22 months longer.

There are many examples of how cravings to make snap judgments impact our society for the worse. So how do we manage these cravings?

How To Deal with Cognitive Biases: First Know, Then Manage

We all have these cognitive cravings because they are byproducts of human evolution. We can’t eliminate them any more than we can eliminate our craving for sugar. The best we can do is take steps to manage them and the effects they have on our lives.

How do we manage them? There are 2 steps:

Step 1. Accept that cognitive biases are dispositions of the human brain. We can’t eliminate them but only manage them.

Step 2. Stop acting on them blindly. Instead, reflect carefully on the judgments you make. Understand exactly why you make them before acting on them.

Think back to the interview example. If you’re interviewing a candidate, you might not be able to help making a snap judgment about them. But try to understand the basis of that judgment: Is it a facial feature? Is it something about the way the person dresses or talks? Is it something that is in any way relevant to evaluating the person’s qualifications for the job? If not, then put the judgment aside and focus on what matters.

In addition, be open to other people’s evaluations and criticisms about your judgment. Other people might have had the same initial impression, and may have been able to judge the candidate’s qualifications more clearly in the moment.

Read the rest of the article here- https://thinkbuthow.com/cognitive-bias/

11 Comments
2021/07/26
21:24 UTC

29

The Benefits of Withholding Judgement

Why withhold judgement?

To make good judgements, you first need to assess the situation, and that means you need to get hold of good evidence about it. Until you have that, don’t make a judgement. Instead, withhold judgement (perhaps indefinitely) until decisive evidence becomes available.

By withholding judgement, you are in a position to see the claim from all angles and strengthen your commitment to knowing and understanding what’s true.

Benefits:

  • It helps you avoid making rash decisions. You are able to take your time and evaluate evidence for and against a claim fairly.
  • It makes you a better listener. You are able to pay full attention and listen to arguments for or against a claim instead of rushing to judgement.
  • It makes you more flexible. You can change your mind if new information comes to light.
  • It makes you more open-minded. You are open to new ideas if you are not busy defending your current point of view.
  • It makes you more humble. You realize that you make mistakes, and take care to avoid them and course-correct when needed.

If there is no urgency to a claim, then there is no pressure to judge right away. For example, suppose there is no pressure to make a judgement about the claim that God exists. You can withhold judgement until you get decisive evidence either way. You can afford to be patient when it comes to evaluating the evidence. This leaves a window open to accepting or rejecting the claim as the evidence becomes available.

Read the full article here: https://thinkbuthow.com/rash-decisions/

8 Comments
2021/07/21
22:27 UTC

30

Appeal to Authority Fallacy: How to Avoid It

My friend Gullible and I were having a discussion:

Gullible: “I heard from Andrew Yang that universal income is the best solution to fight poverty.”

Me: “Why do you believe that?”

Gullible: “Yang is successful and famous, so he must be right.”

This is a common informal fallacy called “The appeal to authority.”

What Is an Appeal to Authority Fallacy?

Appeal to authority arguments look to support a claim by appeal to the person who’s making the claim. Since claims are true or false regardless of who makes them, the person who’s making the claim is irrelevant to evaluating the claim’s truth or falsity. That’s why appeal to authority is categorized among the fallacies of relevance: it appeals to irrelevant information in an effort to get people to endorse a claim.

Here are some examples of appeal to authority:

Example #1

Person A: “Appeal to authority is the weakest form of argument.”

Person B: “Why do you think that?”

Person A: “Aristotle said so.”

Explanation: Person A is appealing to Aristotle to prove that appeal to authority is the weakest form of argument. Appeal to authority is in fact among the weakest forms of argument, but the reason it’s weak isn’t that Aristotle said so. The reason it’s weak is that saying something doesn’t make it true—not even if it’s Aristotle who said it.

Example #2

“The Pope said we should not use contraception. Since the Pope is a religious authority, it must be true.”

Explanation: It is fallacious to accept the claim because of the person making it instead of evaluating the claim about contraception on its own merits. If the Pope’s claim is true, it’s not true simply because he says so.

Example #3

“According to Planned Parenthood, women should have the ability to choose abortion. Since it’s Planned Parenthood, it must be right.”

Explanation: You can’t accept the claim simply because Planned Parenthood said it. They could be right or wrong, but we can’t accept their claim simply because they provide reproductive health care. If their claim is true, it’s not true simply because they say so.

Example #4

“Michael Jordan said that for fitness, we should make it mandatory for all children in school to play basketball. Since Michael Jordan was a great basketball player, it must be true.”

Explanation: You can’t accept a claim simply because it’s Jordan. The claim about making basketball mandatory for all children for fitness needs to be evaluated before accepting, rejecting, or withholding judgement about the conclusion. If Jordan’s claim is true, it’s not true simply because he says so.

Appeal to Authority in Advertising

Advertisers have long used appeal to authority to promote their products. They understand that the public will jump on the bandwagon when it appears that an authority approves of a product. Trident Gum used this well-known example:

“Four out of five dentists surveyed recommend sugarless gums for their patients who chew gum.”

Likewise, Wheaties used a similar ad featuring Michael Jordan. Its message: Wheaties is the best way to start the day because Michael Jordan eats Wheaties for breakfast.

Here’s an example of a New York Ad agency using doctors to sell cigarettes:

“More doctors smoke Camels than any other cigarette!”

How to Disarm Appeal to Authority

Most of the time, people appeal to authority in argumentation because it’s easy. It takes critical thinking skills to argue for or against an argument or claim, and most people aren’t skilled at doing that, so they fall back on something that’s more familiar, easy, and comfortable: evaluating people instead of arguments and claims.

If someone accepts a claim because of who said it, then point out that the claim still needs to be evaluated before accepting it. By focusing attention back on the claim, you’re bringing the fallacy to light and bringing the discussion back to where it belongs: the evidence in support of the claim.

You can read the full article here- https://thinkbuthow.com/appeal-to-authority/

15 Comments
2021/07/16
18:43 UTC

15

Appeal to Popularity: Don’t Jump on the Bandwagon

“Be careful when you follow the masses. Sometimes the ‘M’ is silent.” -Anonymous (perhaps Unanimous)

I had a conversation with my friend, Majority:

Me: “Buying a house is not always the best investment.”

Majority: “Most people want to buy a house, so it must be the best investment. It’s just common knowledge.”

This is an example of a fallacy in informal logic called “Appeal to Popularity.”

What Is Appeal to Popularity?

Appeal to popularity happens when someone makes a claim based on popular opinion or on a common belief among a specific group of people. My friend Majority thinks that buying a house is the best investment because it’s a popular view. Because it’s popular, he reasons, it must be true.

Appeal to popularity is an informal fallacy because the popularity of a claim doesn’t provide evidence that the claim is true. Something is not automatically true if it’s popular. If I believe something, that doesn’t make it true. Likewise, if the majority of the people believe something, that doesn’t make it true.

For example, at one time, everyone believed that the sun orbited the earth, but that claim was false.

People are motivated to commit the fallacy because of the bandwagon effect. The bandwagon effect is a cognitive bias. Humans are social animals, and it is common for them to fall for popular beliefs. We have psychological tendencies that promote group living. It’s easier to live in a group if you share the same beliefs as most people in that group, so humans evolved a tendency to believe what most of the people around them believe.

Here’s how the appeal to popularity fallacy looks:

Everyone thinks that X.

So, X must be true.

When someone uses the appeal to popularity fallacy, they will cite a belief that many, most, or all people hold and claim it to be true. This is a fallacious argument because, as we’ve seen, the majority opinion doesn’t always translate into truth. The majority of people can believe something false.

There’s a difference between truth and belief. You can believe things that are false, and you can disbelieve things that are true. The number of people who believe a claim doesn’t matter. To be true, a claim has to match how the real world is. It’s not enough for people to believe it.

Because the truth is different from belief, citing what people–even the majority of people–believe is irrelevant to evaluating a claim’s truth or falsity.

Here are some more examples of appeal to popularity:

Example #1

“Most people believe that there is life after death, so there is life after death.”

Example #2

“Most people no longer believe that there is life after death, so there is no life after death.”

Example #3

“Most people believe that COVID-19 was not grown in the lab, so it must be true.”

Example #4

“Most people believe that COVID-19 was grown in the lab, so it must be true.”

Explanation: In each of the four examples, you can see that the claim is accepted because it reflects popular opinion. The arguments say nothing about how the actual world is; they instead appeal to the number of people who endorse the claims. Popularity doesn’t help us get to the truth, so these are examples of bad arguments.

Appeal to Popularity in Marketing

Psychologists have long known that group thinking is one of the key components in the decision-making process. From the early 20th century, New York advertisers used appeal to popularity as a tactic to persuade groups of people to buy their products: “We are the number one seller of x, so we are the best.”

How to Disarm the Appeal to Popularity

The way to counter the appeal to popularity is to explain that the majority can be wrong. It helps to have a clear example that illustrates this. For example, at one time everyone believed that the sun orbited the earth, but it turned out that everyone was wrong.

What Should You Do Instead?

A free thinker is interested in knowing and understanding what’s true. To avoid falling for the appeal to popularity, free thinkers take the source of the claim out of consideration and focus only on the claim itself.

The evidence given for a particular claim helps a free thinker decide whether to accept it, reject it, or withhold judgement on it.

If you’d like to read the entire post and support the blog, go to https://thinkbuthow.com/appeal-to-popularity/

5 Comments
2021/07/08
17:54 UTC

22

Ad Hominem Fallacy

What Is an Ad Hominem Fallacy?

Ad hominem arguments look to falsify a claim by attacking the person who’s making the claim. Since claims are true or false regardless of who makes them, the person who is making the claim is irrelevant to evaluating the claim’s truth or falsity.

For example, if Hitler claims that 2 + 2 = 4, that doesn’t automatically make the claim false. Hitler is a bad person, but that doesn’t mean that everything he says is false. Dismissing a claim simply because a bad person says it is an example of Ad hominem.

When people commit an ad hominem fallacy, they are mistaking criticism of a person for criticism of a claim or an argument. The Latin term ‘ad hominem’ means “to the person.” The arguer is attacked in an effort to falsify the arguer’s claim. It’s a fallacy because attacking the person can’t succeed in falsifying the claim. The truth or falsity of the claim is completely independent of the person who makes it.

Here’s what ad hominem looks like:

Alex: “We should have free college for all, so more people can get a college degree.”

Jen: “No, college shouldn’t be free. You’re just a hippie.”

This is a logical fallacy because attacking the person with abusive remarks or name-calling does not prove the claim to be false. Even if Alex is a hippie, that doesn’t give us any reason to think that what Alex says is false. Alex could just as easily say that 2 + 2 = 4. Would Jen reject that claim as well?

An argument is bad because of its logic, not because of the person who makes it.

Ad hominem is so common because evaluating people is so familiar to us. It’s one of the first things we learn to do in childhood. Because that way of evaluating things is so familiar, people tend to default to it even when it’s irrelevant.

Here are some examples of personal attacks, aka ad hominem:

Example #1

Anderson Cooper said, “We should eliminate the death penalty because it is inhuman,” but Cooper is a left-leaning political commentator, so his claim must be false.

Explanation: It doesn’t matter what political party Anderson Cooper endorses; we are evaluating his claim about the death penalty, so we need to remove Cooper from the equation to avoid an ad hominem fallacy.

Example #2

James said, “College is a waste of time.” Since James didn’t go to college, he has to be wrong.

Explanation: Again, we’ll need to look at James’s argument rather than his background. He could very well be wrong, but we can’t dismiss his argument based on whether or not he went to college.

Example #3

Trump said, “The USA is the best place to start a business because the tax rates are so low for small businesses.” Since Trump is a pig in human clothing, this claim is false.

Explanation: You can’t reject an argument simply because it comes from someone you dislike. The argument itself needs to be evaluated before accepting, rejecting, or withholding judgement about the conclusion.

Example #4

Rob says that we shouldn’t have affirmative action. But Rob isn’t a minority, so we should reject that claim.

Explanation: Again, we will need to look at Rob’s claim rather than his background. He could very well be wrong, but we can’t dismiss what he says simply because of his genetics.

Example #5

Sally says we should help the poor, but she grew up in a rich family, so she doesn’t know what she’s talking about.

Explanation: We will need to evaluate Sally’s claim rather than her upbringing. She could very well be wrong, but we can’t dismiss what she says simply because of her family’s economic circumstances.

How to Disarm Ad Hominem

Most of the time, people resort to ad hominem attacks because evaluating people is something they learned from an early age. It takes skills to argue against a good argument or claim, and most people aren’t skilled at doing it, so they fall back on something that’s more familiar, easy, and comfortable: evaluating people instead of arguments and claims.

If someone attacks you and not your claim, then point out that it is the claim in question that needs to be evaluated, not the person making it. By focusing attention back on the claim, you’re bringing the fallacy to light and bringing the discussion to a more productive place.

You can read the entire post here- https://thinkbuthow.com/ad-hominem/

10 Comments
2021/06/22
21:03 UTC

5

Examples for Ad Hominem help

I am writing a piece on the Ad Hominem fallacy (personal attack). As with most things, examples help the concept sink in.

So far I have these examples.

Example #1

Mark said, “We should eliminate the death penalty,” but Mark is part of the KKK, so I must disagree with him.

Example #2

James said, “College is not the best investment of your time and money,” and he never went to college so you can't make that claim.

Example #3

Trump said, “The USA is the best place to start a business.” Since Trump is a liar, I can’t accept anything he says, so it must be false.

Would you change something about these examples? Do you have any strong examples that I can use in the post?

Thank you in advance.

11 Comments
2021/06/10
16:45 UTC

48

The Straw Man Fallacy

What is a Straw Man?

The straw man is a logical fallacy that replaces something (a person, a viewpoint, an argument) with a distorted version that blows the original out of proportion to make it easier to attack.

The term “straw man” is based on a metaphor. The arguer doesn’t attack the “real man,” that is, the real person, argument, or claim. The arguer instead constructs a fake man made of straw, and then attacks that straw man. The arguer then claims to have defeated the real person, argument, or claim, even though the arguer hasn’t said anything about it. That’s where the fallacy comes in: you can’t defeat something you don’t deal with at all. The arguer can’t win the argument because he hasn’t dealt with the real person, argument, or claim; he has dealt solely with the straw man.

People use straw man fallacies knowingly or unknowingly to avoid challenging a stronger opponent. Politicians often make use of the straw man to attack opponents. They create a distorted image of an opponent’s position or an opponent’s argument by magnifying some things and minimizing others, then attack the distorted image.

Here’s an example that illustrates what a straw man fallacy looks like:

Wife: “I’d rather go to a beach than a big city.”

Husband: “Why do you hate big cities?”

Explanation: The husband has constructed a straw man of the wife’s claim. The wife never said that she doesn’t like big cities. The husband instead misrepresents what she says to make her preferences seem more extreme than they are. 

Many people construct straw men accidentally because the misrepresented view resembles the original. A straw man can even fool the person who made the original claim: the wife might get tricked into defending the straw man that her husband has constructed, and never steer the conversation back to her original claim. 

Here are some more examples of a straw man argument:

Example #1:

Mom: “I want you to leave your phone on the kitchen counter at night so you can get a better night’s sleep.”

Son: “You never want me to talk to my friends.” 

Explanation: Mom never mentioned anything about her son not talking to friends. The son is attacking her request by distorting it. 

Example #2: 

Person A: “Nuclear energy provides a safe, reliable way of combating climate change.” 

Person B: “I don’t want nuclear waste in my backyard!”

Explanation: A real argument against Person A’s claim would try to show that nuclear energy is not a safe, reliable way to combat climate change. Instead of trying to show that, however, Person B attacks another claim that is not relevant to what Person A said. Person A didn’t say anything about storing nuclear waste in Person B’s backyard. Person B is taking a complex claim and replacing it with a simpler, unrelated claim that’s easier to attack. 

Example #3:

John: “The new $6 Trillion federal government budget is going to inflate the US dollar because it’s just printing more money.”

Explanation: Whether or not the budget will trigger inflation is a complex issue. By focusing on just one part of the budget, John is oversimplifying the real-world complexities in order to make the budget easier to attack. In particular, John doesn’t take into consideration other parts of the budget that aim to grow revenue by raising taxes. 

How to Disarm a Straw Man

Knowing how to disarm a straw man is an important critical thinking skill. It involves describing the difference between the real thing and the misrepresentation of it. In other words, disarming a straw man has two components: 

  1. Describing the real issue (person, view, or argument); 
  2. Explaining why the issue (image, view, or argument) that’s being attacked isn’t the real one.

For example, to disarm her husband’s straw man, the wife can reply as follows: “I said that I prefer the beach over the big city; I never said that I hate big cities.”

To disarm her son’s straw man, the mom should reply as follows: “I said that I want you to sleep better by leaving your phone on the counter; I never said that I don’t want you to talk with your friends.”

To disarm Person B’s straw man, Person A should reply as follows: “I said we should look into nuclear energy as a safe and reliable way to combat climate change. I didn’t say anything about storing nuclear waste in your backyard.”

To disarm John’s straw man, you can reply as follows: “The budget is very complex. There are parts of it that aim to grow revenue by raising taxes. It might be the case that the revenue generated by higher taxes is enough to offset inflation.”

You can read the full post here.

12 Comments
2021/06/08
21:53 UTC

10

The Straw Man Fallacy- What to include

I am writing a post about straw man fallacy. There are two forms of the fallacy.

The original form: Misrepresents the opponent’s position.

The newly added form (selection form): Focuses on a partial and weaker representation of the opponent’s position.

Would you be interested in reading about the original, new form, or both?

8 Comments
2021/06/02
20:32 UTC
