
1.6: "Research shows that..."


You may have heard someone say, “Research shows that…” What this means is that an experimenter used the scientific method to test a Research Hypothesis, and the results either supported or failed to support that hypothesis. However, much of what you read online is not supported by research using the scientific method. Instead, people who don’t know what an IV or DV is try to convince you that their ideas are correct.

    As a group, scientists seem to be bizarrely fixated on running statistical tests on everything. In fact, we use statistics so often that we sometimes forget to explain to people why we do. It’s a kind of article of faith among scientists – and especially social scientists – that your findings can’t be trusted until you’ve done some stats. Undergraduate students might be forgiven for thinking that we’re all completely mad, because no one takes the time to answer one very simple question:

    Why do you do statistics? Why don’t scientists just use common sense?

There are a lot of good answers,1 but a really simple one is: we don’t trust ourselves enough. Scientists, especially psychologists, worry that we’re human, and susceptible to all of the biases, temptations and frailties that humans suffer from. Much of statistics is basically a safeguard. Using “common sense” to evaluate evidence means trusting gut instincts, relying on verbal arguments, and using the raw power of human reason to come up with the right answer. Most scientists don’t think this approach is likely to work.

In fact, come to think of it, this sounds a lot like a psychological question, and since this is a behavioral statistics textbook, it seems like a good idea to dig a little deeper here. Is it really plausible to think that this “common sense” approach is very trustworthy? Verbal arguments have to be constructed in language, and all languages have biases – some things are harder to say than others, and not necessarily because they’re false (e.g., quantum electrodynamics is a good theory, but hard to explain in words). The instincts of our “gut” aren’t designed to solve scientific problems; they’re designed to handle day-to-day inferences – and given that biological evolution is slower than cultural change, it’s fair to say that our gut instincts were designed to solve the day-to-day problems of a different world than the one we live in. Most fundamentally, reasoning sensibly requires people to engage in “induction”, making wise guesses and going beyond the immediate evidence of the senses to make generalizations about the world. If you think that you can do that without being influenced by various distractors, well, I have a bridge in Brooklyn I’d like to sell you.

    The Curse of Belief Bias

    People are mostly pretty smart. Our minds are quite amazing things, and we seem to be capable of the most incredible feats of thought and reason. That doesn’t make us perfect though. And among the many things that psychologists have shown over the years is that we really do find it hard to be neutral, to evaluate evidence impartially and without being swayed by pre-existing biases.

    But first, suppose that people really are perfectly able to set aside their pre-existing beliefs about what is true and what isn’t, and evaluate an argument purely on its logical merits. We’d expect 100% of people to say that the valid arguments are valid, and 0% of people to say that the invalid arguments are valid. So if you ran an experiment looking at this, you’d expect to see data like this:

Table \(\PageIndex{1}\)- Valid Arguments and Feelings

                            Conclusion feels true    Conclusion feels false
    Argument is valid       100% say “valid”         100% say “valid”
    Argument is invalid     0% say “valid”           0% say “valid”

    If the data looked like this (or even a good approximation to this), we might feel safe in just trusting our gut instincts. That is, it’d be perfectly okay just to let scientists evaluate data based on their common sense, and not bother with all this murky statistics stuff. However, you have taken classes, and might know where this is going . . .

In a classic study, Evans, Barston, and Pollard (1983) ran an experiment looking at exactly this. What they found is shown in Table \(\PageIndex{2}\): when pre-existing biases (i.e., beliefs) were in agreement with the logical structure of the argument, everything went the way you’d hope (in bold). Not perfect, but pretty good. But look what happens when our intuitive feelings about the truth of the conclusion run against the logical structure of the argument (the not-bold percentages):

Table \(\PageIndex{2}\)- Beliefs Don't Match Truth

                            Conclusion feels true    Conclusion feels false
    Argument is valid       92% say “valid”          46% say “valid”
    Argument is invalid     92% say “valid”          8% say “valid”

In other words, when an invalid argument reaches a conclusion people already believe, almost everyone (92%) accepts it as valid. And when a valid argument reaches a conclusion people don’t believe, fewer than half (46%) recognize that it is valid.

Oh dear, that’s not as good. Apparently, when people are presented with a valid argument that contradicts their pre-existing beliefs, they find it pretty hard to even perceive it to be valid (only 46% did so). Even worse, when people are presented with an invalid argument that agrees with their pre-existing biases, almost no one can see that the argument is invalid (people got that one wrong 92% of the time!).

Overall, people did do better than chance at compensating for their prior biases, since about 60% of people’s judgments were correct (you’d expect 50% by chance). Even so, if you were a professional “evaluator of evidence”, and someone came along and offered you a magic tool that improves your chances of making the right decision from 60% to (say) 95%, you’d probably jump at it, right? Of course you would. Thankfully, we actually do have a tool that can do this. But it’s not magic, it’s statistics. So that’s reason #1 why scientists love statistics. It’s just too easy for us humans to continue to “believe what we want to believe”; so if we want to “believe in the data” instead, we’re going to need a bit of help to keep our personal biases under control. That’s what statistics does: it helps keep us honest.
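Where does that “about 60%” figure come from? It can be recovered from the four percentages in Table \(\PageIndex{2}\). The sketch below assumes each of the four cells contains an equal number of problems (a simplifying assumption for illustration, not something stated in the text):

```python
# Sketch: deriving the ~60% overall accuracy from the
# Evans, Barston & Pollard (1983) percentages in Table 2.
# Assumes equal numbers of problems in each of the four cells.

# Percentage of participants saying "valid" in each cell:
say_valid = {
    ("valid", "feels true"): 92,
    ("valid", "feels false"): 46,
    ("invalid", "feels true"): 92,
    ("invalid", "feels false"): 8,
}

# A "valid" response is correct for valid arguments;
# for invalid arguments, the correct responders are the rest.
correct = [
    pct if validity == "valid" else 100 - pct
    for (validity, _feeling), pct in say_valid.items()
]

accuracy = sum(correct) / len(correct)
print(f"Overall accuracy: {accuracy:.1f}%")  # 59.5%, roughly the 60% quoted
```

Notice that the two belief-consistent cells contribute 92% accuracy each, while the two belief-inconsistent cells contribute only 46% and 8% correct rejections of the conclusion’s pull, which is exactly the belief bias the chapter describes.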

     

Before you “like” or share news articles or posts online, try to get as close to the original study as you can, and figure out what the IV, DV, sample, and population are so that you can decide whether the claim is actually supported by research. Was the IV created (not just measured)? Does the way that the DV was measured make sense for that outcome? Does the sample really represent the population that they say it does? Science is hard to do well, but it’s the best way to learn new things about the world.


    1  Including the suggestion that common sense is in short supply among scientists.

    Reference

Evans, J. St. B. T., Barston, J. L., & Pollard, P. (1983). On the conflict between logic and belief in syllogistic reasoning. Memory & Cognition, 11, 295-306.



    1.6: "Research shows that..." is shared under a CC BY-SA license and was authored, remixed, and/or curated by Danielle Navarro.