
29.1: Testing the Value of a Single Mean (Section 28.1)


    In this example, we will show multiple ways to test a hypothesis about the value of a single mean. As an example, let’s test whether the mean systolic blood pressure (BP) in the NHANES dataset (averaged over the three measurements taken for each person) is greater than 120 mm Hg, which is the standard cutoff for normal systolic BP.

    First, let’s perform a power analysis to see how large our sample would need to be in order to detect a small effect (Cohen’s d = 0.2).

    # load the pwr package, which provides power-analysis functions
    library(pwr)
    
    pwr.result <- pwr.t.test(d=0.2, power=0.8, 
               type='one.sample', 
               alternative='greater')
    pwr.result
    ## 
    ##      One-sample t test power calculation 
    ## 
    ##               n = 156
    ##               d = 0.2
    ##       sig.level = 0.05
    ##           power = 0.8
    ##     alternative = greater
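
    The required sample size can be sanity-checked against the normal-approximation formula for a one-sided one-sample test, where \(z_{1-\alpha}\) and \(z_{1-\beta}\) are standard normal quantiles (this formula is a cross-check, not part of the pwr computation):

    \[ n \approx \left( \frac{z_{1-\alpha} + z_{1-\beta}}{d} \right)^2 = \left( \frac{1.645 + 0.842}{0.2} \right)^2 \approx 155 \]

    The t-based calculation used by pwr.t.test adds a small correction for estimating the standard deviation, which is why it returns a slightly larger n.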

    Based on this, we take a sample of 156 individuals from the dataset. (Note that pwr.t.test returns a non-integer n, which sample_n truncates, so the actual sample contains 155 observations, as the test output below shows.)

    NHANES_BP_sample <- NHANES_adult %>%
      drop_na(BPSysAve) %>%
      dplyr::select(BPSysAve) %>%
      sample_n(pwr.result$n)
    
    print('Mean BP:')
    ## [1] "Mean BP:"
    meanBP <- NHANES_BP_sample %>%
      summarize(meanBP=mean(BPSysAve)) %>%
      pull()
    meanBP
    ## [1] 123

    First let’s perform a sign test to see whether the observed mean of 123.11 is significantly greater than the hypothesized value of 120. To do this, we count the number of values that are greater than the hypothesized mean, and then use a binomial test to ask how surprising that number is if the true proportion is 0.5 (as it would be if the distribution were centered at the hypothesized mean).

    NHANES_BP_sample <- NHANES_BP_sample %>%
      mutate(BPover120=BPSysAve>120)
    
    nOver120 <- NHANES_BP_sample %>%
      summarize(nOver120=sum(BPover120)) %>%
      pull()
    
    binom.test(nOver120, nrow(NHANES_BP_sample), alternative='greater')
    ## 
    ##  Exact binomial test
    ## 
    ## data:  nOver120 and nrow(NHANES_BP_sample)
    ## number of successes = 84, number of trials = 155, p-value = 0.2
    ## alternative hypothesis: true probability of success is greater than 0.5
    ## 95 percent confidence interval:
    ##  0.47 1.00
    ## sample estimates:
    ## probability of success 
    ##                   0.54
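
    Writing out what binom.test computes here: with 84 successes in 155 trials, the one-sided exact binomial p-value is the probability of seeing 84 or more successes when the success probability is 0.5:

    \[ p = \sum_{k=84}^{155} \binom{155}{k} \left(\frac{1}{2}\right)^{155} \approx 0.2 \]

    matching the p-value reported above.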

    This shows no significant difference. Next let’s perform a one-sample t-test:

    t.test(NHANES_BP_sample$BPSysAve, mu=120, alternative='greater')
    ## 
    ##  One Sample t-test
    ## 
    ## data:  NHANES_BP_sample$BPSysAve
    ## t = 2, df = 154, p-value = 0.01
    ## alternative hypothesis: true mean is greater than 120
    ## 95 percent confidence interval:
    ##  121 Inf
    ## sample estimates:
    ## mean of x 
    ##       123
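
    The t statistic reported above is computed in the usual way for a one-sample test:

    \[ t = \frac{\bar{X} - \mu_0}{s / \sqrt{n}} \]

    where \(\bar{X} = 123.11\), \(\mu_0 = 120\), \(s\) is the sample standard deviation, and \(n = 155\); the resulting \(t \approx 2\) is compared to a t distribution with \(n - 1 = 154\) degrees of freedom.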

    Here we see that the difference is statistically significant. Finally, we can perform a randomization test to test the hypothesis. Under the null hypothesis we would expect roughly half of the differences from the hypothesized mean to be positive and half to be negative (assuming the distribution is centered around that value), so we can make the null hypothesis true on average by randomly flipping the signs of the differences.

    nruns = 5000
    
    # create a function to compute the 
    # mean difference on the sign-flipped values 
    shuffleOneSample <- function(x,mu) {
      # randomly flip signs
      flip <- runif(length(x))>0.5
      diff <- x - mu
      diff[flip] <- -1*diff[flip]
      # compute and return the mean of 
      # the sign-flipped differences
      return(tibble(meanDiff=mean(diff)))
    }
    
    index_df <- tibble(id=seq(nruns)) %>%
      group_by(id)
    
    shuffle_results <- index_df %>%
      do(shuffleOneSample(NHANES_BP_sample$BPSysAve,120))
    
    observed_diff <- mean(NHANES_BP_sample$BPSysAve-120)
    p_shuffle <- mean(shuffle_results$meanDiff>observed_diff)
    p_shuffle
    ## [1] 0.014
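
    In other words, the randomization p-value is simply the proportion of the \(N = 5000\) sign-flipped samples whose mean difference \(\overline{d}^{(i)}\) exceeds the observed mean difference \(\overline{d}_{obs} = 123.11 - 120 = 3.11\):

    \[ p_{shuffle} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\!\left( \overline{d}^{(i)} > \overline{d}_{obs} \right) \]

    A common variant adds 1 to both the numerator and denominator so that the estimated p-value can never be exactly zero.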

    This gives us a very similar p value to the one observed with the standard t-test.

    We might also want to quantify the relative evidence for the null versus the alternative hypothesis, which we can do using a Bayes factor:

    # ttestBF comes from the BayesFactor package
    library(BayesFactor)
    
    ttestBF(NHANES_BP_sample$BPSysAve,
            mu=120,  
            nullInterval = c(-Inf, 0))
    ## Bayes factor analysis
    ## --------------
    ## [1] Alt., r=0.707 -Inf<d<0    : 0.029 ±0.29%
    ## [2] Alt., r=0.707 !(-Inf<d<0) : 1.8   ±0%
    ## 
    ## Against denominator:
    ##   Null, mu = 120 
    ## ---
    ## Bayes factor type: BFoneSample, JZS
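
    The value in row [2] is the Bayes factor for the one-sided alternative of interest (\(d > 0\)), which is the ratio of the likelihood of the data under the alternative to its likelihood under the null:

    \[ BF_{10} = \frac{p(\text{data} \mid H_1)}{p(\text{data} \mid H_0)} \approx 1.8 \]

    That is, the data are only about 1.8 times more likely under the alternative than under the null; on conventional scales, Bayes factors between 1 and 3 are considered weak or "anecdotal" evidence.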

    This tells us that our result doesn’t provide particularly strong evidence for either the null or alternative hypothesis; that is, it’s inconclusive.


    This page titled 29.1: Testing the Value of a Single Mean (Section 28.1) is shared under a not declared license and was authored, remixed, and/or curated by Russell A. Poldrack via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.