Background

What is the purpose of these notes?

  1. Provide a few small examples of hypothesis tests on simulated data;
  2. Give you several lines of R code you can use in this course.

What is the format of this document?

This document was created using R Markdown. You can read more about it here and check out a cheat sheet here; these will guide you through installing RStudio, and from there, any new .Rmd document you create is a working template to start from. If you are used to LaTeX, no worries: it can be embedded into Markdown, which otherwise has simpler formatting. I hope you find this useful!
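For instance, a freshly created .Rmd file looks roughly like this (a minimal sketch; the title and chunk contents are just placeholders):

---
title: "My notes"
output: html_document
---

Text with embedded LaTeX math, like $\bar{X}_n$, goes here.

```{r}
mean(rnorm(10))  # R code chunks are run when the document is knit
```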

Installing and loading packages

As in many other programming languages you may be familiar with, R’s capabilities can be greatly extended by installing additional “packages” and loading them as “libraries”.

To install a package, use the install.packages() command. You’ll want to run the following commands to get the necessary packages for these notes:

install.packages("tidyverse")

You only need to install packages once. Once they are installed, you can use them by loading them with the library() command. For these notes, you’ll want to run the following code:

library("tidyverse")
options(scipen = 4)  # Suppresses scientific notation
library("tigerstats")

Context

As we learned in the first half of the semester, statistics are functions of random variables and therefore are random variables themselves. In particular, they have their own distributions, called sampling distributions. Our inferred knowledge about these distributions is used to estimate parameters of the model which we postulate was used to generate the data.
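To see this concretely, here is a small simulation sketch (the sample size of 30 and the 1000 replications are arbitrary choices) of the sampling distribution of the sample mean of standard normal observations:

set.seed(12345)
# Each replication draws n = 30 standard normals and records their mean;
# the 1000 recorded means approximate the sampling distribution of the mean
sample.means <- replicate(1000, mean(rnorm(30)))
sd(sample.means)    # close to the theoretical value 1/sqrt(30), about 0.183
hist(sample.means)  # roughly normal and centered at 0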

So far we have learned about point estimators; interval estimators will come later. For now, we focus on hypothesis tests. In other notes, we discuss the general view of how these fit together.

You will see later in the course that hypothesis tests are very useful in addressing the question of whether the postulated model was indeed used to generate the data.

A simulated example

(What is statistical significance testing doing?)

Here’s a little simulation where we have two groups: a treatment group and a control group. We’re going to simulate observations from both groups. We’ll run the simulation two ways.

  • First simulation (Null case): the treatment has no effect
  • Second simulation (Non-null case): the treatment on average increases outcome
set.seed(12345)
# Function to generate data from two groups: n1 controls from N(0, 1)
# and n2 treated units from N(mean.shift, 1)
generateSimulationData <- function(n1, n2, mean.shift = 0) {
  y <- rnorm(n1 + n2) + c(rep(0, n1), rep(mean.shift, n2))  # shift treatment outcomes
  groups <- c(rep("control", n1), rep("treatment", n2))     # group labels
  data.frame(y = y, groups = groups)
}

Let’s look at a single realization in the null setting.

n1 = 30  # control group size
n2 = 40  # treatment group size
# Observation, null case
obs.data <- generateSimulationData(n1 = n1, n2 = n2)
# Peek at a few rows from each group
obs.data[c(1, n1, n1 + 1, n2), ]
             y    groups
1   0.58552882   control
30 -0.16231098   control
31  0.81187318 treatment
40  0.02580105 treatment
# Box plots
qplot(x = groups, y = y, data = obs.data, geom = "boxplot")

# Density plots
qplot(fill = groups, x = y, data = obs.data, geom = "density", 
      alpha = I(0.5),
      adjust = 1.5, 
      xlim = c(-4, 6))

# t-test
t.test(y ~ groups, data = obs.data)

    Welch Two Sample t-test

data:  y by groups
t = -0.61095, df = 67.998, p-value = 0.5433
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.6856053  0.3641889
sample estimates:
  mean in group control mean in group treatment 
             0.07880701              0.23951518 

The p-value of 0.5433 is large, so the test (correctly, in this null setting) fails to reject the null hypothesis of equal means. And here’s what happens in a random realization in the non-null setting.

# Observation, non-null case: very strong treatment effect
obs.data <- generateSimulationData(
  n1 = n1, n2 = n2, mean.shift = 1.5)

# Box plots
qplot(x = groups, y = y, data = obs.data, geom = "boxplot")

# Density plots
qplot(fill = groups, x = y, data = obs.data, geom = "density", 
      alpha = I(0.5),
      adjust = 1.5, 
      xlim = c(-4, 6))

# t-test
t.test(y ~ groups, data = obs.data)

    Welch Two Sample t-test

data:  y by groups
t = -4.3081, df = 64.785, p-value = 0.00005708
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -1.6911828 -0.6197985
sample estimates:
  mean in group control mean in group treatment 
              0.4191634               1.5746541 
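This time the p-value is tiny, so the test rejects the null hypothesis. To see what significance testing is doing across repeated experiments, here is a small sketch (reusing generateSimulationData; the 1000 replications and the 5% level are arbitrary choices) that estimates how often the test rejects in each setting:

# Record the t-test p-value over many repetitions of each simulation
set.seed(12345)
pvals.null <- replicate(1000, {
  sim <- generateSimulationData(n1 = 30, n2 = 40)
  t.test(y ~ groups, data = sim)$p.value
})
pvals.alt <- replicate(1000, {
  sim <- generateSimulationData(n1 = 30, n2 = 40, mean.shift = 1.5)
  t.test(y ~ groups, data = sim)$p.value
})
mean(pvals.null < 0.05)  # rejection rate under the null: close to 0.05
mean(pvals.alt < 0.05)   # power against the shift of 1.5: close to 1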

Two small data examples

Case study 1: automobile parts

Problem.

An important manufacturing process produces cylindrical component parts for the automotive industry. It is important that the process produce parts having a mean diameter of 5.0 millimeters. The engineer involved claims that the population mean is 5.0 millimeters.

An experiment is conducted in which 100 parts produced by the process are selected randomly and the diameter measured on each. It is known that the population standard deviation is \(\sigma = 0.1\) millimeter. The experiment indicates a sample average diameter of \(\overline{X} = 5.027\) millimeters.

Question: Does this sample information appear to support or refute the engineer’s claim?

A simple probability computation

\[ P(|\overline{X} - 5|\geq 0.027) = 2P\left(\frac{\overline{X} - 5}{0.1/\sqrt{100}}\geq 2.7\right)=2(0.0035) = 0.007 \]

## Case study 1
pnormGC(bound=c(4.973,5.027), region="outside", 
        mean=5, sd=0.1/sqrt(100), graph=TRUE)

[1] 0.006933948
  • What is this probability?
    • The probability of seeing the observed data, or something more extreme…
    • under some assumption about the population mean!
  • So \(0.007\) is a \(p\)-value!

\[ H_0: \mu=5.0 \mbox{ vs. } H_1: \mu\neq 5.0 \]
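As a sanity check, the same two-sided tail probability can be computed directly with base R’s pnorm, using the standard error \(0.1/\sqrt{100} = 0.01\):

# z-value for the observed deviation: 0.027 / 0.01 = 2.7
2 * pnorm(0.027 / (0.1 / sqrt(100)), lower.tail = FALSE)  # = 0.006933948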

Case study 2: paint drying time

Problem.

Two independent experiments are run in which two different types of paint are compared. Eighteen specimens are painted using type A, and the drying time, in hours, is recorded for each. The same is done with type B. The population standard deviations are both known to be 1.0.

Assuming that the mean drying time is equal for the two types of paint, find \(P(\overline{X}_A-\overline{X}_B>1)\), where \(\overline{X}_A\) and \(\overline{X}_B\) are average drying times for samples of size \(n_A=n_B=18\).

A simple probability computation

The probability that we compute is given by: \[ P(\overline{X}_A-\overline{X}_B> 1) = P\left(\frac{\overline{X}_A-\overline{X}_B-0}{\sqrt{1/9}}\geq \frac{1-0}{\sqrt{1/9}}\right)=P(Z>3)=0.0013. \]
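Here is the same computation checked in base R, with the standard error of the difference spelled out:

# Known sds are 1, so Var(X.bar.A - X.bar.B) = 1/18 + 1/18 = 1/9
se.diff <- sqrt(1/18 + 1/18)                   # = 1/3
pnorm((1 - 0) / se.diff, lower.tail = FALSE)   # P(Z > 3) = 0.001349898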

## Case study 2: using the z-value
pnormGC(3, region="above", mean=0, sd=1, graph=TRUE)

[1] 0.001349898
  • What is this probability?
    • The probability of seeing the observed data, or something more extreme…
    • under some assumption about the two population means!
  • So \(0.0013\) is a \(p\)-value again!
  • \[ H_0:\mu_A-\mu_B=0 \mbox{ vs. } H_1:\mu_A-\mu_B>0.\]

Appendix

Some resources and further reading:

  • The sampling distribution overview offers a basic overview of the topic. I do not recommend it for PhD students; however, it may give additional examples to those of you who would like to brush up on some basic stats.
  • View a short video tutorial about the Central Limit Theorem here, and view another set of examples introducing sampling distributions.

License

This document was created for Math 563, Spring 2021, at Illinois Tech. While the course materials are generally not to be distributed outside the course without permission of the instructor, this particular set of notes is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.



  1. Sonja Petrović, Associate Professor of Applied Mathematics, College of Computing, Illinois Tech. Homepage, Email.↩︎