How to Perform One Sample t-Test in R with Practical Examples


Introduction to One-Sample t-Tests

A one-sample t-test is a statistical method used to determine if the mean of a single sample differs significantly from a known or hypothesized population mean. It’s commonly applied in research to evaluate if a particular data set deviates from an expected norm.

When to Use a One-Sample t-Test

This test is used when we have:

  • A single sample of quantitative data.
  • A hypothesized population mean to compare against.
  • An assumption that the data are approximately normally distributed.

Performing a One-Sample t-Test in R

R makes it straightforward to perform a one-sample t-test using the t.test() function. Here’s the syntax:

t.test(x, mu = hypothesized_mean)

Where:

  • x is a numeric vector containing the sample data.
  • mu is the hypothesized population mean (the value under the null hypothesis).
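The t.test() function also accepts optional arguments such as alternative (which tail to test) and conf.level (the confidence level of the reported interval). A minimal sketch with hypothetical data:

```r
# Hypothetical sample data for illustration
x <- c(151, 153, 150, 155, 152)

# One-sided test: alternative = "greater" tests whether the true mean exceeds mu
# conf.level sets the confidence level of the reported interval (default is 0.95)
t.test(x, mu = 150, alternative = "greater", conf.level = 0.99)
```

The default, alternative = "two.sided", is what the examples below use.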

Practical Example: Testing the Mean of a Sample

Let’s walk through a practical example where we test if the average weight of a sample of apples is different from a known population mean of 150 grams.

# Sample data of apple weights in grams
apple_weights <- c(149, 152, 153, 148, 151, 150, 154, 149, 150, 148)

# Hypothesized mean (population mean)
hypothesized_mean <- 150

# Perform the one-sample t-test
result <- t.test(apple_weights, mu = hypothesized_mean)

# Display the result
result

The output provides a t-value, degrees of freedom, p-value, and confidence interval, which we can interpret to determine whether the mean weight of our apples differs significantly from 150 grams.
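Because t.test() returns a list (of class "htest"), these pieces can also be pulled out individually for further use:

```r
# Recreate the test from above
apple_weights <- c(149, 152, 153, 148, 151, 150, 154, 149, 150, 148)
result <- t.test(apple_weights, mu = 150)

# Access individual components of the result object
result$statistic   # t-value
result$p.value     # p-value
result$conf.int    # confidence interval
result$estimate    # sample mean
```

These components match the values shown in the printed output.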

Interpreting Results

The output from t.test() in R will look something like this:

One Sample t-test

    data:  apple_weights
    t = 0.61237, df = 9, p-value = 0.5554
    alternative hypothesis: true mean is not equal to 150
    95 percent confidence interval:
     148.9224 151.8776
    sample estimates:
    mean of x
         150.4

To interpret these results:

  • t-value: The t-value of 0.61237 indicates that the difference between the sample mean and the hypothesized mean is small relative to the variability in the data.
  • p-value: The p-value (0.5554) is greater than the typical significance level (e.g., 0.05), meaning we fail to reject the null hypothesis. Thus, we do not have enough evidence to conclude that the sample mean is significantly different from 150 grams.
  • Confidence Interval: The 95% confidence interval [148.92, 151.88] includes the hypothesized mean of 150, further supporting that the sample mean is not significantly different from the population mean of 150.
  • Sample Mean: The sample mean is estimated to be 150.4 grams, which is close to the hypothesized mean.
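This decision rule can also be applied programmatically; a minimal sketch using the result from above:

```r
# Recreate the test from above
apple_weights <- c(149, 152, 153, 148, 151, 150, 154, 149, 150, 148)
result <- t.test(apple_weights, mu = 150)

alpha <- 0.05  # chosen significance level

# Compare the p-value against the significance level
if (result$p.value < alpha) {
  print("Reject the null hypothesis")
} else {
  print("Fail to reject the null hypothesis")
}
```

For this data the p-value (0.5554) exceeds 0.05, so the code prints the "fail to reject" branch.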

Practical Example: Using Summary Statistics

If you have only the summary statistics rather than individual data points, you can still perform a one-sample t-test by calculating the t-value manually. Here’s an example using realistic summary statistics:

  • Sample Mean (\(\bar{x}\)): 68
  • Standard Deviation (\(s\)): 10
  • Sample Size (\(n\)): 25
  • Hypothesized Mean (\(\mu\)): 65

The formula for the t-value in a one-sample t-test is:

\[ t = \frac{\bar{x} - \mu}{s / \sqrt{n}} \]

Substitute the values:

# Given values
sample_mean <- 68
std_dev <- 10
sample_size <- 25
hypothesized_mean <- 65

# Calculate the t-value
t_value <- (sample_mean - hypothesized_mean) / (std_dev / sqrt(sample_size))
t_value

The output will be:

[1] 1.5

This t-value can then be compared to the critical t-value for a significance level (e.g., 0.05) and the degrees of freedom (n - 1) to determine whether to reject the null hypothesis.
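The critical t-value itself can be obtained in R with qt(); for a two-tailed test at the 0.05 level we need the quantile at 1 - alpha/2:

```r
# Two-tailed critical value at alpha = 0.05 with 24 degrees of freedom
alpha <- 0.05
critical_value <- qt(1 - alpha / 2, df = 24)
critical_value  # approximately 2.064
```

Because the calculated t-value of 1.5 is smaller in magnitude than this critical value, the null hypothesis is not rejected.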

Interpreting the Example

  • t-value: The calculated t-value of 1.5 indicates that the sample mean lies 1.5 standard errors above the hypothesized mean.
  • Degrees of Freedom: With a sample size of 25, the degrees of freedom are 24 (n - 1).
  • Conclusion: A t-value of 1.5 with 24 degrees of freedom would not typically be significant at the 0.05 level (critical value ≈ 2.064). Therefore, we would fail to reject the null hypothesis, suggesting that the sample mean of 68 is not significantly different from the hypothesized mean of 65.

Calculating the p-value in R Given a t-score

Given a calculated t-score, we can use R to determine the p-value, which tells us the probability of observing a t-score as extreme as the one calculated, assuming the null hypothesis is true.

For a two-tailed one-sample t-test with a t-score of 1.5 and a sample size of 25, let’s calculate the p-value step-by-step:

  • t-score (t): 1.5
  • Degrees of freedom (df): n - 1 = 25 - 1 = 24

Step-by-Step Calculation in R

In R, we can use the pt() function to calculate the p-value from the t-score. The pt() function returns the cumulative probability to the left of a given t-score. For a two-tailed test, we take the upper-tail probability beyond the absolute value of the t-score and double it.

Here’s the R code to calculate the p-value for a two-tailed test:

# Given values
t_score <- 1.5
degrees_freedom <- 24

# Calculate the p-value for a two-tailed test
p_value <- 2 * (1 - pt(t_score, df = degrees_freedom))
p_value

The output will be:

[1] 0.1466556

Interpreting the p-value

The calculated p-value is 0.1467, meaning that, if the null hypothesis is true, there is a 14.67% probability of observing a t-score at least as extreme as ±1.5.

  • Since the p-value (0.1467) is greater than the significance level of 0.05, we do not reject the null hypothesis.
  • This suggests that the sample mean is not significantly different from the hypothesized mean at the 0.05 level.

This approach allows us to determine the statistical significance of the observed difference using R, with just the t-score and degrees of freedom.
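An equivalent formulation uses pt() with lower.tail = FALSE together with abs(), which computes the upper-tail probability directly and also handles negative t-scores correctly:

```r
# Given values
t_score <- 1.5
degrees_freedom <- 24

# Same two-tailed p-value, taken directly from the upper tail
p_value <- 2 * pt(abs(t_score), df = degrees_freedom, lower.tail = FALSE)
p_value  # approximately 0.1467, matching the result above
```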

Assumptions of the One-Sample t-Test

Before drawing conclusions, ensure the data meets these assumptions:

  • Normality: The sample data should be approximately normally distributed. For small samples, normality can be assessed with a Q-Q plot or a Shapiro-Wilk test.
  • Independence: The data points should be independent of each other.
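Both checks mentioned above take only a few lines in R; here is a sketch using the apple-weight data from earlier:

```r
apple_weights <- c(149, 152, 153, 148, 151, 150, 154, 149, 150, 148)

# Shapiro-Wilk test: a p-value above 0.05 gives no evidence against normality
shapiro.test(apple_weights)

# Q-Q plot: points should fall roughly along the reference line
qqnorm(apple_weights)
qqline(apple_weights)
```

For this sample, the Shapiro-Wilk p-value is well above 0.05, so the normality assumption is reasonable.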

Conclusion

The one-sample t-test is a powerful yet straightforward tool that allows us to determine whether the mean of a sample differs significantly from a known or hypothesized population mean. This is especially useful when you want to test if your sample data aligns with an expected average. For instance, you might want to know if the average height of students in a particular school differs from a national average, or if the average time spent on an activity by a group is different from a target time.

In essence, the one-sample t-test helps us answer questions about whether observed data (our sample) is likely part of a larger population with a specific mean. It does this by calculating a t-score and a p-value:

  • t-score: This value represents the difference between the sample mean and the hypothesized mean, adjusted by the variability and size of the sample. A larger t-score (either positive or negative) suggests a greater difference from the hypothesized mean.
  • p-value: This value tells us the probability of observing a difference as extreme as the calculated t-score if the null hypothesis (that the sample mean equals the hypothesized mean) is true. A low p-value (typically less than 0.05) indicates that the observed difference is unlikely due to chance, meaning the sample mean likely does differ from the hypothesized mean.

In R, the t.test() function simplifies this entire process. By entering the sample data and the hypothesized mean, R automatically calculates the t-score, p-value, and confidence intervals, providing all necessary values for interpretation. The results from t.test() can be easily understood and used to make data-driven decisions based on statistical evidence.

Try the One Sample t-Test Calculator

To calculate the one-sample t-test for your data, check out our One-Sample t-Test Calculator on the Research Scientist Pod.

Have fun and happy researching!


Suf is a senior advisor in data science with deep expertise in Natural Language Processing, Complex Networks, and Anomaly Detection. Formerly a postdoctoral research fellow, he applied advanced physics techniques to tackle real-world, data-heavy industry challenges. Before that, he was a particle physicist at the ATLAS Experiment of the Large Hadron Collider. Now, he’s focused on bringing more fun and curiosity to the world of science and research online.
