When to use chi-square test vs t-test

Both t-tests and chi-square tests are statistical tests designed to test, and possibly reject, a null hypothesis. The null hypothesis is usually a statement that something is zero or that something does not exist. For example, you could test the hypothesis that the difference between two means is zero, or you could test the hypothesis that there is no relationship between two variables.

Null Hypothesis Tested

A t-test tests a null hypothesis about two means; most often, it tests the hypothesis that two means are equal, or that the difference between them is zero. For example, we could test whether boys and girls in fourth grade have the same average height.
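
A concrete sketch of that comparison in Python with scipy; the heights below are invented for illustration:

```python
# Two-sample t-test on hypothetical fourth-grade heights (cm).
from scipy import stats

boys = [134.1, 132.8, 135.6, 131.9, 136.2, 133.4]
girls = [133.5, 135.0, 132.2, 134.8, 131.7, 133.9]

t_stat, p_value = stats.ttest_ind(boys, girls)  # tests H0: equal means
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")   # reject H0 if p < 0.05
```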

A chi-square test tests a null hypothesis about the relationship between two variables. For example, you could test the hypothesis that men and women are equally likely to vote "Democratic," "Republican," "Other" or "not at all."
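
A matching sketch for the voting example, using a hypothetical contingency table of counts:

```python
# Chi-square test of independence on a made-up 2x4 table of vote counts.
from scipy import stats

#           Democratic  Republican  Other  Did not vote
observed = [[220, 180, 30, 70],   # men
            [240, 160, 25, 75]]   # women

chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")  # reject H0 of no relationship if p < 0.05
```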

Types of Data

A t-test requires two variables; one must be categorical and have exactly two levels, and the other must be quantitative and be estimable by a mean. For example, the two groups could be Republicans and Democrats, and the quantitative variable could be age.

A chi-square test requires categorical variables, usually only two, but each may have any number of levels. For example, the variables could be ethnic group (White, Black, Asian, American Indian/Alaskan Native, Native Hawaiian/Pacific Islander, other, multiracial) and presidential choice in 2008 (Obama, McCain, other, did not vote).

Variations

There are variations of the t-test to cover paired data; for example, husbands and wives, or right and left eyes. There are variations of the chi-square to deal with ordinal data — that is, data that has an order, such as "none," "a little," "some," "a lot" — and to deal with more than two variables.
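
For the paired variation, scipy provides ttest_rel; a minimal sketch with invented right-eye/left-eye measurements:

```python
# Paired-sample t-test: each subject contributes one value per condition.
from scipy import stats

right_eye = [1.02, 0.98, 1.10, 0.95, 1.05]
left_eye = [1.00, 0.97, 1.08, 0.96, 1.02]

t_stat, p_value = stats.ttest_rel(right_eye, left_eye)  # tests H0: mean difference is zero
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```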

Conclusions

The t-test allows you to say either "we can reject the null hypothesis of equal means at the 0.05 level" or "we have insufficient evidence to reject the null of equal means at the 0.05 level." A chi-square test allows you to say either "we can reject the null hypothesis of no relationship at the 0.05 level" or "we have insufficient evidence to reject the null at the 0.05 level."

For a person without a background in stats, it can be difficult to understand the difference between fundamental statistical tests (not to mention when to use them). Here are the differences between the most common tests, how the null hypothesis works in each and the conditions under which you should use each particular test.

  1. Z-Test
  2. T-Test
  3. Chi-Square Test
  4. ANOVA

Before we learn about the tests, let’s dive into some key terms. 

Defining Our Terms 

Null Hypothesis and Hypothesis Testing

Before we venture into the differences between common statistical tests, we need a clear understanding of the null hypothesis.

The null hypothesis proposes that no significant difference exists between a set of given observations.

In other words:

  • Null: Two sample means are equal.
  • Alternate: Two sample means are not equal.

To reject a null hypothesis, one needs to calculate the test statistic, then compare it with the critical value. If the test statistic is greater than the critical value, we can reject the null hypothesis. 

Critical Value

A critical value is a point (or points) on the scale of the test statistic beyond which we reject the null hypothesis. We derive it from the level of significance (α) of the test. 

The critical value relates to the probability of two sample means belonging to the same distribution: the higher the critical value, the lower the probability that the two samples belong to the same distribution. 

The general critical value for a two-tailed test is 1.96, which is based on the fact that 95 percent of the area of a normal distribution is within 1.96 standard deviations of the mean.

Critical values can be used to do hypothesis testing in the following ways:

  • Calculate test statistic.
  • Calculate critical values based on significance level alpha.
  • Compare the test statistic with critical values.

If the test statistic is lower than the critical value, we fail to reject the null hypothesis; otherwise, we reject it. 
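
A minimal sketch of those three steps for a two-tailed z-test at a significance level of 0.05; the test statistic here is an assumed value rather than one computed from real data:

```python
# Compare an (assumed) test statistic against the two-tailed critical value.
from scipy import stats

test_statistic = 1.67                           # step 1: normally computed from the data
alpha = 0.05
critical_value = stats.norm.ppf(1 - alpha / 2)  # step 2: ~1.96 for a two-tailed test

# step 3: compare
if abs(test_statistic) > critical_value:
    print("Reject the null hypothesis")
else:
    print("Fail to reject the null hypothesis")
```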

Note: Some statisticians use a p-value instead of a critical value to conduct null hypothesis tests.

Sample vs. Population

In statistics, population refers to the total set of observations we can make. For example, if we want to calculate the average human height, the population would be every person actually present on Earth.

A sample, on the other hand, is a set of data collected or selected from a predefined procedure. For our example above, a sample is a small group of people selected randomly from different regions of the globe. 

To draw inferences from a sample and validate a hypothesis, the sample must be random.

For instance, if we select people randomly from all regions on Earth, we can assume our sample mean is close to the population mean, whereas if we make a selection just from the United States, then our average height estimate/sample mean cannot be considered close to the population mean. Instead, it will only represent the data of a particular region (the United States). That means our sample is biased and is not representative of the population.
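
A small simulation makes this concrete; the population below is synthetic, and the biased selection deliberately keeps only the largest values:

```python
# Random samples track the population mean; a non-random selection does not.
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=170, scale=10, size=1_000_000)  # synthetic "heights"

random_sample = rng.choice(population, size=1_000, replace=False)
biased_sample = np.sort(population)[-1_000:]                # only the tallest people

print(population.mean())     # ~170
print(random_sample.mean())  # close to the population mean
print(biased_sample.mean())  # far above it: a biased, unrepresentative sample
```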

Distribution

Another important statistical concept to understand is distribution. When the population is infinitely large, it’s not feasible to validate any hypothesis by calculating the mean value or test parameters on the entire population. In such cases, we assume the population follows some type of distribution.

While there are many forms of distribution, the most common are the binomial, Poisson and normal distributions. 

You must determine the distribution type to calculate the critical value and decide on the best test to validate any hypothesis.
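
For instance, the critical value at the same significance level differs by distribution; a quick comparison in scipy, with degrees of freedom chosen arbitrarily:

```python
# Critical values at alpha = 0.05 under three different distributions.
from scipy import stats

alpha = 0.05
print(stats.norm.ppf(1 - alpha / 2))      # normal, two-tailed: ~1.96
print(stats.t.ppf(1 - alpha / 2, df=10))  # t with 10 degrees of freedom: ~2.23 (heavier tails)
print(stats.chi2.ppf(1 - alpha, df=3))    # chi-square with 3 degrees of freedom: ~7.81 (one-tailed)
```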

Now that we’re clear on population, sample and distribution, let’s learn about different kinds of tests and the distribution types for which they are used.

Statistical Tests

P-value, Critical Value and Test Statistic

As we know, the critical value is the point beyond which we reject the null hypothesis. The p-value, on the other hand, is the probability of observing a result at least as extreme as the respective statistic (z, t or chi-square), assuming the null hypothesis is true. The benefit of using the p-value is that it is a probability estimate, which means we can test at any desired level of significance by comparing it directly with the significance level.

For example, assume the z-value for a particular experiment comes out to be 1.67, which is greater than the critical value at five percent (1.64). To check at a different significance level of one percent, we would have to calculate a new critical value.

However, if we calculate the p-value for 1.67 and it comes out to 0.047, we can use it to reject the null hypothesis at the five percent significance level, since 0.047 < 0.05. With the more stringent significance level of one percent, however, we fail to reject the hypothesis, since 0.047 > 0.01. Note that no second calculation is required.
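
Those numbers are easy to verify; a quick check in scipy, taking the example's z-value of 1.67 as given:

```python
# One-tailed p-value: the area to the right of z = 1.67.
from scipy import stats

p_value = stats.norm.sf(1.67)  # survival function, P(Z > 1.67)
print(round(p_value, 3))       # 0.047: reject at alpha = 0.05, fail to reject at 0.01
```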

Z-Test

In a z-test, we assume the sample is normally distributed. A z-score is calculated with population parameters such as the population mean and population standard deviation. We use this test to validate the hypothesis that the sample belongs to the population.

  • Null: Sample mean is same as the population mean.
  • Alternate: Sample mean is not same as the population mean.

The statistic used for this hypothesis testing is called z-statistic, the score for which we calculate as:

z = (x − μ) / (σ / √n), where

x = sample mean

μ = population mean

σ = population standard deviation

n = sample size

If the test statistic is lower than the critical value, we fail to reject the null hypothesis; otherwise, we reject it.
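
A minimal sketch of the calculation, with assumed values for the sample and the population parameters:

```python
# z-statistic from the formula above; all inputs are hypothetical.
import math

x_bar = 172.5  # sample mean
mu = 170.0     # population mean (assumed known)
sigma = 10.0   # population standard deviation (assumed known)
n = 50         # sample size

z = (x_bar - mu) / (sigma / math.sqrt(n))
print(round(z, 3))  # ~1.768; compare with 1.96 (two-tailed critical value at alpha = 0.05)
```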

T-Test

We use a t-test to compare the mean of two given samples. Like a z-test, a t-test also assumes a normal distribution of the sample. When we don’t know the population parameters (mean and standard deviation), we use a t-test.

The Three Versions of a T-Test

  1. Independent sample t-test: compares mean for two groups
  2. Paired sample t-test: compares means from the same group at different times
  3. One sample t-test: tests the mean of a single group against a known mean

The statistic for this hypothesis testing is called t-statistic, the score for which we calculate as:

t = (x1 − x2) / √(s1²/n1 + s2²/n2), where

x1 = mean of sample 1

x2 = mean of sample 2

s1 = standard deviation of sample 1

s2 = standard deviation of sample 2

n1 = sample size 1

n2 = sample size 2

There are multiple variations of the t-test. 
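
All three versions listed above are available in scipy; a sketch on made-up data:

```python
# 1. independent samples, 2. paired samples, 3. one sample vs. a known mean.
from scipy import stats

group_a = [23, 25, 28, 30, 26]
group_b = [27, 29, 31, 33, 30]
before = [80, 82, 78, 85, 79]
after = [78, 80, 77, 83, 76]

print(stats.ttest_ind(group_a, group_b))  # independent sample t-test
print(stats.ttest_rel(before, after))     # paired sample t-test
print(stats.ttest_1samp(group_a, 25))     # one sample t-test against a mean of 25
```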

Note: This article focuses on normally distributed data. You can also use z-tests and t-tests for non-normally distributed data if the sample size is greater than 20; however, other methods are preferable in that situation.

Chi-Square Test

We use the chi-square test to compare categorical variables.

The Two Types of Chi-Square Test

  1. Goodness-of-fit test: determines whether a sample's observed distribution matches an expected (population) distribution
  2. Test of independence: compares two categorical variables in a contingency table to check whether they are related

A small chi-square value means the observed data fit the expected distribution; a large chi-square value means they don’t.

The hypothesis we’re testing is:

  • Null: Variable A and Variable B are independent.
  • Alternate: Variable A and Variable B are not independent.

The statistic used to measure significance, in this case, is called chi-square statistic. The formula we use to calculate the statistic is:

χ² = Σ [ (Or,c − Er,c)² / Er,c ], where

Or,c = observed frequency count at level r of Variable A and level c of Variable B

Er,c = expected frequency count at level r of Variable A and level c of Variable B
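
To make the formula concrete, here is the statistic computed by hand for a hypothetical 2x2 contingency table, cross-checked against scipy:

```python
# Chi-square statistic: sum of (O - E)^2 / E over all cells.
import numpy as np
from scipy import stats

observed = np.array([[30, 20],
                     [25, 25]])

row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
expected = row_totals @ col_totals / observed.sum()  # expected counts under independence

chi2 = ((observed - expected) ** 2 / expected).sum()
print(round(chi2, 3))                                # ~1.01

# Cross-check (correction=False disables Yates' continuity correction for 2x2 tables)
print(stats.chi2_contingency(observed, correction=False)[0])
```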

T-Test vs. Chi-Square

We use a t-test to compare the mean of two given samples, but we use the chi-square test to compare categorical variables.

ANOVA

We use analysis of variance (ANOVA) to compare three or more samples with a single test. 

The Two Major Types of ANOVA

  1. One-way ANOVA: Used to compare the difference between three or more samples/groups of a single independent variable.
  2. MANOVA: Allows us to test the effect of one or more independent variables on two or more dependent variables. In addition, MANOVA can also detect the difference in correlation between dependent variables given the groups of independent variables.

The hypothesis we’re testing with ANOVA is:

  • Null: All pairs of samples are the same (i.e. all sample means are equal).
  • Alternate: At least one pair of samples is significantly different.

The statistic used to measure the significance in this case is the F-statistic. We calculate the F-value using the formula:

F = ((SSE1 − SSE2) / m) / (SSE2 / (n − k)), where

SSE = residual sum of squares

m = number of restrictions

k = number of independent variables

n = number of observations

There are multiple tools available, such as SPSS, R packages and Excel, to carry out ANOVA on a given sample.
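
Python's scipy is one such tool; a minimal one-way ANOVA sketch on invented scores:

```python
# One-way ANOVA across three groups; reject H0 of equal means if p < alpha.
from scipy import stats

group1 = [85, 86, 88, 75, 78]
group2 = [81, 82, 84, 88, 86]
group3 = [90, 92, 95, 91, 89]

f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```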

The Takeaway

If you learn only one thing from this article, let it be this: in all of these tests, we compare a test statistic with a critical value to decide whether to reject the null hypothesis. However, the statistic and the way we calculate it differ depending on the type of variable, the number of samples we’re analyzing and whether or not we know the population parameters. Knowing these factors lets us choose a suitable statistical test and null hypothesis, and this principle is instrumental to understanding these basic statistical concepts.

Should I use a t-test or a chi-square test?

A simple way to decide between a chi-square test and a t-test is to look at the types of variables you are working with. If you have two variables that are both categorical, i.e. they can be placed in categories like male/female or Republican/Democrat/independent, then you should use a chi-square test.

When would a chi-square test be used?

Market researchers use the chi-square test when they find themselves in one of the following situations: they need to estimate how closely an observed distribution matches an expected distribution (a “goodness-of-fit” test), or they need to estimate whether two random variables are independent.
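
A goodness-of-fit sketch in scipy, using hypothetical die-roll counts:

```python
# Do 90 observed die rolls match the uniform counts a fair die would produce?
from scipy import stats

observed = [16, 18, 16, 14, 12, 14]  # counts of faces 1-6 (hypothetical)
expected = [15] * 6                  # fair die: 90 rolls / 6 faces

chi2, p_value = stats.chisquare(observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")  # large p: no evidence against fairness
```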

When should a chi-square test not be used?

As a rule of thumb, chi-square should not be used if more than 20 percent of the expected frequencies have a value of less than 5 (it does not matter what the observed frequencies are).
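
You can check this rule directly from the expected frequencies that scipy computes; the contingency table below is hypothetical:

```python
# Flag tables where more than 20% of expected cell counts fall below 5.
import numpy as np
from scipy import stats

observed = np.array([[12, 5, 3],
                     [9, 7, 4]])

expected = stats.chi2_contingency(observed)[3]  # expected frequency table
share_below_5 = (expected < 5).mean()
print(expected.round(2))
print(f"{share_below_5:.0%} of expected counts are below 5")  # >20% -> chi-square unreliable
```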

Why would you use a chi-square test?

A chi-square test is a statistical test used to compare observed results with expected results. Its purpose is to determine whether a difference between observed and expected data is due to chance, or due to a relationship between the variables you are studying.