12.4: The F Distribution and the F-Ratio


    The distribution used for the hypothesis test is a new one. It is called the \(F\) distribution, named after Sir Ronald Fisher, an English statistician. The \(F\) statistic is a ratio (a fraction). There are two sets of degrees of freedom; one for the numerator and one for the denominator.

    For example, if \(F\) follows an \(F\) distribution and the number of degrees of freedom for the numerator is four, and the number of degrees of freedom for the denominator is ten, then \(F \sim F_{4,10}\).

The \(F\) distribution is closely related to the Student's \(t\)-distribution: an \(F\) distribution with one numerator degree of freedom is the square of a \(t\)-distribution, so if \(T \sim t_{n}\), then \(T^{2} \sim F_{1,n}\). One-Way ANOVA extends the \(t\)-test to comparisons of more than two groups. The full derivation is beyond the level of this course.
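As a quick numerical check of this relationship (a sketch, assuming Python with scipy is available; scipy is not part of this text's TI-based toolkit), the squared two-sided \(t\) critical values match the \(F_{1,n}\) quantiles:

```python
# Check (requires scipy): if T ~ t_nu, then T^2 ~ F(1, nu).
# Quantile form of the identity: f.ppf(q, 1, nu) == t.ppf((1 + q) / 2, nu) ** 2
from scipy.stats import t, f

nu = 10                                           # denominator degrees of freedom
for q in (0.90, 0.95, 0.99):
    f_quantile = f.ppf(q, 1, nu)                  # upper-q quantile of F(1, nu)
    t_sq = t.ppf((1 + q) / 2, nu) ** 2            # squared two-sided t critical value
    print(q, round(f_quantile, 4), round(t_sq, 4))  # the two columns agree
```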

    To calculate the \(F\) ratio, two estimates of the variance are made.

1. Variance between samples: An estimate of \(\sigma^{2}\) that is the variance of the sample means multiplied by \(n\) (when the sample sizes are the same). If the samples are different sizes, the variance between samples is weighted to account for the different sample sizes. This variance is also called the variation due to treatment or explained variation.
2. Variance within samples: An estimate of \(\sigma^{2}\) that is the average of the sample variances (also known as a pooled variance). When the sample sizes are different, the variance within samples is weighted. This variance is also called the variation due to error or unexplained variation.
    • \(SS_{\text{between}} = \text{the sum of squares that represents the variation among the different samples}\)
    • \(SS_{\text{within}} = \text{the sum of squares that represents the variation within samples that is due to chance}\).

    To find a "sum of squares" means to add together squared quantities that, in some cases, may be weighted. We used sum of squares to calculate the sample variance and the sample standard deviation in discussed previously.

    \(MS\) means "mean square." \(MS_{\text{between}}\) is the variance between groups, and \(MS_{\text{within}}\) is the variance within groups.

    Calculation of Sum of Squares and Mean Square
    • \(k =\) the number of different groups
• \(n_{j} =\) the size of the \(j^{th}\) group
    • \(s_{j} =\) the sum of the values in the \(j^{th}\) group
    • \(n =\) total number of all the values combined (total sample size): \[n= \sum n_{j}\]
    • \(x =\) one value: \[\sum x = \sum s_{j}\]
    • Sum of squares of all values from every group combined: \[\sum x^{2}\]
• Total sum of squares: \[SS_{\text{total}} = \sum x^{2} - \dfrac{\left(\sum x\right)^{2}}{n}\]
    • Explained variation: sum of squares representing variation among the different samples: \[SS_{\text{between}} = \sum \left[\dfrac{(s_{j})^{2}}{n_{j}}\right] - \dfrac{\left(\sum s_{j}\right)^{2}}{n}\]
    • Unexplained variation: sum of squares representing variation within samples due to chance: \[SS_{\text{within}} = SS_{\text{total}} - SS_{\text{between}}\]
• \(df\)'s for the different groups (\(df\)'s for the numerator): \[df_{\text{between}} = k - 1\]
    • Equation for errors within samples (\(df\)'s for the denominator): \[df_{\text{within}} = n - k\]
    • Mean square (variance estimate) explained by the different groups: \[MS_{\text{between}} = \dfrac{SS_{\text{between}}}{df_{\text{between}}}\]
    • Mean square (variance estimate) that is due to chance (unexplained): \[MS_{\text{within}} = \dfrac{SS_{\text{within}}}{df_{\text{within}}}\]

\(MS_{\text{between}}\) and \(MS_{\text{within}}\) can be written as follows:

• \(MS_{\text{between}} = \dfrac{SS_{\text{between}}}{df_{\text{between}}} = \dfrac{SS_{\text{between}}}{k - 1}\)
• \(MS_{\text{within}} = \dfrac{SS_{\text{within}}}{df_{\text{within}}} = \dfrac{SS_{\text{within}}}{n - k}\)
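These formulas can be traced in a few lines of code. The following is a minimal sketch in Python (our language choice; the text itself uses a TI calculator), run on the Example 1 data that appears later in this section:

```python
# Minimal sketch of the one-way ANOVA computations above, in plain Python.
# Group sums s_j, group sizes n_j, and the raw values follow the text's notation.
groups = [[5, 4.5, 4, 3], [3.5, 7, 4.5], [8, 4, 3.5]]   # Example 1 data, below

k = len(groups)                          # number of groups
n = sum(len(g) for g in groups)          # total number of values
all_x = [x for g in groups for x in g]

ss_total = sum(x**2 for x in all_x) - sum(all_x)**2 / n
ss_between = sum(sum(g)**2 / len(g) for g in groups) - sum(all_x)**2 / n
ss_within = ss_total - ss_between

ms_between = ss_between / (k - 1)        # df numerator = k - 1
ms_within = ss_within / (n - k)          # df denominator = n - k
F = ms_between / ms_within
print(round(F, 4))                       # about 0.3769 for this data
```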

    The one-way ANOVA test depends on the fact that \(MS_{\text{between}}\) can be influenced by population differences among means of the several groups. Since \(MS_{\text{within}}\) compares values of each group to its own group mean, the fact that group means might be different does not affect \(MS_{\text{within}}\).

    The null hypothesis says that all groups are samples from populations having the same normal distribution. The alternate hypothesis says that at least two of the sample groups come from populations with different normal distributions. If the null hypothesis is true, \(MS_{\text{between}}\) and \(MS_{\text{within}}\) should both estimate the same value.

    The null hypothesis says that all the group population means are equal. The hypothesis of equal means implies that the populations have the same normal distribution, because it is assumed that the populations are normal and that they have equal variances.

    \(F\)-Ratio or \(F\) Statistic

    \[F = \dfrac{MS_{\text{between}}}{MS_{\text{within}}}\]

If \(MS_{\text{between}}\) and \(MS_{\text{within}}\) estimate the same value (as they should when \(H_{0}\) is true), then the \(F\)-ratio should be approximately equal to one; only sampling error would cause it to deviate from one. As it turns out, \(MS_{\text{between}}\) consists of the population variance plus a variance produced by the differences between the samples, while \(MS_{\text{within}}\) is an estimate of the population variance alone. Since variances are always positive, if the null hypothesis is false, \(MS_{\text{between}}\) will generally be larger than \(MS_{\text{within}}\), and the \(F\)-ratio will be larger than one. However, if the population effect is small, it is not unlikely that \(MS_{\text{within}}\) will be larger in a given sample.
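A short simulation illustrates this behavior. The sketch below (numpy assumed available; the seed, means, and group sizes are arbitrary choices of ours) draws every group from the same normal population, so \(H_{0}\) is true, and the resulting \(F\)-ratios average close to one:

```python
# Sketch: under H0 (all groups from one normal population), F hovers around 1.
import numpy as np

rng = np.random.default_rng(0)
k, n_per_group, reps = 3, 10, 10_000
f_ratios = []
for _ in range(reps):
    groups = rng.normal(loc=50, scale=5, size=(k, n_per_group))
    grand_mean = groups.mean()
    # MS_between: n times the variance of the group means (equal group sizes)
    ms_between = n_per_group * ((groups.mean(axis=1) - grand_mean) ** 2).sum() / (k - 1)
    # MS_within: mean of the sample variances (pooled variance, equal sizes)
    ms_within = groups.var(axis=1, ddof=1).mean()
    f_ratios.append(ms_between / ms_within)
# Near 1 under H0; the exact mean is df_within / (df_within - 2) = 27/25 = 1.08 here.
print(np.mean(f_ratios))
```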

    The foregoing calculations were done with groups of different sizes. If the groups are the same size, the calculations simplify somewhat and the \(F\)-ratio can be written as:

    \(F\)-Ratio Formula when the groups are the same size

    \[F = \dfrac{n \cdot s_{\bar{x}}^{2}}{s^{2}_{\text{pooled}}}\]

    where ...

• \(n = \text{the size of each sample (all groups the same size)}\)
• \(df_{\text{numerator}} = k - 1\)
• \(df_{\text{denominator}} = nk - k\) (the total number of values minus the number of groups)
• \(s^{2}_{\text{pooled}} = \text{the mean of the sample variances (pooled variance)}\)
• \(s_{\bar{x}}^{2} = \text{the variance of the sample means}\)
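As a sketch of this shortcut (hypothetical data of ours; only Python's standard-library statistics module is assumed), \(n \cdot s_{\bar{x}}^{2} / s^{2}_{\text{pooled}}\) can be computed directly:

```python
# Equal group sizes: F = n * (variance of group means) / (mean of group variances).
from statistics import variance, mean

groups = [[4, 5, 6, 7], [6, 7, 8, 9], [5, 6, 7, 8]]   # hypothetical equal-size groups
n = len(groups[0])                                    # common group size
group_means = [mean(g) for g in groups]
s2_xbar = variance(group_means)                       # variance of the sample means
s2_pooled = mean(variance(g) for g in groups)         # mean of the sample variances
F = n * s2_xbar / s2_pooled                           # same value as MS_between / MS_within
print(round(F, 4))
```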

    Data are typically put into a table for easy viewing. One-Way ANOVA results are often displayed in this manner by computer software.

| Source of Variation | Sum of Squares (\(SS\)) | Degrees of Freedom (\(df\)) | Mean Square (\(MS\)) | \(F\) |
|---|---|---|---|---|
| Factor (Between) | \(SS(\text{Factor})\) | \(k - 1\) | \(MS(\text{Factor}) = \dfrac{SS(\text{Factor})}{k - 1}\) | \(F = \dfrac{MS(\text{Factor})}{MS(\text{Error})}\) |
| Error (Within) | \(SS(\text{Error})\) | \(n - k\) | \(MS(\text{Error}) = \dfrac{SS(\text{Error})}{n - k}\) | |
| Total | \(SS(\text{Total})\) | \(n - 1\) | | |
    Example \(\PageIndex{1}\)

Three different diet plans are to be tested for mean weight loss. The entries in the table are the weight losses for the different plans. The one-way ANOVA results are shown below.

| Plan 1: \(n_{1} = 4\) | Plan 2: \(n_{2} = 3\) | Plan 3: \(n_{3} = 3\) |
|---|---|---|
| 5 | 3.5 | 8 |
| 4.5 | 7 | 4 |
| 4 | | 3.5 |
| 3 | 4.5 | |

\[s_{1} = 16.5,\quad s_{2} = 15,\quad s_{3} = 15.5\]

    Following are the calculations needed to fill in the one-way ANOVA table. The table is used to conduct a hypothesis test.

    where \(n_{1} = 4, n_{2} = 3, n_{3} = 3\) and \(n = n_{1} + n_{2} + n_{3} = 10\) so

\[\begin{align} SS(\text{between}) &= \dfrac{(16.5)^{2}}{4} + \dfrac{(15)^{2}}{3} + \dfrac{(15.5)^{2}}{3} - \dfrac{(16.5 + 15 + 15.5)^{2}}{10} \\ &= 2.2458 \end{align}\]

\[\begin{align} SS(\text{total}) &= \sum x^{2} - \dfrac{\left(\sum x\right)^{2}}{n} \\ &= (5^{2} + 4.5^{2} + 4^{2} + 3^{2} + 3.5^{2} + 7^{2} + 4.5^{2} + 8^{2} + 4^{2} + 3.5^{2}) - \dfrac{(5 + 4.5 + 4 + 3 + 3.5 + 7 + 4.5 + 8 + 4 + 3.5)^{2}}{10} \\ &= 244 - \dfrac{47^{2}}{10} = 244 - 220.9 \\ &= 23.1 \end{align}\]

    \[\begin{align} SS(\text{within}) &= SS(\text{total}) - SS(\text{between}) \\ &= 23.1 - 2.2458 \\ &= 20.8542 \end{align}\]

One-Way ANOVA Table: The formulas for \(SS(\text{Total})\), \(SS(\text{Factor}) = SS(\text{Between})\), and \(SS(\text{Error}) = SS(\text{Within})\) are as shown previously. The same information is provided by the TI calculator hypothesis test function ANOVA in STAT TESTS (syntax is \(\text{ANOVA}(L1, L2, L3)\), where \(L1, L2, L3\) hold the data from Plan 1, Plan 2, and Plan 3 respectively).

| Source of Variation | Sum of Squares (\(SS\)) | Degrees of Freedom (\(df\)) | Mean Square (\(MS\)) | \(F\) |
|---|---|---|---|---|
| Factor (Between) | \(SS(\text{Factor}) = SS(\text{Between}) = 2.2458\) | \(k - 1 = 3 - 1 = 2\) | \(MS(\text{Factor}) = \dfrac{SS(\text{Factor})}{k - 1} = \dfrac{2.2458}{2} = 1.1229\) | \(F = \dfrac{MS(\text{Factor})}{MS(\text{Error})} = \dfrac{1.1229}{2.9792} = 0.3769\) |
| Error (Within) | \(SS(\text{Error}) = SS(\text{Within}) = 20.8542\) | \(n - k = 10 - 3 = 7\) | \(MS(\text{Error}) = \dfrac{SS(\text{Error})}{n - k} = \dfrac{20.8542}{7} = 2.9792\) | |
| Total | \(SS(\text{Total}) = 2.2458 + 20.8542 = 23.1\) | \(n - 1 = 10 - 1 = 9\) | | |
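If software is available, the table can be cross-checked; for instance, scipy's f_oneway function (a sketch of ours, not part of the text's TI-based workflow) reproduces the \(F\) statistic:

```python
# Cross-check of Example 1 with scipy (assumed available).
from scipy.stats import f_oneway

plan1 = [5, 4.5, 4, 3]
plan2 = [3.5, 7, 4.5]
plan3 = [8, 4, 3.5]
result = f_oneway(plan1, plan2, plan3)        # returns (F statistic, p-value)
print(round(result.statistic, 4))             # 0.3769, matching MS(Factor)/MS(Error)
print(round(result.pvalue, 4))                # large p-value: no evidence the plans differ
```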
    Exercise \(\PageIndex{1}\)

As part of an experiment to see how different types of soil cover would affect slicing tomato production, Marist College students grew tomato plants under different soil cover conditions. Groups of three plants each had one of the following treatments:

    • bare soil
    • a commercial ground cover
    • black plastic
    • straw
    • compost

    All plants grew under the same conditions and were the same variety. Students recorded the weight (in grams) of tomatoes produced by each of the \(n = 15\) plants:

| Bare: \(n_{1} = 3\) | Ground Cover: \(n_{2} = 3\) | Plastic: \(n_{3} = 3\) | Straw: \(n_{4} = 3\) | Compost: \(n_{5} = 3\) |
|---|---|---|---|---|
| 2,625 | 5,348 | 6,583 | 7,285 | 6,277 |
| 2,997 | 5,682 | 8,560 | 6,897 | 7,818 |
| 4,915 | 5,482 | 3,830 | 9,230 | 8,677 |

    Create the one-way ANOVA table.

    Answer

Enter the data into lists L1, L2, L3, L4, and L5. Press STAT and arrow over to TESTS. Arrow down to ANOVA. Press ENTER and enter ANOVA(L1, L2, L3, L4, L5). Press ENTER. The table below is filled in with the results from the calculator.

One-Way ANOVA table

| Source of Variation | Sum of Squares (\(SS\)) | Degrees of Freedom (\(df\)) | Mean Square (\(MS\)) | \(F\) |
|---|---|---|---|---|
| Factor (Between) | 36,648,561 | \(k - 1 = 5 - 1 = 4\) | \(\dfrac{36,648,561}{4} = 9,162,140\) | \(\dfrac{9,162,140}{2,044,672.6} = 4.4810\) |
| Error (Within) | 20,446,726 | \(n - k = 15 - 5 = 10\) | \(\dfrac{20,446,726}{10} = 2,044,672.6\) | |
| Total | 57,095,287 | \(n - 1 = 15 - 1 = 14\) | | |
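As with Example 1, the calculator's output can be cross-checked in software; a sketch using scipy's f_oneway (assumed available):

```python
# Cross-check of the exercise with scipy's f_oneway.
from scipy.stats import f_oneway

bare = [2625, 2997, 4915]
ground_cover = [5348, 5682, 5482]
plastic = [6583, 8560, 3830]
straw = [7285, 6897, 9230]
compost = [6277, 7818, 8677]
result = f_oneway(bare, ground_cover, plastic, straw, compost)
print(round(result.statistic, 4))   # about 4.481, matching the table's F
```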

    The one-way ANOVA hypothesis test is always right-tailed because larger \(F\)-values are way out in the right tail of the \(F\)-distribution curve and tend to make us reject \(H_{0}\).

