
5.1.1a: The Additive Model (No Interaction)


    In a factorial design, we first look at the interactions for significance. In the case where interaction is not significant, then we can drop the interaction term from our model, and we end up with an additive model.

    For a two-factor factorial, the model we initially consider (as we have discussed in Section 5.1) is: \[Y_{ijk} = \mu_{..} + \alpha_{i} + \beta_{j} + (\alpha \beta)_{ij} + \epsilon_{ijk}\]

    Note that the interaction term, \((\alpha \beta)_{ij}\), is written as a product of the two factor symbols; it represents the joint, non-additive effect of the two factors.

    If the interaction is found to be non-significant, then the model reduces to: \[Y_{ijk} = \mu_{..} + \alpha_{i} + \beta_{j} + \epsilon_{ijk}\] Here we can see that the response is modeled simply as the sum of the effects of the two factors (plus error).
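    One useful way to read the additive model is that the effect of one factor is the same at every level of the other. For example, under the reduced model the expected difference between levels 1 and 2 of the first factor does not depend on the level of the second factor: \[E[Y_{1jk}] - E[Y_{2jk}] = (\mu_{..} + \alpha_{1} + \beta_{j}) - (\mu_{..} + \alpha_{2} + \beta_{j}) = \alpha_{1} - \alpha_{2} \quad \text{for every } j.\]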

    Example \(\PageIndex{1}\): Glucose in Blood Serum

    As an example (adapted from Kuehl, 2000), let's look at a study designed to evaluate two chemical methods used for assaying the amount of glucose in blood serum. A large volume of blood serum served as a starting point for the experiment. The blood serum was divided into three portions, each of which was 'doped', i.e., augmented with an additional amount of glucose; three doping levels were used. Samples of the doped serum were then assayed for glucose concentration by one of two chemical methods. This type of 'doping' experiment is commonly used to compare the sensitivity of assay methods.

    The amount of glucose detected in each sample was recorded and is presented in the table below.

    Chemical Assay Method
                     Method 1                  Method 2
    Doping Level     1      2      3           1      2      3
                     46.5   138.4  180.9       39.8   132.4  176.8
                     47.3   144.4  180.5       40.3   132.4  173.6
                     46.9   142.7  183.0       41.2   130.3  174.9
    Solution

    The model was run as a two-factor factorial and produced the following results:

    Type 3 Analysis of Variance
    Source          DF  Sum of Squares  Mean Square  Expected Mean Square                       Error Term    Error DF  F Value  Pr > F
    method           1      263.733889   263.733889  Var(Residual) + Q(method, method*doping)   MS(Residual)        12    98.35  <.0001
    doping           2    57026           28513      Var(Residual) + Q(doping, method*doping)   MS(Residual)        12  10632.5  <.0001
    method*doping    2       13.821111     6.910556  Var(Residual) + Q(method*doping)           MS(Residual)        12     2.58  0.1172
    Residual        12       32.180000     2.681667  Var(Residual)
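    The output above is from SAS (the Mixed procedure with Type 3 tests). As a rough cross-check, a comparable ANOVA table can be sketched in Python with pandas and statsmodels; the column names glucose, method, and doping are just illustrative labels, and because the design is balanced the sums of squares from anova_lm agree with the Type 3 values shown above.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Data from the table above, in long format (one row per assayed sample).
    glucose = [46.5, 47.3, 46.9, 138.4, 144.4, 142.7, 180.9, 180.5, 183.0,   # method 1
               39.8, 40.3, 41.2, 132.4, 132.4, 130.3, 176.8, 173.6, 174.9]   # method 2
    method = [1] * 9 + [2] * 9
    doping = ([1] * 3 + [2] * 3 + [3] * 3) * 2
    df = pd.DataFrame({"glucose": glucose, "method": method, "doping": doping})

    # Full two-factor factorial model with interaction.
    full = smf.ols("glucose ~ C(method) * C(doping)", data=df).fit()

    # For a balanced design like this one, Type I, II, and III sums of squares
    # coincide, so anova_lm with typ=2 reproduces the Type 3 table above.
    print(sm.stats.anova_lm(full, typ=2))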

    Here we can see that the interaction method*doping is not significant at the 5% level (p = 0.1172). We therefore drop the interaction effect from the model and re-fit the additive model. The resulting ANOVA table is:

    The Mixed Procedure
    Type 3 Analysis of Variance
    Source    DF  Sum of Squares  Mean Square  Expected Mean Square       Error Term    Error DF  F Value  Pr > F
    method     1      263.733889   263.733889  Var(Residual) + Q(method)  MS(Residual)        14    80.26  <.0001
    doping     2    57026           28513      Var(Residual) + Q(doping)  MS(Residual)        14  8677.63  <.0001
    Residual  14       46.001111     3.285794  Var(Residual)

    The Error SS is now 46.001, which is the sum of the interaction SS and the error SS from the model that included the interaction (13.821 + 32.180). The df values add the same way (2 + 12 = 14). This example shows that any term not included in the model gets pooled into the error term, which may erroneously inflate the error, especially if the impact of the excluded term on the response is not negligible.
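    Continuing the Python sketch above (and reusing the df and full objects defined there), the pooling of the interaction SS and df into the residual can be checked directly:

    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Reduced (additive) model: the interaction term is dropped.
    additive = smf.ols("glucose ~ C(method) + C(doping)", data=df).fit()
    add_tab = sm.stats.anova_lm(additive, typ=2)
    full_tab = sm.stats.anova_lm(full, typ=2)

    # Dropping the interaction pools its SS and df into the residual:
    # 13.821 + 32.180 = 46.001 and 2 + 12 = 14.
    pooled_ss = (full_tab.loc["C(method):C(doping)", "sum_sq"]
                 + full_tab.loc["Residual", "sum_sq"])
    pooled_df = (full_tab.loc["C(method):C(doping)", "df"]
                 + full_tab.loc["Residual", "df"])
    print(pooled_ss, add_tab.loc["Residual", "sum_sq"])  # both about 46.0
    print(pooled_df, add_tab.loc["Residual", "df"])      # both 14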


    method Least Squares Means
    method  Estimate  Standard Error  DF  t Value  Pr > |t|  Alpha   Lower   Upper
    1         123.40          0.6042  14   204.23    <.0001   0.05  122.10  124.70
    2         115.74          0.6042  14   191.56    <.0001   0.05  114.45  117.04
    Figure \(\PageIndex{1}\): Glucose Tukey grouping for LS-means of method (Method 1 is shown with a single blue bar, Method 2 with a single red bar).
    doping Least Squares Means
    doping  Estimate  Standard Error  DF  t Value  Pr > |t|  Alpha   Lower   Upper
    1          43.67          0.7400  14    59.01    <.0001   0.05   42.08   45.25
    2         136.77          0.7400  14   184.81    <.0001   0.05  135.18  138.35
    3         178.28          0.7400  14   240.92    <.0001   0.05  176.70  179.87
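    Because the design is balanced, the standard errors in these least-squares-means tables follow directly from the residual mean square of the additive model: each method mean averages 9 observations and each doping mean averages 6, so \[SE_{\text{method}} = \sqrt{MSE/9} = \sqrt{3.2858/9} \approx 0.6042, \qquad SE_{\text{doping}} = \sqrt{MSE/6} = \sqrt{3.2858/6} \approx 0.7400.\]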

    Here, we can see that the expected amount of glucose detected in a sample is the overall mean PLUS the effect of the assay method used PLUS the effect of the glucose amount added to the original sample. (Hence, the additive nature of this model!)
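    As a quick numerical check of this additivity, take the rounded least-squares means above. The grand mean is \((123.40 + 115.74)/2 \approx 119.57\), so the fitted value for a Method 1 sample at doping level 1 is \[\hat{Y} = 119.57 + (123.40 - 119.57) + (43.67 - 119.57) \approx 47.50,\] which is close to the observed cell mean of about 46.9, as we would expect when the interaction is negligible.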


    This page titled 5.1.1a: The Additive Model (No Interaction) is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Penn State's Department of Statistics.