3.5: Quantitative Analysis with SPSS- Multivariate Crosstabs


Producing a multivariate crosstabulation is exactly the same as producing a bivariate crosstabulation, except that a third variable is added. Note that, due to the limitations of the crosstabulation approach, you are not actually looking at the relationships among all three variables simultaneously (and this approach is limited to three variables). Rather, you are looking at how controlling for a third variable (your “Layer” or control variable) changes the relationship between the independent and dependent variables in your analysis. What SPSS produces, then, is essentially a stack of crosstabulation tables of your independent and dependent variables, one for each category of your control variable plus one for the sample as a whole, along with statistical significance and association values for each of those tables. This chapter reviews how to produce and interpret a multivariate crosstabulation, using variables with fairly few categories for ease of interpretation. Do note that when variables with many categories are used, results can become quite complex and lengthy, and because few cases are left in each cell of the resulting very long tables, statistical significance is likely to be reduced. Thus, analysts should consider whether the relationship(s) they are interested in are suitable for this type of analysis, and may want to recode variables with many categories into somewhat fewer categories (see the chapter on data management) to facilitate analysis.
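As a brief illustration (the variable names, category codes, and cut points here are hypothetical; see the data management chapter for the full procedure), a recode of this kind might look like the following in SPSS syntax, collapsing a nine-category variable ATTEND into a three-category variable ATTEND3:

* Collapse the original codes 0 through 8 into three broader categories.
RECODE ATTEND (0 THRU 2=1) (3 THRU 5=2) (6 THRU 8=3) INTO ATTEND3.
* Label the new categories so the crosstab output is readable.
VALUE LABELS ATTEND3 1 'rarely' 2 'sometimes' 3 'often'.
EXECUTE.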

To produce a multivariate crosstabulation, follow the same steps as you would follow to produce a bivariate crosstabulation: put the independent variable in the columns box, the dependent variable in the rows box, select column percentages under Cells, and select chi-square and an appropriate measure of association under Statistics. Note that the measure of association you choose should be the same one you would choose for a bivariate analysis with the same independent and dependent variables, as the third variable is a control variable and does not alter the criteria on which the choice of a measure of association is based. The only new step for a multivariate crosstabulation is to add your third variable, the control variable, to the Layer box in the Crosstabs dialog. Figure 1 shows what this looks like for a crosstabulation with the independent variable SEX, the dependent variable HAPMAR, and the control variable DIVORCE. In other words, this analysis explores whether being male or female influences respondents’ feelings of happiness in their marriages, controlling for whether or not they have ever been divorced.

Figure 1. Crosstabs Dialog for an Analysis with SEX as the Independent Variable, HAPMAR as the Dependent Variable, and DIVORCE as the Control Variable
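For readers who prefer working from syntax rather than the dialogs, the following is a minimal sketch of an equivalent CROSSTABS command, assuming the variable names SEX, HAPMAR, and DIVORCE used in this chapter (pasting your choices from the dialog may produce additional default subcommands):

* Dependent variable in rows, independent variable in columns, control variable as the layer.
CROSSTABS
  /TABLES=HAPMAR BY SEX BY DIVORCE
  /CELLS=COUNT COLUMN
  /STATISTICS=CHISQ PHI.
* CELLS=COUNT COLUMN requests observed counts and column percentages.
* STATISTICS=CHISQ PHI requests the chi-square tests plus Phi and Cramer’s V.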

Below are the tables SPSS produces for this analysis; an explanation of how to interpret these results follows the tables.
Happiness of R’s marriage * Respondent’s sex * Ever been divorced or separated Crosstabulation

| Ever been divorced or separated | Happiness of R’s marriage | | Respondent’s sex: male | Respondent’s sex: female | Total |
|---|---|---|---|---|---|
| yes | very happy | Count | 136 | 133 | 269 |
| | | % within Respondent’s sex | 57.1% | 52.8% | 54.9% |
| | pretty happy | Count | 96 | 107 | 203 |
| | | % within Respondent’s sex | 40.3% | 42.5% | 41.4% |
| | not too happy | Count | 6 | 12 | 18 |
| | | % within Respondent’s sex | 2.5% | 4.8% | 3.7% |
| | Total | Count | 238 | 252 | 490 |
| | | % within Respondent’s sex | 100.0% | 100.0% | 100.0% |
| no | very happy | Count | 467 | 439 | 906 |
| | | % within Respondent’s sex | 65.6% | 59.8% | 62.7% |
| | pretty happy | Count | 224 | 260 | 484 |
| | | % within Respondent’s sex | 31.5% | 35.4% | 33.5% |
| | not too happy | Count | 21 | 35 | 56 |
| | | % within Respondent’s sex | 2.9% | 4.8% | 3.9% |
| | Total | Count | 712 | 734 | 1446 |
| | | % within Respondent’s sex | 100.0% | 100.0% | 100.0% |
| Total | very happy | Count | 603 | 572 | 1175 |
| | | % within Respondent’s sex | 63.5% | 58.0% | 60.7% |
| | pretty happy | Count | 320 | 367 | 687 |
| | | % within Respondent’s sex | 33.7% | 37.2% | 35.5% |
| | not too happy | Count | 27 | 47 | 74 |
| | | % within Respondent’s sex | 2.8% | 4.8% | 3.8% |
| | Total | Count | 950 | 986 | 1936 |
| | | % within Respondent’s sex | 100.0% | 100.0% | 100.0% |
Chi-Square Tests

| Ever been divorced or separated | | Value | df | Asymptotic Significance (2-sided) |
|---|---|---|---|---|
| yes | Pearson Chi-Square | 2.231 (b) | 2 | .328 |
| | Likelihood Ratio | 2.269 | 2 | .322 |
| | Linear-by-Linear Association | 1.649 | 1 | .199 |
| | N of Valid Cases | 490 | | |
| no | Pearson Chi-Square | 6.710 (c) | 2 | .035 |
| | Likelihood Ratio | 6.748 | 2 | .034 |
| | Linear-by-Linear Association | 6.524 | 1 | .011 |
| | N of Valid Cases | 1446 | | |
| Total | Pearson Chi-Square | 8.772 (a) | 2 | .012 |
| | Likelihood Ratio | 8.840 | 2 | .012 |
| | Linear-by-Linear Association | 8.200 | 1 | .004 |
| | N of Valid Cases | 1936 | | |

a. 0 cells (0.0%) have expected count less than 5. The minimum expected count is 36.31.
b. 0 cells (0.0%) have expected count less than 5. The minimum expected count is 8.74.
c. 0 cells (0.0%) have expected count less than 5. The minimum expected count is 27.57.
Symmetric Measures

| Ever been divorced or separated | | | Value | Approximate Significance |
|---|---|---|---|---|
| yes | Nominal by Nominal | Phi | .067 | .328 |
| | | Cramer’s V | .067 | .328 |
| | N of Valid Cases | | 490 | |
| no | Nominal by Nominal | Phi | .068 | .035 |
| | | Cramer’s V | .068 | .035 |
| | N of Valid Cases | | 1446 | |
| Total | Nominal by Nominal | Phi | .067 | .012 |
| | | Cramer’s V | .067 | .012 |
| | N of Valid Cases | | 1936 | |

    First, consider the crosstabulation table. As you can see, this table really consists of three tables stacked on top of each other. Each of these three tables considers the relationship between sex and the happiness of the respondent’s marriage, but there is one table for those who have ever been divorced, one table for those who have never been divorced, and one table for everyone. Comparing the percentages across the rows, we can make the following observations:

    • Among those who have ever been divorced, males are slightly more likely to be very happy in their marriage, while females are somewhat more likely to be not too happy.
    • Among those who have not ever been divorced, males are more likely to be very happy in their marriage, while females are more likely to be pretty happy and are somewhat more likely to be not too happy.
    • Among the entire sample, males are more likely to be very happy in their marriages, while females are more likely to be pretty happy and somewhat more likely to be not too happy.
    • Overall, then, the results suggest men are happier in their marriages than women.

Next, we turn to statistical significance. At the p<0.05 level, this analysis produces significant results for those who have never been divorced and for the entire sample, but not for those who have been divorced. Turning to the measures of association, we find a weak association in every case: the Phi and Cramer’s V values for those who have been divorced, those who have never been divorced, and the entire sample are all quite similar, at about .07.

    Thus, we can conclude that women who have never been divorced are, on average, less happy in their marriages than men who have never been divorced, but that among those who have been divorced, the relationship between sex and marital happiness is not statistically significant.
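For reference, Cramer’s V is calculated from the Pearson chi-square statistic reported above:

\[ \chi^2 = \sum_{i,j}\frac{(O_{ij}-E_{ij})^2}{E_{ij}}, \qquad V = \sqrt{\frac{\chi^2}{N\,\min(r-1,\,c-1)}} \]

where \(O_{ij}\) and \(E_{ij}\) are the observed and expected counts in each cell, \(N\) is the number of valid cases, and \(r\) and \(c\) are the numbers of rows and columns in the table. For the never-divorced group, for example, \(V = \sqrt{6.710/(1446 \times 1)} \approx .068\), matching the Symmetric Measures table; because these are 3-by-2 tables, \(\min(r-1, c-1) = 1\), which is also why Phi and Cramer’s V are identical here.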

    Exercises

    Select three variables of interest. Answer the following questions:

    • Which is the independent variable, which is the dependent variable, and which is the control variable?
    • What is the research hypothesis for this analysis? What do you predict will be the relationship between the independent variable and the dependent variable, and how will the control variable impact this relationship?
    • What is the null hypothesis for this analysis?
• What significance level (p value threshold) have you chosen?
    • Which measure of association is most appropriate for this relationship?

    Next, use SPSS to produce a multivariate crosstabulation according to the instructions in this chapter. Interpret the crosstabulation. First, answer the following questions for each of the stacked crosstabulations of your independent and dependent variable (one for each category of the control variable, plus one for everyone):

    • Is the relationship between the independent and dependent variables statistically significant?
    • Can the null hypothesis be rejected?
    • How strong is the association between the two variables?
• Looking at the pattern of percentages across the rows, what can you determine about the nature of the relationship between the two variables?

    Then, compare your results across the different categories of the control variable.

    • What does this tell you about how the control variable impacts the relationship between the independent and dependent variables?
    • Is there support for your research hypothesis?
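If you would rather work from syntax than from the dialogs for this exercise, a template along the following lines will carry out the steps described in this chapter; the names DEPVAR, INDEPVAR, and CONTROLVAR are placeholders for your own variables, and PHI should be replaced with whatever measure of association you chose above (for example, GAMMA or D for ordinal variables, or LAMBDA for nominal variables):

CROSSTABS
  /TABLES=DEPVAR BY INDEPVAR BY CONTROLVAR
  /CELLS=COUNT COLUMN
  /STATISTICS=CHISQ PHI.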


    This page titled 3.5: Quantitative Analysis with SPSS- Multivariate Crosstabs is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Mikaila Mariel Lemonik Arthur via source content that was edited to the style and standards of the LibreTexts platform.