1: Chapters

    • 1.1: Introduction to Crime Data Analysis, R and RStudio
      Data analysis in the criminal justice system extends beyond policing, playing a crucial role in investigations, prevention strategies, and broader public safety initiatives. Crime analysts support law enforcement by uncovering trends, aiding investigations, and optimizing resource allocation. They also contribute to correctional settings and court proceedings by identifying security risks and assessing rehabilitation programs' effectiveness.
    • 1.2: Introduction to Data Formations and Graphics
      This chapter focuses on transforming survey data from the 2012 General Social Survey (GSS) and visualizing it with the R programming language; the analysis centers on Americans' attitudes toward police use of force. The chapter explains how to import the data using the haven package, transform categorical variables correctly, and employ the dplyr package for data manipulation. It also introduces the ggplot2 package for creating graphical representations of the data. An illustrative R sketch of this workflow follows the chapter list below.
    • 1.3: Creating a New Variable and Producing Summary Statistics
      This chapter focuses on creating new variables and producing summary statistics using the 2018 Part 1 crime data from Pennsylvania's Uniform Crime Report. It explains the distinction between Part 1 (serious) and Part 2 (less serious) crimes and the limitations of the UCR due to underreporting. The chapter then shows how to manipulate the data in R, including renaming variables, calculating crime rates, and categorizing cities by population. A short R sketch appears after this list.
    • 1.4: Central Tendency and Variability
      The page explains central tendency and variability, the key statistical measures used to summarize and describe datasets. Central tendency includes the mean, median, and mode, while variability includes the variance and standard deviation. The page also shows how to compute these measures in R with the Gapminder dataset, walking through loading and manipulating the data and calculating each statistic. A brief R example follows the list.
    • 1.5: Reliability of a Scale
      The discussion focuses on the concepts of reliability and validity within criminal justice research, emphasizing the importance of consistent and accurate measurement of abstract social constructs like societal disadvantage. Reliability refers to consistency in measurements, while validity concerns the accuracy of these measures. The chapter uses the National Crime Victimization Survey (NCVS) as a case study.
    • 1.6: Chi-Squared Test
      The text outlines the process of hypothesis testing, focusing on the null hypothesis significance testing (NHST) approach and the chi-squared test. Hypothesis testing compares empirical data against a null hypothesis, which researchers aim to reject in order to support an alternative hypothesis. The chi-squared test is used to compare categorical variables, demonstrated by examining differences in credit card use between genders. An illustrative chisq.test() call appears after this list.
    • 1.7: T-Test
      This page introduces the t-test, a statistical tool for comparing the means of two groups on a continuous outcome. Two types are discussed: the independent-samples t-test, for comparing distinct groups, and the paired-samples t-test, for comparing the same group over time. The running example assesses the impact of Cognitive Behavioral Therapy (CBT) programs on inmates' antisocial attitudes. A simulated t.test() example follows the list.
    • 1.8: Analysis of Variance
      The page introduces ANOVA, a statistical method for analyzing differences among group means and an extension of the t-test. The focus is a study examining media exposure's impact on perceptions of the police using a one-way ANOVA: participants watched different police-related videos, and their confidence in the police was then assessed. The ANOVA results indicated no significant difference in confidence across video conditions. A simulated aov() example follows the list.
    • 1.9: Correlation
      This page introduces correlation, focusing on the Pearson product-moment correlation coefficient, which measures the linear relationship between two variables. It addresses the misconception that correlation implies causation, explaining that correlation, while necessary, is not by itself sufficient to establish causation. The text then covers calculating and interpreting Pearson's r, using the USArrests dataset as an example. A short example using USArrests follows the list.
    • 1.10: Linear Regression
      This page introduces regression analysis and its close relationship with correlation analysis. Regression is a statistical method for understanding and predicting the relationship between a dependent variable and one or more independent variables. The chapter covers simple and multiple linear regression, illustrated with an inmate survey study assessing the impact of low self-control and age on risky lifestyles. A simulated lm() example follows the list.
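
    The short R sketches below illustrate several of the techniques summarized above. All file names, variable names, and data values in them are assumptions for illustration and are not drawn from the chapters themselves. The first sketch mirrors the Chapter 1.2 workflow of importing, recoding, and plotting survey data; the file name GSS2012.dta and the item polhitok are assumed.

        # Minimal sketch, assuming a Stata export of the 2012 GSS named "GSS2012.dta"
        # and a labelled attitude item called polhitok (both names are assumptions).
        library(haven)    # read SPSS/Stata/SAS files
        library(dplyr)    # data manipulation
        library(ggplot2)  # graphics

        gss <- read_dta("GSS2012.dta")                 # import the survey data
        gss <- gss %>%
          mutate(polhitok = as_factor(polhitok)) %>%   # labelled numeric -> factor
          filter(!is.na(polhitok))                     # drop missing responses

        ggplot(gss, aes(x = polhitok)) +
          geom_bar() +                                 # bar chart of response counts
          labs(x = "Approval of police use of force", y = "Count")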
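
    For Chapter 1.3, the next sketch mimics renaming a variable, computing a crime rate per 100,000 residents, and categorizing cities by population. The toy data frame stands in for the 2018 Pennsylvania UCR extract; the column names and counts are invented.

        library(dplyr)

        # Toy stand-in for the 2018 Part 1 crime data (values are illustrative only)
        ucr <- data.frame(city = c("Philadelphia", "Erie", "Altoona"),
                          population = c(1580000, 96000, 43000),
                          part1_total = c(60000, 3000, 1500))

        ucr <- ucr %>%
          rename(part1 = part1_total) %>%                        # rename a variable
          mutate(crime_rate = part1 / population * 100000,       # Part 1 crimes per 100,000
                 size = if_else(population >= 100000,            # categorize by population
                                "large city", "small city"))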
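
    Chapter 1.4's measures can be computed with base R functions. This sketch uses the gapminder package named in the summary; restricting to the 2007 cross-section is an illustrative choice.

        library(gapminder)   # the Gapminder dataset
        library(dplyr)

        gap2007 <- filter(gapminder, year == 2007)   # one cross-section of the data

        mean(gap2007$lifeExp)     # mean life expectancy
        median(gap2007$lifeExp)   # median
        var(gap2007$lifeExp)      # variance
        sd(gap2007$lifeExp)       # standard deviation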
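
    A chi-squared test of independence, as described for Chapter 1.6, can be run with base R's chisq.test(). The 2 x 2 table of gender by credit card use below contains made-up counts.

        # Hypothetical contingency table: gender by credit card use (counts invented)
        tab <- matrix(c(30, 20, 25, 25), nrow = 2,
                      dimnames = list(gender = c("male", "female"),
                                      uses_card = c("yes", "no")))
        chisq.test(tab)   # test whether card use is independent of gender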
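
    Chapter 1.7's two t-test variants are shown below on simulated scores; the chapter's CBT survey data are not reproduced here, so the numbers are arbitrary.

        set.seed(1)
        cbt_group <- rnorm(50, mean = 20, sd = 5)     # antisocial-attitude scores after CBT
        control   <- rnorm(50, mean = 23, sd = 5)     # scores for a comparison group
        t.test(cbt_group, control)                    # independent-samples t-test

        pre  <- rnorm(50, mean = 24, sd = 5)          # same inmates before the program
        post <- pre - rnorm(50, mean = 2, sd = 1)     # and after
        t.test(pre, post, paired = TRUE)              # paired-samples t-test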
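
    A one-way ANOVA like the one summarized for Chapter 1.8 can be fit with aov(). The video conditions and confidence scores below are simulated, not the chapter's data.

        set.seed(2)
        study <- data.frame(
          condition  = factor(rep(c("positive video", "negative video", "control"), each = 30)),
          confidence = rnorm(90, mean = 3, sd = 1)    # confidence in the police (simulated)
        )
        fit <- aov(confidence ~ condition, data = study)
        summary(fit)                                  # F test across the three conditions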
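
    The USArrests dataset named in the Chapter 1.9 summary ships with base R, so Pearson's r can be computed directly; pairing Murder with Assault is an illustrative choice.

        data(USArrests)
        cor(USArrests$Murder, USArrests$Assault)        # Pearson's r
        cor.test(USArrests$Murder, USArrests$Assault)   # r with a significance test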
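
    Simple and multiple linear regression, as described for Chapter 1.10, use lm(). The simulated predictors below stand in for the inmate survey's low self-control and age measures.

        set.seed(3)
        n <- 200
        low_self_control <- rnorm(n)                            # simulated predictor
        age              <- sample(18:60, n, replace = TRUE)    # simulated predictor
        risky_lifestyle  <- 0.5 * low_self_control - 0.02 * age + rnorm(n)

        summary(lm(risky_lifestyle ~ low_self_control))         # simple regression
        summary(lm(risky_lifestyle ~ low_self_control + age))   # multiple regression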


    This page titled 1: Chapters is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Jaeyong Choi (The Pennsylvania Alliance for Design of Open Textbooks (PA-ADOPT)) via source content that was edited to the style and standards of the LibreTexts platform.