
7.5: Deep Learning


    Nowadays, “deep learning” is something of a buzzword. It is often used to describe software packages that bundle multiple classification methods, always including some complicated neural networks (multi-layered, recurrent, etc.). In that sense, R with the necessary packages is a deep learning system. What is missing (actually, not anymore) is a common interface to all the “animals” in this zoo of methods. The mlr package was created to unify the learning interface in R:

    Code \(\PageIndex{1}\) (R):

    library(mlr)
    ...
    ## 1) Define the task
    ## Specify the type of analysis (e.g. classification)
    ## and provide data and response variable
    task <- makeClassifTask(data=iris, target="Species")
    ## 2) Define the learner, use listLearners()[,1]
    ## Choose a specific algorithm
    lrn <- makeLearner("classif.ctree")
    n <- nrow(iris)
    train.set <- sample(n, size=2/3*n)
    test.set <- setdiff(1:n, train.set)
    ## 3) Fit the model
    ## Train the learner on the task using a random subset
    ## of the data as training set
    model <- train(lrn, task, subset=train.set)
    ## 4) Make predictions
    ## Predict values of the response variable for new
    ## observations by the trained model
    ## using the other part of the data as test set
    pred <- predict(model, task=task, subset=test.set)
    ## 5) Evaluate the learner
    ## Calculate the mean misclassification error and accuracy
    performance(pred, measures=list(mmce, acc))
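
    The learner is the only part that needs to change in order to try another method. To stay closer to the “deep learning” theme, one could, for example, swap in a neural network learner and cross-validate it instead of using a single train/test split. The sketch below is an illustration, not part of the original example: it assumes the nnet package is installed (mlr wraps it as "classif.nnet"), and the hidden layer size is an arbitrary choice.

    Code \(\PageIndex{2}\) (R):

    library(mlr)
    ## Same task as above
    task <- makeClassifTask(data=iris, target="Species")
    ## A single-hidden-layer neural network; size=5 is an arbitrary choice
    lrn.nn <- makeLearner("classif.nnet", size=5, trace=FALSE)
    ## 10-fold cross-validation instead of one train/test split
    rdesc <- makeResampleDesc("CV", iters=10)
    res <- resample(lrn.nn, task, rdesc, measures=list(mmce, acc))
    ## Aggregated misclassification error and accuracy
    res$aggr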

    In addition, R now has interfaces (ways to connect) to almost all of the famous “deep learning” software systems, namely TensorFlow, H2O, Keras, Caffe and MXNet.
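
    As an illustration of how such an interface looks from R, here is a minimal sketch with the keras package (assuming it is installed together with a working TensorFlow backend); the layer sizes and training settings are arbitrary choices, not recommendations.

    Code \(\PageIndex{3}\) (R):

    library(keras)
    ## Prepare iris: numeric matrix of predictors and one-hot encoded species
    x <- as.matrix(iris[, 1:4])
    y <- to_categorical(as.integer(iris$Species) - 1, num_classes=3)
    ## Define a small feed-forward network
    model <- keras_model_sequential()
    model %>%
     layer_dense(units=8, activation="relu", input_shape=4) %>%
     layer_dense(units=3, activation="softmax")
    ## Compile and train
    model %>% compile(optimizer="adam", loss="categorical_crossentropy", metrics="accuracy")
    history <- model %>% fit(x, y, epochs=50, batch_size=16, verbose=0)
    ## Accuracy on the training data (no train/test split in this sketch)
    model %>% evaluate(x, y, verbose=0)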

