Learning Objectives

A Working Example

Assertions

Unit Tests

Expectation

  • An expectation throws an error if the result of a function is not what you expect.

  • In {testthat} all expectations begin with expect_.

  • The first argument is the actual result of a function in your package. The second argument is the expected result.

  • The most common expectation is to test for equality with expect_equal().

    x <- 10
    y <- 10
    expect_equal(x, y)

    You can specify a tolerance level for values that are only approximately equal:

    expect_equal(10, 10 + 10^-8)
    expect_equal(10, 10 + 10^-5)
    ## Error: 10 (`actual`) not equal to 10 + 10^-5 (`expected`).
    ## 
    ##   `actual`: 10.00000
    ## `expected`: 10.00001
    expect_equal(10, 10 + 10^-5, tolerance = 10^-4)
  • Make sure you only check for equality between two things. If you provide three unnamed arguments, the third one can get interpreted as the tolerance, so a failing comparison may silently pass. This is a common error that I have made many times.

    ## Bad
    expect_equal(10, 10, 10)
    ## Warning: Unused arguments (10)
    ## In older editions, this would also run OK (tolerance = 10)
    expect_equal(10, 2, 10)
    ## Warning: Unused arguments (10)
    ## Error: 10 (`actual`) not equal to 2 (`expected`).
    ## 
    ##   `actual`: 10
    ## `expected`:  2
  • Use ignore_attr = TRUE if your objects have different attributes and you just care about the numeric values (by default, expect_equal() will throw an error):

    local_edition(3) ## not necessary for package
    names(x) <- "hello"
    expect_equal(x, y)
    ## Error: `x` (`actual`) not equal to `y` (`expected`).
    ## 
    ## `names(actual)` is a character vector ('hello')
    ## `names(expected)` is absent
    expect_equal(x, y, ignore_attr = TRUE)
  • The local_edition(3) call makes my code chunks use the most recent edition of {testthat}. You don’t need to worry about that in your package; {usethis} will set up the third edition automatically. You can explicitly use the third edition by adding the following to your DESCRIPTION file:

    Config/testthat/edition: 3
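  • A sketch of the {usethis} call that sets this up (it also creates the tests/testthat/ folder and tests/testthat.R); I believe recent versions of {usethis} let you request the edition directly:

    usethis::use_testthat(edition = 3)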
  • expect_match() checks for a regular expression match.

    expect_match("hello", "ll")
    expect_match("helo", "ll")
    ## Error: "helo" does not match "ll".
    ## Actual value: "helo"
  • You can use expect_warning() and expect_error() to check that your functions warn and error when they should (a short expect_warning() sketch follows the code below).

    simreg <- function(n, x, beta0, beta1, sigma2) {
      ## Check input
      stopifnot(length(x) == n)
    
      ## Simulate y
      eps <- stats::rnorm(n = n, mean = 0, sd = sqrt(sigma2))
      y <- beta0 + beta1 * x + eps
      return(y)
    }
    
    x <- runif(100)
    beta0 <- 0
    beta1 <- 2
    sigma2 <- 0.5
    expect_error(simreg(n = 1, 
                        x = x, 
                        beta0 = beta0, 
                        beta1 = beta1, 
                        sigma2 = sigma2))
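  • expect_warning() works the same way for code that should produce a warning. simreg() never warns, so here is a minimal sketch using base R code that does (log(-1) warns about NaNs):

    expect_warning(log(-1))
    expect_warning(log(-1), regexp = "NaNs")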
  • It is better to think harder about what your unit tests should check, but you can test just that code runs without an error by using expect_error() and setting regexp = NA:

    expect_error(simreg(n = length(x), 
                        x = x, 
                        beta0 = beta0, 
                        beta1 = beta1, 
                        sigma2 = sigma2), 
                 regexp = NA)
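  • Newer versions of {testthat} also provide expect_no_error(), which states the same intent more directly. A sketch, assuming your installed {testthat} is recent enough to have it:

    expect_no_error(simreg(n = length(x), 
                           x = x, 
                           beta0 = beta0, 
                           beta1 = beta1, 
                           sigma2 = sigma2))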
  • expect_type() tests for the type of the output ("double", "integer", "character", "logical", or "list").

    local_edition(3)
    expect_type(1, "double")
    expect_type(1L, "integer")
    expect_type("1", "character")
    expect_type(TRUE, "logical")
  • expect_s3_class() tests for the S3 class of an object (e.g. "data.frame", "tbl_df" for tibbles, "lm", etc.).

    y <- simreg(n = length(x), x = x, beta0 = beta0, beta1 = beta1, sigma2 = sigma2)
    lmout <- lm(y ~ x)
    expect_s3_class(object = lmout, class = "lm")
  • expect_true() acts like stopifnot(), except it is used for unit tests instead of assertions (a more realistic sketch follows the toy example below).

    expect_true(3 == 3)
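  • A slightly more realistic sketch, reusing x, beta0, beta1, and sigma2 from above, checks that every simulated outcome is a finite number:

    y <- simreg(n = length(x), x = x, beta0 = beta0, beta1 = beta1, sigma2 = sigma2)
    expect_true(is.numeric(y))
    expect_true(all(is.finite(y)))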
  • Example: A common test for a simulation script is to see if estimators that we expect to work well on average do, indeed, work well on average. In the case of the simple linear regression model, we will check that, for large \(n\), the OLS estimate of \(\beta_1\) is reasonably close to its true value:

    x <- runif(1000000)
    beta0 <- 0
    beta1 <- 2
    sigma2 <- 0.5
    y <- simreg(n = length(x), x = x, beta0 = beta0, beta1 = beta1, sigma2 = sigma2)
    lmout <- lm(y ~ x)
    expect_equal(coef(lmout)[[2]], beta1, tolerance = 0.01)
  • Exercise: Write an expectation that the output is a numeric vector.

Test

  • Expectations go inside tests.

  • All {testthat} tests are of the form

    test_that("Human Readable Description", {
      ## Code running test
    })
  • The first argument is a human-readable and informative description of what the test is accomplishing.

  • The second argument is an expression where you put your code.

    • An expression is a multi-line block of R code surrounded by curly braces {}.
  • Let’s put a couple of expectations in our test for simreg().

    test_that("simreg() output is consistent", {
      set.seed(991)
      x <- runif(1000000)
      beta0 <- 0
      beta1 <- 2
      sigma2 <- 0.5
      y <- simreg(n = length(x), x = x, beta0 = beta0, beta1 = beta1, sigma2 = sigma2)
      lmout <- lm(y ~ x)
      expect_equal(coef(lmout)[[2]], beta1, tolerance = 0.001)
    
      expect_equal(length(x), length(y))
    })
    ## Test passed 🥇
  • Notice that I put two expectations in the same test. They both concern the output of simreg(), so it makes sense to put them in the same test.

  • Whenever a test generates something randomly, I like to set a seed for reproducibility.

    • A random seed initializes the pseudorandom process. So any “random draws” in R will be the same if you set the same seed via set.seed().
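  • A quick sketch of the idea (991 is just an arbitrary seed):

    set.seed(991)
    draws1 <- rnorm(3)
    set.seed(991)
    draws2 <- rnorm(3)
    expect_equal(draws1, draws2) ## identical because the seed was reset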
  • If you find yourself printing stuff in the console when writing code, try writing a test instead.

  • I usually have a unit test open at the same time that I am coding a function.

  • Try to test each function in only one file. Then, if you change a function and need to update its tests, they will be easier to find and update.

Testthat File

  • A testthat file is just an R script that holds a few related tests.

  • You can create an R script for unit testing by typing

    usethis::use_test()

    specifying the name of the R script as the argument.

  • You should choose a one or two-word name (separated by dashes -) that describes the collection of tests. E.g.

    usethis::use_test("sim-code")
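  • For example, after you fill it in, tests/testthat/test-sim-code.R might look something like this (the contents below are just a sketch built from the earlier simreg() tests):

    ## tests/testthat/test-sim-code.R
    test_that("simreg() output is consistent", {
      set.seed(991)
      x <- runif(1000000)
      y <- simreg(n = length(x), x = x, beta0 = 0, beta1 = 2, sigma2 = 0.5)
      lmout <- lm(y ~ x)
      expect_equal(coef(lmout)[[2]], 2, tolerance = 0.001)
      expect_equal(length(x), length(y))
    })
    
    test_that("simreg() errors when n does not match length(x)", {
      expect_error(simreg(n = 1, x = runif(10), beta0 = 0, beta1 = 2, sigma2 = 0.5))
    })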
  • Exercise: Edit simreg() so that x is either a vector or NULL. If NULL, then your function should simulate x from a standard normal distribution. You can check if a value is NULL via is.null(). The function should then return a list of length two with the simulated x and y values. Create a new unit test for this new behavior.

  • Exercise: In simreg(), if x is provided, then n is not really needed since it can be inferred from x. Set the default of n to be NULL and only require it if x is not provided. Give a warning if both x and n are provided, and throw an error if both x and n are NULL. Write a unit test to check all of these new behaviors. Note: It is typical (and good practice) to put all arguments with defaults after all arguments without defaults.

Test Coverage