
Conducting Assessments and Surveys with Shiny

February 22, 2016
Categorized as: R, R-Bloggers

This post describes a framework for using Shiny to conduct, grade, and provide feedback on assessments. The framework supports any multiple choice format, including multiple choice tests and Likert-type surveys. A demo is available online or can be run locally as a GitHub Gist:


Key features of this framework include:


Let’s walk through the statistics assessment example. The first step is to define the multiple choice items, here defined in a CSV file.

> math.items <- read.csv('items.csv', stringsAsFactors=FALSE)
> names(math.items)
[1] "Item"   "Stem"   "Answer" "A"      "B"      "C"      "D"      "E"     
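To make the expected structure of the items file concrete, here is a hypothetical example built directly as a data frame. The column names mirror those shown above; the stems, choices, and answers are invented for illustration and are not from the demo's actual items.csv.

```r
# Hypothetical items with the same columns as the demo's items.csv.
math.items <- data.frame(
  Item   = 1:2,
  Stem   = c("What is the mean of 2, 4, and 6?",
             "Which measure of central tendency is most affected by outliers?"),
  Answer = c("B", "C"),
  A = c("3", "Median"),
  B = c("4", "Mode"),
  C = c("5", "Mean"),
  D = c("6", "Range"),
  E = c("7", "Interquartile range"),
  stringsAsFactors = FALSE
)
names(math.items)
```

The Answer column holds the column name (A through E) of the correct choice, which is what the callback compares responses against.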

We will also define a function that will be called when the user completes the assessment. This function must have a single parameter named results, a character vector of the user's responses. Each value is either NA if there was no response, or a column name of the item.choices defined below (here, A through E). In this example, the results are stored in a reactiveValues object so that the UI will refresh with the new results.

assmt.results <- reactiveValues(
	math = logical(),
	mass = integer(),
	reading = logical()
)

saveResults <- function(results) {
	assmt.results$math <- results == math.items$Answer
}
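To see what that comparison produces, here is a minimal, framework-free sketch of the scoring step; the response vector and answer key are invented for illustration.

```r
# Hypothetical answer key and responses; NA means the item was skipped.
answers   <- c("B", "C", "A")
responses <- c("B", "A", NA)

scored <- responses == answers   # logical vector; skipped items stay NA
scored
mean(scored)                     # NA, because of the skipped item
# Percent correct, counting skipped items as wrong:
sum(scored, na.rm = TRUE) / length(scored)
```

Keeping the raw logical vector (rather than a single score) lets the app later report item-level feedback as well as a total.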

Next, we create an assessment by calling the ShinyAssessment function.

test <- ShinyAssessment(input, output, session,
		name = 'Statistics',
		item.stems = math.items$Stem,
		item.choices = math.items[,c(4:8)],
		callback = saveResults,
		start.label = 'Start the Statistics Assessment',
		itemsPerPage = 1,
		inline = FALSE)

The first three parameters, input, output, and session, are simply passed through from shinyServer. The other parameters you can set are:

Users start an assessment with a link or button using uiOutput(test$ or uiOutput(test$, respectively.

In order for the assessment to take over the entire user interface, the UI must be built on the server side in the server.R file. In this case, the UI resides in the output$ui object:

output$ui <- renderUI({
	if(SHOW_ASSESSMENT$show) { # The assessment will take over the entire page.
		fluidPage(width = 12, uiOutput(SHOW_ASSESSMENT$assessment))
	} else {
		# This is the normal Shiny UI code here.
	}
})

As a result, the ui.R script has only one line of code.

shinyUI(fluidPage( uiOutput('ui') ))

This is one of two limitations of this approach. The other is the creation of the SHOW_ASSESSMENT object. In order for the UI to know when to show the assessment, a global variable must be set (i.e. SHOW_ASSESSMENT$show). To accomplish this, the ShinyAssessment function creates and sets the value of an object in the calling environment. This is generally considered bad practice (note: if you know of another approach that avoids this behavior, please let me know in the comments below). Multiple assessments are supported: subsequent calls to ShinyAssessment first check whether the SHOW_ASSESSMENT object has already been created.
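For readers unfamiliar with the pattern, here is a minimal, self-contained sketch of how a function can create an object in its caller's environment. The function and variable names are invented for illustration; ShinyAssessment's internals may differ.

```r
# Hypothetical helper mimicking the pattern: create a state object in
# the environment of whoever called us, unless it already exists there.
registerState <- function(name = "SHOW_STATE") {
  caller <- parent.frame()
  if (!exists(name, envir = caller, inherits = FALSE)) {
    assign(name, list(show = FALSE), envir = caller)
  }
  invisible(NULL)
}

registerState()   # creates SHOW_STATE in the calling environment
SHOW_STATE$show   # FALSE
```

The exists() check is what makes repeated calls safe, which is how multiple assessments can share one state object.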


It is up to the developer to define the callback function that scores and saves the results. There are many R packages that support databases, including RODBC, RMySQL, ROracle, RJDBC, RSQLite, and RPostgreSQL. Be sure to check out Dean Attali’s article about persistent data storage in Shiny apps, especially if you plan to deploy to
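As one lightweight possibility, a callback could append each scored submission to a CSV file. This is a sketch only: the answer vector stands in for math.items$Answer, the file path is an assumption, and a real deployment would more likely use one of the database packages above.

```r
# Hypothetical callback that persists scored results to a CSV file.
answers      <- c("B", "C", "A")            # stand-in for math.items$Answer
results.file <- tempfile(fileext = ".csv")  # path is an assumption

saveResults <- function(results) {
  row <- data.frame(
    timestamp = format(Sys.time(), "%Y-%m-%d %H:%M:%S"),
    score     = sum(results == answers, na.rm = TRUE) / length(answers)
  )
  appending <- file.exists(results.file)
  # Write the header only on the first call, then append rows.
  write.table(row, results.file, sep = ",", row.names = FALSE,
              col.names = !appending, append = appending)
}

saveResults(c("B", "A", NA))  # one correct answer out of three
read.csv(results.file)$score
```

Appending one row per submission keeps each call cheap, though concurrent sessions writing to the same file would eventually need a proper database.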

I have also modified Huidong Tian’s R script for adding user authentication to the open source version of Shiny to allow users to create accounts. With authenticated accounts, users can retrieve their assessment results across different sessions. The source code is here:

This function represents the first version of an assessment framework for Shiny. Since this framework might be useful for other Shiny users, especially those using R in teaching, I wanted to share it to get feedback and suggestions for improvement. For instance, the function currently only supports a fixed number of items presented in a predefined order. In the future, it will be modified to utilize IRT models and allow for computer adaptive testing.


This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.