How does virtual learning impact students in higher education?
In 2020, the pandemic pushed millions of college students around the world into virtual learning. As the new academic year begins, many colleges in the U.S. are poised to bring students back to campus, but considerable uncertainty remains. Some institutions will undoubtedly continue to offer online or hybrid classes, even as in-person instruction resumes. At the same time, low vaccination rates, new coronavirus variants, and travel restrictions for international students may mean a return to fully online instruction for some U.S. students and many more around the world.
Public attention has largely focused on the learning losses of K-12 students who shifted online during the pandemic. Yet, we may have reason to be concerned about postsecondary students too. What can we expect from the move to virtual learning? How does virtual learning impact student outcomes? And how does it compare to in-person instruction at the postsecondary level?
Several new papers shed light on these issues, building on previous work in higher education and assessing the efficacy of online education in new contexts. The results are generally consistent with past research: Online coursework generally yields worse student performance than in-person coursework. The negative effects of online course-taking are particularly pronounced for less-academically prepared students and for students pursuing bachelor’s degrees. New evidence from 2020 also suggests that the switch to online course-taking in the pandemic led to declines in course completion. However, a few new studies point to some positive effects of online learning, too. This post discusses this new evidence and its implications for the upcoming academic year.
A number of studies have assessed online versus in-person learning at the college level in recent years. A key concern in this literature is that students typically self-select into online or in-person programs or courses, confounding estimates of student outcomes. That is, differences in student characteristics, rather than the mode of instruction, may drive the differences in outcomes we observe. In addition, the content, instructor, assignments, and other course features might differ across online and in-person modes as well, which makes apples-to-apples comparisons difficult.
The most compelling studies of online education draw on a random assignment design (i.e., a randomized controlled trial, or RCT) to isolate the causal effect of online versus in-person learning. Several pathbreaking studies in recent years were able to estimate the causal impact of online learning on final exam performance or course grades. Virtually all of these studies found that online instruction resulted in lower student performance relative to in-person instruction, although in one case, students with hybrid instruction performed similarly to their in-person peers. Negative effects of online course-taking were particularly pronounced for males and less-academically prepared students.
A new paper by Kofoed and co-authors adds to this literature, looking specifically at online learning during the COVID-19 pandemic in a novel context: the U.S. Military Academy at West Point. When many colleges moved classes completely online or let students choose their own mode of instruction at the start of the pandemic, West Point economics professors arranged to randomly assign students to in-person or online modes of learning. The same instructors each taught one online and one in-person economics class, and all materials, exams, and assignments were otherwise identical, minimizing the biases that otherwise stand in the way of true comparisons. They find that online education lowered a student's final grade by about 0.2 standard deviations. Their work also confirms the results of previous papers, finding that the negative effect of online learning was driven by students with lower academic ability. A follow-up survey of students' experiences suggests that online students had trouble concentrating on their coursework and felt less connected to both their peers and instructors relative to their in-person peers.
Cacault et al. (2021) also use an RCT to assess the effects of online lectures in a Swiss university. The authors find that having access to a live-streamed lecture in addition to an in-person option improves the achievement of high-ability students, but lowers the achievement of low-ability students. The key to understanding this two-pronged effect is the counterfactual: When streamed lectures substitute for no attendance (e.g., if a student is ill), they can help students, but when streaming lectures substitute for in-person attendance, they can hurt students.
One drawback of RCTs is that these studies are typically limited to a single college and often a single course within that college, so it is not clear if the results generalize to other contexts. Several papers in the literature draw on larger samples of students in non-randomized settings and mitigate selection problems with various econometric methods. These papers find common themes: Students in online courses generally get lower grades, are less likely to perform well in follow-on coursework, and are less likely to graduate than similar students taking in-person classes.
In a recent paper, my co-author Hernando Grueso and I add to this strand of the literature, expanding it to a very different context. We draw on data from Colombia, where students take a mandatory exit exam when they graduate. Using these data, we can assess test scores as an outcome, rather than the (more subjective) course grades used in other studies. We can also assess performance across a wide range of institutions, degree programs, and majors.
We find that bachelor’s degree students in online programs perform worse on nearly all test score measures—including math, reading, writing, and English—relative to their counterparts in similar on-campus programs. Results for shorter technical certificates, however, are more mixed. While online students perform significantly worse than on-campus students on exit exams in private institutions, they perform better in SENA, the main public vocational institution in the country, suggesting substantial heterogeneity across institutions in the quality of online programming. Interviews with SENA staff indicate that SENA’s approach of synchronous learning and real-world projects may be working for some online students, but we cannot definitively call this causal evidence, particularly because we can only observe the students who graduate.
A new working paper by Fischer et al. pushes beyond near-term outcomes, like grades and scores, to consider longer-term outcomes, like graduation and time-to-degree, for bachelor's degree-seeking students at a large public university in California. They find reason to be optimistic about online coursework: When students take courses required for their major online, they are more likely to graduate in four years and see a small decrease in time-to-degree relative to students taking the requirements in person.
On the other hand, new work considering course completion during the pandemic is less promising. Looking at student outcomes in spring 2020 in Virginia’s community college system, Bird et al. find that the switch to online instruction resulted in an 8.5% reduction in course completion. They find that both withdrawals and failures rose. They also confirm findings in the literature that negative impacts are more extreme among less-academically-prepared students.
Much more research on virtual learning will undoubtedly be forthcoming post-pandemic. For now, college professors and administrators should consider that college students pushed online may be less prepared for future follow-on classes, their GPAs may be lower, course completion may suffer, and overall learning may have declined relative to in-person cohorts in previous years. These results seem particularly problematic for students with less academic preparation and those in bachelor’s degree programs.
The research is less clear on the impact of virtual instruction on college completion. Although course completion rates appear to be lower for online courses relative to in-person ones, the evidence is mixed on the impact of virtual instruction on graduation and time-to-degree. The negative learning impacts, reduced course completion, and lack of connection with other students and faculty in a virtual environment could ultimately reduce college completion rates. On the other hand, there is also evidence that the availability of online classes may allow students to move through their degree requirements more quickly.
As the fall semester approaches, colleges will need to make critical choices about online, hybrid, and in-person course offerings. Maintaining some of the most successful online courses will enhance flexibility at this uncertain time and allow some students to continue to make progress on their degrees if they get sick or cannot return to campus for other reasons. For those transitioning back to campus, administrators might consider additional in-person programming, review sessions, tutoring, and other enhanced supports as students make up for learning losses associated with the virtual instruction of the past year.
Brown Center Chalkboard
The Brown Center Chalkboard launched in January 2013 as a weekly series of new analyses of policy, research, and practice relevant to U.S. education.
In July 2015, the Chalkboard was re-launched as a Brookings blog in order to offer more frequent, timely, and diverse content. Contributors to both the original paper series and current blog are committed to bringing evidence to bear on the debates around education policy in America.