A look at student performance during the COVID-19 pandemic

Joseph Cavanaugh (Wright State University – Lake Campus, Celina, Ohio, USA)
Stephen Jacquemin (Wright State University – Lake Campus, Celina, Ohio, USA)
Christine Junker (Wright State University – Lake Campus, Celina, Ohio, USA)

Quality Assurance in Education

ISSN: 0968-4883

Article publication date: 20 June 2022

Issue publication date: 10 January 2023


Abstract

Purpose

This study aims to use publicly available, self-reported student assessment data from the period when there was an abrupt change in instructional method at the start of the COVID-19 pandemic to assess potential differences in performance as a result of course delivery mode.

Design/methodology/approach

A general linear model using 837 student evaluations from 191 US public higher education institutions investigates the impact on student performance and how performance was related to a number of covariates, namely, the instructor's online teaching experience prior to the shutdowns, discipline of study and size of institution.

Findings

The analysis finds an overall grade point average (GPA) increase of 0.10 (out of 4.0) associated with the shift away from face-to-face instruction. In exploring potential covariates, only institutional size was significant in explaining this increase in GPA. This supports the notion that, despite the hardships inherent in the abrupt switch to online education across the country, student grades as a whole did not suffer.

Research limitations/implications

The data were self-reported, and GPA is an imperfect measure of student learning. Despite this, GPA remains relevant because it is highly correlated with student satisfaction, retention and matriculation.

Practical implications

This study suggests that the rapid transition to online instruction did not negatively impact student performance and may have marginally increased these marks. These findings were cross-disciplinary and not influenced by the instructor’s prior online teaching experience. These findings support the idea that institutions and instructors should be more willing to use a variety of delivery modes going forward.

Originality/value

The data set used is uniquely large and varied in the number of institutions, professors, students and disciplines represented. The COVID-19-induced transition from largely in-class instruction to mostly online or remote instruction allowed for a natural experiment that eliminates the sample selection problem associated with most other instructional method comparison studies.


Citation

Cavanaugh, J., Jacquemin, S. and Junker, C. (2023), "A look at student performance during the COVID-19 pandemic", Quality Assurance in Education, Vol. 31 No. 1, pp. 33-43. https://doi.org/10.1108/QAE-01-2022-0008

Publisher: Emerald Publishing Limited

Copyright © 2022, Emerald Publishing Limited


Introduction

Many factors affect student performance in higher education. One particularly controversial issue is how the format of instruction plays a role. Although the distinction between online and in-class instruction is increasingly less clear (giving rise to blended learning as a name for the popular combination of these two methods), most students and faculty recognize that there are significant differences in teaching approaches between in-class and online or remote instruction methods. While many studies have explored the impact of instructional method on student performance and learning (Bernard et al., 2004; Russell et al., 2016; Tratnik et al., 2019), the forced conversion to online instruction during the COVID-19 pandemic makes this issue particularly relevant and important, as universities across the country decide how best to teach students going forward. The COVID-19 pandemic also provides a unique natural experiment that allows for a direct comparison of student performance in in-class vs online courses. Because many students who would not have opted into an online class were suddenly forced to take one, this situation offers a less biased comparison between in-class courses (prior to COVID-19) and online courses (during COVID-19). Given the abrupt and recent timing of this transition, few studies have attempted to measure the impact this event has had on student performance. In addition, other studies comparing in-class to online performance have been limited by the samples used; typically, the data are limited to a specific institution, course of study or instructor. The data used in this study were obtained from RateMyProfessors.com, which provides a robust sample of different institutions, course subjects, instructors and geographical locations. This study directly investigates US higher education student course performance during the beginning of the COVID-19 pandemic and provides a broader perspective on how instructional method impacts student performance.

The objective of this study was to assess changes in students' self-reported grades associated with the abrupt shift in delivery mode during the COVID-19 pandemic in Spring 2020. More specifically, this study uses a large-scale data set to disentangle whether differences in student grades varied by institution size (2-year vs 4-year), academic discipline (Business, Engineering and Mathematics, Humanities, Natural Sciences, Social Sciences), instructor experience teaching online courses (No, Yes) or any combination of the above.

Literature review

Sudden transition to remote/online learning

The onset of the pandemic brought about an abrupt switch to online/remote instruction for students and faculty in the vast majority of US higher education courses. Lederman (2021) reports that the COVID-19 pandemic brought about a 93% increase in distance education enrollment, and Entangled Solutions provides a list that delineates how over 4,200 US higher education institutions were impacted by COVID-19 (www.entangled.solutions/coronavirus-he/). For those students who were already enrolled in online courses, the transition to online/remote learning would have represented near zero change, but for those students enrolled in face-to-face courses, this shift resulted in changes to everything from delivery mode to assessment procedures. Articles that have been published thus far on the effects of COVID-19 on learning have focused on faculty and student responses, as well as the achievement of learning outcomes during the shift to online and remote learning (Shim and Lee, 2020; Guo, 2020; Akram et al., 2021; Bergiel et al., 2021; Mamutovic, 2021; Otts et al., 2021). These studies highlighted instructor and class expectations being reduced to accommodate this sudden shift. In addition, these works highlight that there may be a clear distinction between student performance and student learning. However, these recent works have not yet looked at cross-institutional grade-based learning outcomes.

Higher performance in online courses

A considerable number of studies compare the performance of students learning in an online format to those learning in an in-class format. Studies that show a correlation between online instruction and higher grades tend to look at one subject area or institution, with students who self-select into either in-class or online instruction. For example, Sokout and Usagawa (2021) investigated performance in four courses taught in-class compared to six courses taught using some degree of online instruction. Students taught using online instruction performed significantly better as measured by their final scores. Soffer and Nachmias (2018) compared three courses taught online and in-class and found that student grades were higher in the online courses, with no difference in completion rates. Extending this, Bolliger and Halupa (2018) looked at students taking courses at three private universities in largely healthcare-related disciplines and reported that students perceive more progress towards learning goals and success taking online courses. Dutton et al. (2001) investigated students taking an online version of a computer language course and found that online students had higher performance outcomes than students taking the lecture version, with no significant difference in dropout rate after controlling for effort and maturity. A number of compilation studies also find higher performance for online course formats. Tamim et al. (2011) evaluated 25 meta-studies that included a variety of computer-based teaching methods (such as hybrid, blended and Web-based instruction) and found a small but statistically significant performance advantage for technology-enhanced teaching over traditional teaching. Nguyen (2015) found that 92% of comparison studies show that online/remote education is at least as effective as traditional education. However, the selection bias and lack of rigorous methodology in many of these studies lead to questions about the meaningfulness of these findings.

Higher performance in in-class courses

Other studies, however, have suggested that in-class courses are correlated with higher grades and higher retention rates compared to online courses. For example, Tratnik et al. (2019) found that students in a business English class report learning more in an in-class setting, compared to an online setting. Similarly, Bir (2019) finds academic performance is higher in an in-class engineering course compared to an online engineering course. Faux and Black-Hughes (2000) found higher performance and student satisfaction with a social work history course taken in-class. Likewise, Hurlbut (2018) found higher performance for students taking a teacher education course in-class versus those taking an online section. Specific to the recent online pandemic shift, Guo (2020) found a decrease in performance in remote instruction during COVID-19 in a physics course compared to an in-class version of the course. These concerns raise significant questions about retention rates, as higher student performance relates positively to satisfaction and matriculation. Numerous studies have also found retention rates are higher for in-class courses compared to online courses (Park and Choi, 2009; Terry, 2001; Yang, 2017).

No difference in performance

While some studies find that in-class instruction is superior or vice versa, the number of studies finding no difference in performance between online and in-class instruction is overwhelming. For example, Russell (1999) suggests, based on an analysis of 355 papers, that student grades are similar in online and in-class instruction. This book led to the online database site https://detaresearch.org/research-support/no-significant-difference/ where researchers are invited to post their findings. Hundreds of papers report no difference (or positive performance) for online compared to in-class courses. "What's the Difference?", a report from the Institute for Higher Education Policy (1999), found that the vast majority of research has found similar learning outcomes for students taking online compared to in-class courses. Specifically, online students earn grades or test scores similar to those of students receiving classroom-based instruction. However, the report also notes that the results of much of this research are likely questionable due to a number of underlying problems, including an emphasis on individual courses (subjects, student populations, institutions), small samples and non-randomly selected subjects. Examples of single course comparisons include Bergeler and Read (2021), who found that students taking a physics course performed equally well in the online compared to the in-class sections. The students also reported greater satisfaction with the online version of the course. Similarly, Jafar and Sitther (2021) compared performance in an Introductory Anatomy and Physiology course taught in both in-class and hybrid online formats. Student performance in the two formats was not statistically different, but students evaluated the hybrid version more positively. Leshchenko and Bezlutska (2021) found that younger students (less than 30 years of age) in a maritime training course reported superior learning outcomes when taking the course online, while older students (over 30 years of age) performed better with traditional in-class learning. Kalpokaite and Radivojevic (2020) found performance outcomes were similar for students taking either the online or the in-class format of a quantitative data analysis course. In a holistic look at the literature investigating differences in performance, Castro and Tumibay (2021) used a meta-analysis of 50 articles and found that online learning is at least as effective as in-class learning. They note that there are pros and cons to each method, and that it is critical to offer well-designed and effectively delivered online courses to ensure high student performance.

Sample selectivity, student performance and student evaluations

Sample selectivity may underpin a good deal of past research, as students tend to self-select into either in-class or online courses. As a result, differences may be due to correlation rather than causation. For example, documented higher performance in online courses may not be due to the course format, but rather because higher performing students are more likely to enroll in online courses. In other words, if "A" students are more likely to take online courses, then this may explain the higher student performance in online courses. Some studies have recognized this issue. Although not directly accounting for this problem, Shea and Bidjerano (2014) do consider it. Their study used a national survey of over 16,000 students to look at completion rates for students who took online courses their freshman year. They found that students taking online courses are more likely to complete their degree. Since their study also found that students who took online courses were less academically prepared than students taking in-class courses, their finding is viewed as not suffering from a self-selection or sample selectivity bias (at least to the degree that any self-selection bias would result in higher measured performance in the online courses). There are also a few studies that have randomly assigned students to sections. These studies have generally not found significant differences in student performance between formats. Unfortunately, they use small sample sizes of fewer than 40 students and/or are specific to a particular subject (Bowen et al., 2012; Mentzer et al., 2007; Olson, 2002; Poirier and Feldman, 2004). There are also statistical techniques that can be used to account for the different characteristics students might have.

Methodology

In this study, student performance data came from Rate My Professors student evaluations. Rate My Professors provides reviews of over 1.3 million professors at 7,000 US schools. RateMyProfessors.com is a freely available source of student evaluation data that has been used in a number of studies (Rehbock et al., 2021; Lee and Deale, 2019; Chiu et al., 2019; Silva et al., 2008). For this study, over 190 US public higher education institutions were randomly selected, yielding over 830 student evaluations. Because only US schools are included in Rate My Professors, institutions outside of the USA were not included in this study. See Cavanaugh et al. (2022) for a full description of the tabular data set, analytical methodologies and discussion regarding the often problematic nature of self-reported grades, evaluations, etc. from any source, including RateMyProfessors. Though some students likely reported incorrect grades, there is no reason to assume that the student evaluations would be more or less biased as a result of the shift to remote learning during the COVID-19 pandemic. For this reason, it is assumed that any change in the student evaluations over this time period would be due to the change from in-class to online formats resulting from the pandemic. The student evaluations posted to the RateMyProfessors.com website were used to compare student grades in courses taken pre-COVID-19 to those same courses taken during COVID-19. Specifically, student "performance" is reported by individual students answering the question "Grade Received." To be included in this study, evaluations were needed from instructors teaching courses that were taught both prior to and during the pandemic. Prior to the pandemic is defined as the two years prior to March 2020, and during the pandemic as the period from May 2020 until December 2020; the May 2020 cutoff corresponds to the timing by which US higher education had largely shifted online due to the rapid spread of the virus. This provided roughly two years of pre-pandemic evaluations and approximately one semester of during-pandemic evaluations. To avoid a bias regarding time, ratings older than a few years were not included. Note that the course grade was converted from a letter grade to a numerical scale where 4 is an A and 0 is an F. Additional descriptive information was compiled using institutional characteristics available online, as well as by searching keywords from past student evaluations (to gauge whether the instructor had previously taught online).
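
To make the data preparation just described concrete, the following sketch in R (the platform used for the statistical analysis reported below) illustrates how self-reported letter grades might be converted to the 4-point scale and paired pre- and during-pandemic by instructor. This is a minimal sketch only; all object and column names are hypothetical and not taken from the study's actual data set.

# Minimal sketch of the grade-coding and pairing step; all names are hypothetical.
grade_map <- c("A" = 4, "B" = 3, "C" = 2, "D" = 1, "F" = 0)

# 'evals' stands in for one row per Rate My Professors evaluation.
evals <- data.frame(
  instructor = c("inst1", "inst1", "inst2", "inst2"),
  grade      = c("B", "A", "C", "C"),
  date       = as.Date(c("2019-10-01", "2020-10-15", "2019-04-20", "2020-11-05"))
)

evals$gpa <- grade_map[as.character(evals$grade)]   # letter grade -> 0..4 scale
evals$period <- ifelse(evals$date < as.Date("2020-03-01"), "pre",
                ifelse(evals$date >= as.Date("2020-05-01"), "during", NA))

# Keep instructors evaluated in both windows and compute the per-instructor
# change (during minus pre) used as the response variable in the model below.
pre    <- aggregate(gpa ~ instructor, data = subset(evals, period == "pre"), FUN = mean)
during <- aggregate(gpa ~ instructor, data = subset(evals, period == "during"), FUN = mean)
change <- merge(pre, during, by = "instructor", suffixes = c("_pre", "_during"))
change$gpa_diff <- change$gpa_during - change$gpa_pre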

Grades were compared using a general linear model (with the response calculated as course grade post-May 2020 minus course grade pre-March 2020) to assess whether pandemic-related online shifts were responsible for grade changes or whether changes were dependent on academic discipline (Business, Engineering and Mathematics, Humanities, Natural Sciences, Social Sciences), institution size (2-year, 4-year), instructor experience teaching online courses (No, Yes) and/or any two-way interactions. This data collection and statistical approach addresses the basic question of whether the sudden switch to online/remote learning as a result of the pandemic affected student performance as measured by grades. The approach was taken to provide a linear and easily comparable pathway to understanding this variation across a large, multi-institution scale. General linear modeling analyses were conducted in the base stats package of the freely available and open-source statistical platform R (R Core Team, 2019).
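
As a rough sketch of how such a model could be fit in base R's stats package, the lines below assume a paired grade-change data frame like the one in the earlier sketch, with the three categorical predictors added under hypothetical column names.

# Main effects plus all two-way interactions among the three predictors.
# 'change' is assumed to carry one row per paired observation, with the
# grade difference and the three (hypothetical) categorical predictors.
change$inst_size  <- factor(change$inst_size)    # "2-year" vs "4-year"
change$discipline <- factor(change$discipline)   # Business, Eng/Math, Humanities, ...
change$online_exp <- factor(change$online_exp)   # prior online teaching: "No"/"Yes"

fit <- lm(gpa_diff ~ (inst_size + discipline + online_exp)^2, data = change)

anova(fit)    # sequential ANOVA table over the model terms
summary(fit)  # coefficient estimates for the grade-change response

In base R, the (inst_size + discipline + online_exp)^2 formula shorthand expands to all main effects plus every two-way interaction, mirroring the terms listed in Table 1, and anova() reports sequential sums of squares for those terms.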

Results

The data set included 518 observations in total, representing 191 different schools, collected over a three-month period spanning November 2020 to January 2021. Herein, a GPA difference of 0 indicates no change, whereas positive and negative changes in mean values indicate higher and lower scores, respectively, as a result of the teaching format change. Across these observations, an overall increase of 0.10 (out of 4.0) grade point average (GPA) units (3%, SD 1.24) in reported course grades associated with the sudden shift to online teaching was detected in the data set. Further general linear modeling of variation in grades pre to post COVID-19 found that, of all the predictors factored in, the only significant effect across the data set was institutional size. Specifically, two-year institutions exhibited a mean change of only 0.02 units, compared with four-year institutions, which exhibited an uptick of 0.20 units (Table 1, Figure 1). No effects were detected related to discipline, instructor experience with online teaching prior to COVID-19, or any interactions among variables.

Discussion

This study focuses on the performance of students during the COVID-19 pandemic and particularly on how grades differed between courses taught by instructors with prior online experience and courses taught by instructors new to teaching online. Also of interest was how performance differed among disciplines and between two-year and four-year institutions. A very small increase in grades was correlated with the switch to online learning. It seems likely that this does not reflect the quality of face-to-face or online learning but rather is a reflection of the circumstances.

The literature on the efficacy of in-class versus online learning is somewhat inconclusive, with some researchers finding increased performance with in-class learning and other researchers finding increased performance with online learning. The evidence that there is little difference in student performance based on modality is more convincing, and it is in keeping with our findings.

The pandemic itself likely led to increased stress, isolation and uncertainty for both students and instructors. While increased stress would typically lead to lower performance, the pandemic may have also provided a situation wherein students had more time to dedicate to their coursework in the absence of extracurricular activities, social engagements and outside employment. Furthering this notion may also be the fact that online learning is often more convenient than physically attending class. In addition, many instructors may have worried that the courses would inherently become more difficult by dint of being online and may have compensated by reducing assignments or the difficulty level of the course. Instructors might also have put additional effort into their teaching to make up for this difficult situation and to compensate students for the instructors' own inexperience teaching online. It is possible that the performance increase measured in this study is due to instructors not wanting to adversely impact student grades during this emergency transition and, whether intentionally or not, providing some leniency when assigning final grades.

Lastly, it is also possible that performance increased because there are benefits to online instruction over the in-class format, and many students were taking their first course in this format. This is consistent with students reporting that the switch to online/remote learning during the pandemic improved their academic performance, both by aiding in the development of digital competencies and by helping them to better organize daily time and activities (Mamutovic, 2021). Students also reported that they received more individual support from instructors when taking online courses during the pandemic (Brown et al., 2022). Performance improvements from online instruction also mimic what has happened in the private sector. According to The Conference Board (Erickson et al., 2022), firms throughout the country found that worker productivity remained the same or increased after the transition to work from home was mandated. Employers' concerns about working from home have been largely dispelled. Not surprisingly, many of these firms are providing employees with the option to work from home after the pandemic. The Conference Board reports that 88% of firms plan on hiring virtual workers going forward. Prior to COVID-19, about 8% of the US workforce worked primarily from home, and after COVID-19 it is estimated that 20%–50% of workers will work primarily from home.

Of the factors that were focused on in this study, only institution size (2-year vs 4-year) was found to significantly impact performance. Although student performance increased at both two- and four-year institutions, the increase at four-year institutions was ten times larger than that at two-year institutions (an increase of 0.20 compared to 0.02). There are a number of potential causes for this discrepancy. Perhaps four-year schools were able to provide a different level of institutional support during this abrupt transition; it is also possible that performance increased more dramatically because there were more students "newly" taking online courses at four-year institutions compared to two-year institutions (Levanon et al., 2020). If online courses provide benefits to student performance over in-class courses, then this is also consistent with a larger increase in performance at four-year over two-year institutions, because a larger percentage of students at two-year institutions had already experienced the performance increase from online instruction prior to COVID-19. Another explanation could be that two-year faculty members were more accustomed to offering online courses and had a better sense of how to negotiate the transition without impacting the overall difficulty of the courses.

Limitations and further research

This study included two- and four-year US public higher education institutions, so the results provided in this study may not be applicable to other institutions or countries. This study was made possible by the unique situation brought on by the pandemic; as such, there is some doubt about how applicable these findings are going forward. It is, for example, possible that any positive performance change was a one-time effect arising from a novel change in the teaching/learning situation. This study used self-reported student data, and the students who voluntarily provided this information may not be representative of the overall population of students. Some instructors/institutions did not alter their teaching during this time period, so for these cases any change in student performance would be due to factors other than a rapid transition to remote learning. Future studies could build upon this work by including private institutions and institutions from additional countries. The focus of this paper is on the impact of COVID-19 on student performance; other important future research might include how the pandemic altered teaching approaches, affected enrollment or changed attitudes (of students, faculty and administration) concerning remote/online learning.

Conclusion

This study provides a unique opportunity to investigate this issue without a sample selectivity bias (which would arise if the best students tend to take online courses) and uses a large sample that widely represents the US higher education population. The results suggest that student grades did not decline (and may have marginally increased) during the pandemic, which supports the literature finding either higher performance or no performance difference between instructional methods. Although there are undoubtedly many reasons for this increase, it is consistent with students benefiting (or at least not suffering) from online or remote instruction compared to in-class instruction. In addition, this study finds a larger increase in performance at four-year institutions compared to two-year institutions. This is consistent with the potential of online instruction to improve performance, as a larger percentage of students newly took online courses at four-year institutions during the pandemic. The fact that student performance did not decline during this period is both noteworthy and a testament to the efforts of faculty and students who persevered in this difficult time.

Figures

Figure 1. GPA differences (mean with confidence intervals) arranged by variable

Table 1. Model results predicting grade variation as a result of the COVID-19 online transition

Term DF SS MS F P
Institution size (2-year vs 4-year) 1 6.1 6.1 3.95 0.04
Academic discipline 4 9.4 2.4 1.53 0.19
Online experience of instructor 1 0.1 0.1 0.05 0.82
Size * Discipline 4 7.3 1.8 1.19 0.32
Size * Online experience 1 0.4 0.4 0.26 0.61
Discipline * Online experience 4 4.9 1.2 0.79 0.53
Error 498 769 1.5
Total 517 798

References

Akram, H., Aslam, S., Saleem, A. and Parveen, K. (2021), “The challenges of online teaching in COVID-19 pandemic: a case study of public universities in Karachi, Pakistan”, Journal Of Information Technology Education: Research, Vol. 20, pp. 263-282.

Bergiel, B.J., Bergiel, E.B. and Bergiel, B.J. (2021), “COVID-19 forced faculty to move from teaching face-to-face to online teaching fast: what are the advantages and disadvantages to faculty and students?”, International Journal of Education Research, Vol. 16 No. 1, pp. 81-96.

Bergeler, E. and Read, M. (2021), “Comparing learning outcomes and satisfaction of an online algebra-based physics course with a face-to-face course”, Journal Of Science Education And Technology, Vol. 30 No. 1, pp. 97-111, doi: 10.1007/s10956-020-09878-w.

Bernard, R., Abrami, P., Lou, Y., Borokhovski, E., Wade, A., Wozney, L. and Huang, B. (2004), “How does distance education compare with classroom instruction? A meta-analysis of the empirical literature”, Review Of Educational Research, Vol. 74 No. 3, pp. 379-439.

Bir, D. (2019), “Comparison of academic performance of students in online vs traditional engineering course”, European Journal Of Open, Distance And E-Learning, Vol. 22 No. 1, pp. 1-13, doi: 10.2478/eurodl-2019-0001.

Bolliger, D. and Halupa, C. (2018), “Online student perceptions of engagement, transactional distance, and outcomes”, Distance Education, Vol. 39 No. 3, pp. 299-316.

Bowen, W.G., Lack, K.A., Chingos, M. and Nygren, T.I. (2012), “Interactive learning online at public universities: evidence from randomized trials”, Ithaka S+R, doi: 10.18665/sr.22464.

Brown, T., Robinson, L., Gledhill, K., Yu, M., Isbel, S., Greber, C., Parsons, D. and Etherington, J. (2022), “‘Learning in and out of lockdown’: a comparison of two groups of undergraduate occupational therapy students’ engagement in online‐only and blended education approaches during the COVID‐19 pandemic”, Australian Occupational Therapy Journal, Vol. 69 No. 3, pp. 1-15, doi: 10.1111/1440-1630.12793.

Castro, M. and Tumibay, G. (2021), “A literature review: efficacy of online learning courses for higher education institution using meta-analysis”, Education And Information Technologies, Vol. 26 No. 2, pp. 1367-1385, doi: 10.1007/s10639-019-10027-z.

Cavanaugh, J., Jacquemin, S. and Junker, C. (2022), “Variation in student perceptions of higher education course quality and difficulty as a result of widespread implementation of online education during the COVID-19 pandemic”, Technology, Knowledge And Learning, doi: 10.1007/s10758-022-09596-9.

Chiu, Y.-L., Chen, K.-H., Hsu, Y.-T. and Wang, J.-N. (2019), “Understanding the perceived quality of professors’ teaching effectiveness in various disciplines: the moderating effects of teaching at top colleges”, Assessment And Evaluation In Higher Education, Vol. 44 No. 3, pp. 449-462, doi: 10.1080/02602938.2018.1520193.

Dutton, J., Dutton, M. and Perry, J. (2001), “Do online students perform as well as lecture students?”, Journal Of Engineering Education, Vol. 90 No. 1, pp. 131-136.

Erickson, R., Cohen, D. and Ray, R. (2022), “The reimagined workplace two years later”, available at: www.conference-board.org/topics/natural-disasters-pandemics/reimagined-workplace-two-years-later-2022 (accessed 18 May 2022).

Faux, T.L. and Black-Hughes, C. (2000), “A comparison of using the internet versus lectures to teach social work history”, Research On Social Work Practice, Vol. 10 No. 4, pp. 454-466.

Guo, S. (2020), “Synchronous versus asynchronous online teaching of physics during the COVID-19 pandemic”, Physics Education, Vol. 55 No. 6, pp. 1-9, doi: 10.1088/1361-6552/ABA1C5.

Hurlbut, R. (2018), “Online vs. traditional learning in teacher education: a comparison of student progress”, American Journal Of Distance Education, Vol. 32 No. 4, pp. 248-266.

Institute for Higher Education Policy (1999), What's the Difference: A Review of Contemporary Research on the Effectiveness of Distance Learning in Higher Education, Washington, DC: Author.

Jafar, S. and Sitther, V. (2021), “Comparison of student outcomes and evaluations in hybrid versus face-to-face anatomy and physiology I courses”, Journal Of College Science Teaching, Vol. 51 No. 7, pp. 58-66.

Kalpokaite, N. and Radivojevic, I. (2020), “Teaching qualitative data analysis software online: a comparison of face-to-face and e-learning ATLAS.Ti courses”, International Journal Of Research And Method In Education, Vol. 43 No. 3, pp. 296-310, doi: 10.1080/1743727X.2019.1687666.

Lederman, D. (2021), “Detailing last fall’s online enrollment surge”, Inside Higher Ed, September 16, available at: www.insidehighered.com/news/2021/09/16/new-data-offer-sense-how-covid-expanded-online-learning (accessed 1 June 2022).

Lee, S. and Deale, C.S. (2019), “Rapport, rigor, and rate my professor: students’ perceptions of hospitality and tourism professors”, Journal Of Teaching In Travel And Tourism, Vol. 19 No. 2, pp. 93-111.

Leshchenko, A. and Bezlutska, O. (2021), “Traditional vs online education in the maritime training system under COVID-19 pandemic: comparative analysis”, Pedagogy, Vol. 93 No. 7, pp. 86-95, doi: 10.53656/ped21-7s.07trad.

Levanon, G., Crofoot, E. and Steemers, F. (2020), “COVID-19's biggest legacy: remote work and its implications for the postpandemic labor market in the US”, The Conference Board Research Report, available at: https://conference-board.org/topics/remote-work/Remote-Work-COVID-19-Biggest-Legacy (accessed 1 June 2020).

Mamutovic, A. (2021), “Online teaching and student’s academic achievements”, ELearning And Software For Education, Vol. 1, pp. 210-217, doi: 10.12753/2066-026X-21-028.

Mentzer, G., Cryan, R. and Teclehaimanot, B. (2007), “Two peas in a pod? A comparison of FTF and web-based classrooms”, Journal Of Technology And Teacher Education, Vol. 15 No. 2, pp. 233-246.

Nguyen, T. (2015), “The effectiveness of online learning: beyond no significant difference and future horizons”, MERLOT Journal Of Online Learning And Teaching, Vol. 11 No. 2, pp. 309-319.

Olson, D. (2002), “A comparison of online and lecture methods for delivering the CS 1 course”, Journal Of Computing Sciences In Colleges, Vol. 18 No. 2, pp. 57-63.

Otts, P., Lobanova, Y., Bocharnikova, N., Panfilova, V. and Panfilov, A. (2021), “Modification of the role of a teacher under the conditions of distance learning”, International Journal Of Emerging Technologies In Learning (IJET), Vol. 16 No. 21, pp. 219-225.

Park, J.H. and Choi, H.J. (2009), “Factors influencing adult learners’ decision to drop out or persist in online learning”, Educational Technology And Society, Vol. 12 No. 4, pp. 207-217.

Poirier, C. and Feldman, R. (2004), “Teaching in cyberspace: online versus traditional instruction using a waiting-list experimental design”, Teaching Of Psychology, Vol. 31 No. 1, pp. 59-62, doi: 10.1207/s15328023top3101_11.

R Core Team (2019), “R: a language and environment for statistical computing”, The R Foundation for Statistical Computing, Vienna, available at: www.R-project.org (accessed 1 June 2022).

Rehbock, S., Verdorfer, A. and Knipfer, K. (2021), “Rate my professor: implicit leadership theories in academia”, Studies In Higher Education, Vol. 46 No. 8, pp. 1590-1602.

Russell, T. (1999), The No Significant Difference Phenomenon, NC State University, Raleigh, NC.

Russell, J., Van Horne, S., Ward, A., Sipola, M., Colombo, M. and Rocheford, M. (2016), “Large lecture transformation: adopting evidence‐based practices to increase student engagement and performance in an introductory science course”, Journal Of Geoscience Education, Vol. 64 No. 1, pp. 37-51.

Shea, P. and Bidjerano, T. (2014), “Does online learning impede degree completion? A national study of community college students”, Computers And Education, Vol. 75, pp. 103-111.

Shim, T. and Lee, S. (2020), “College students’ experience of emergency remote teaching due to COVID-19”, Children and Youth Services Review, Vol. 119, doi: 10.1016/j.childyouth.2020.105578.

Silva, K., Silva, F., Quinn, M., Draper, J., Cover, K. and Munoff, A. (2008), “Rate my professor: online evaluations of psychology instructors”, Teaching Of Psychology, Vol. 35 No. 2, pp. 71-80, doi: 10.1080/00986280801978434.

Soffer, T. and Nachmias, R. (2018), “Effectiveness of learning in online academic courses compared with face to face courses in higher education”, Journal Of Computer Assisted Learning, Vol. 34 No. 5, pp. 534-543, doi: 10.1111/JCAL.12258.

Sokout, H. and Usagawa, T. (2021), “Improving academic performance through blended learning: the case of Afghan higher education”, International Journal Of Emerging Technologies In Learning (IJET), Vol. 16 No. 11, pp. 104-120.

Tamim, R.M., Bernard, R.M., Borokhovski, E., Abrami, P.C. and Schmid, R.F. (2011), “What forty years of research says about the impact of technology on learning: a second-order meta-analysis and validation study”, Review Of Educational Research, Vol. 81 No. 1, pp. 4-28.

Terry, N. (2001), “Assessing enrollment and attrition rates for the online MBA”, The Journal (Technological Horizons In Education), Vol. 28 No. 7, p. 64.

Tratnik, A., Urh, M. and Jereb, E. (2019), “Student satisfaction with an online and face-to-face business English course in a higher education context”, Innovations In Education And Teaching International, Vol. 56 No. 1, pp. 36-45.

Yang, D., Baldwin, S. and Snelson, C. (2017), “Persistence factors revealed: students’ reflections on completing a fully online program”, Distance Education, Vol. 38 No. 1, pp. 23-36.

Corresponding author

Joseph Cavanaugh can be contacted at: joseph.cavanaugh@wright.edu
