Scheduled Time: Tue, May 4, 12:25pm–1:55pm Building/Room: Colorado Convention Center / Room 205
Title Displayed in Event Calendar: Online Learning Research: Improving the Status Quo
Discussant: Bernadette Adams Yates (U.S. Department of Education)
Abstract: The use of online learning among K-12 populations is growing rapidly, yet rigorous research in this area is only recently emerging. This session will present findings from three important new studies of online learning and specifically address implications for the K-12 education system. Two of the studies are meta-analyses of the online learning literature, and the third study uses regression techniques to compare online learning to learning in traditional classrooms. In addition, researchers will highlight the limitations of currently available data and focus on efforts to improve data quality through the lens of the Virtual School Clearinghouse and efforts underway at the Missouri Virtual School.
Objectives of the session. This session has three objectives: 1) to discuss findings from three recent, rigorous studies addressing the efficacy of online learning for K-12 populations; 2) to examine current data limitations and their impact on progress in the field of online learning research; and 3) to identify data elements required to advance research and to recommend data elements that should be collected by districts, states, and online learning providers.
Overview of the presentation. Two different meta-analyses addressing online learning will be presented. One found positive effects for online learning in controlled studies conducted between 1996 and 2008, but only 5 of the 51 effects were obtained with K-12 students, and many of the studies had shortcomings that make them a poor guide for practice. Drawing on a second meta-analysis, we will present a methodology for comparing online/DE treatments with other online/DE treatments as a step beyond classroom comparative studies. A regression analysis using administrative data to compare outcomes for high school students taking courses online with those of students learning face-to-face will also be presented. Prior to conducting this analysis, researchers conducted an extensive search for adequate data in state systems but found only one state that could provide sufficient data for the study. These presentations will be followed by information about the Virtual School Clearinghouse, a collaborative research project that provides virtual schools with analytic tools and metrics vital for school improvement. Finally, Missouri Virtual School data systems will be discussed, including lessons learned and their applicability to other virtual schools.
Scholarly Significance. Research on online learning, particularly in K-12 populations, is sparse. To help address this gap, this session will present findings from three recent studies. Participants will also have an opportunity to discuss common data needs across projects, as well as efforts to improve data quality. The session has important implications for the design of state and district data systems, which are particularly important at a time when the federal government is investing considerable sums to improve the utility of these systems. Without efforts to improve the availability of high-quality data on online learning environments, researchers and practitioners will miss opportunities to maximize investments in online learning.
Structure of the session. Speakers will make 10-minute presentations, each followed by up to 5 minutes of question and answer. The discussant will speak for 5 minutes and then moderate a discussion among participants for the remaining time. The session will begin with the two meta-analyses, with a focus on both their relevance to K-12 populations and the limitations of the available data. A quantitative study comparing online learning to face-to-face classes will follow, with an emphasis on both the results of the analysis and the limitations of the available data. These three presentations will be followed by an overview of the Virtual School Clearinghouse, an effort to improve data quality and analysis among virtual schools, with the Missouri Virtual School serving as a leading example of how to implement the required data systems.
Limitations of Experimental and Quasi-Experimental Studies as a Guide to Practice in Online Learning
Authors: *Barbara M. Means (SRI International)
Robert F. Murphy (SRI International)
Yukie Toyama (SRI International)
Abstract: A meta-analysis of experimental and quasi-experimental studies contrasting online and face-to-face instruction investigated the relative effectiveness of the two approaches and sought to identify aspects of online learning implementation associated with more positive learning outcomes. Perspective: By limiting studies in the meta-analysis to those with rigorous designs that used an objective measure of student learning, this investigation sought to use research as a basis for developing guidance for practice. Data Sources: Computerized searches of online research databases for the years 1996 through 2008, supplemented by a review of citations in prior meta-analyses of distance learning and a manual search of the last three years of key journals, returned 1,132 abstracts from studies of online learning. From these, 99 online learning research studies were identified that compared online (either entirely online or a blended condition with both online and face-to-face components) and face-to-face learning conditions and used an experimental or quasi-experimental design with an objective measure of student learning. Methods: Of the 99 studies comparing online and face-to-face conditions, 46 provided sufficient data to compute or estimate 51 independent effect sizes. After a test for homogeneity of effects found significant variability in the effect sizes for the different online learning studies, subsequent analyses focused on uncovering moderator variables that could explain the differences in outcomes. Results and Conclusions: Across the 51 individual study effects, the mean difference between online and face-to-face conditions was positive and statistically significant (p < .01), but only 5 of the effects came from studies of K-12 learners. The size and direction of the difference between online and face-to-face conditions varied significantly across studies.
The attempt to identify implementation variables that moderated learning effects was impeded by the failure of studies contrasting learning outcomes for online and face-to-face instruction to include information about many key aspects of implementation. On average, a given implementation variable was documented in just over two-thirds of the studies in the meta-analysis. Of the 13 implementation practices investigated, only two achieved statistical significance: (a) the use of a blended rather than a purely online approach and (b) the expansion of time on task for online learners. Significant effects might have been found for other practices if they had been documented in more studies. Scholarly Significance of the Study: Although the research designs used by the studies in the meta-analysis were strong (i.e., experimental or controlled quasi-experimental), the study corpus provided little guidance with respect to best practices in online learning because so many studies failed to document basic aspects of implementation. Moreover, many of the studies had weaknesses such as small sample sizes; failure to report retention rates for students in the conditions being contrasted; and potential bias stemming from the authors’ dual roles as experimenters and instructors. These findings illustrate the importance of employing multiple methodological approaches in order to derive guidance for educational practice.
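The homogeneity test and moderator search described above follow standard fixed-effect meta-analytic procedure. As a minimal sketch (illustrative only, not the authors' actual analysis code), the inverse-variance weighted mean effect and Cochran's Q statistic can be computed as:

```python
import math

def fixed_effect_summary(effects, variances):
    """Inverse-variance weighted mean effect, its standard error,
    and Cochran's Q homogeneity statistic.

    A significant Q (compared to a chi-square with k-1 df) indicates
    heterogeneous effect sizes, motivating a search for moderators.
    """
    weights = [1.0 / v for v in variances]
    mean = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    q = sum(w * (d - mean) ** 2 for w, d in zip(weights, effects))
    se = math.sqrt(1.0 / sum(weights))
    return mean, se, q
```

For example, two hypothetical effect sizes of 0.2 and 0.4 with equal variances of 0.01 yield a weighted mean of 0.3 and Q = 2.0.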
Advancing a Methodology for Comparing Alternative Distance Education Designs
Author: *Robert M. Bernard (Concordia University)
Abstract: This presentation has two interrelated parts: 1) a description of a recently published meta-analysis of three interaction treatments in distance education (Bernard, Abrami, Borokhovski, Wade, et al., 2009); and 2) a discussion of the methodology used in this study to address between-treatment contrasts (i.e., DE interaction treatment 1 vs. DE interaction treatment 2). This is a meta-analysis of the experimental literature of distance education (DE) comparing different types of interaction treatments (ITs) with other DE instructional treatments. ITs are the instructional and/or media conditions designed into DE courses that are intended to facilitate student-student (SS), student-teacher (ST), or student-content (SC) interactions. Seventy-four DE vs. DE studies that contained at least one IT were included in the meta-analysis, yielding 74 achievement effects. The effect size valences were structured so that the interaction treatment (IT), or the stronger IT (in the case of two ITs), served as the experimental condition and the other treatment as the control condition. Effects were categorized as SS, ST, or SC. After adjustment for methodological quality, the overall weighted average effect size for achievement was 0.39, and the average effect size was heterogeneous. Overall, the results supported the importance of the three types of ITs. In addition, the strength of ITs was found to be associated with increased achievement outcomes. A stronger association between strength and achievement was found for asynchronous DE courses than for courses containing mediated synchronous or face-to-face interaction. The results are interpreted in terms of the increased cognitive engagement that is presumed to be promoted by strengthening ITs in distance education courses.
The methodology for addressing interaction treatments involved developing a theoretical framework and rubric for establishing the valence (i.e., +/–) of the treatment distinctions within categories of interaction treatments—student-student, student-teacher and student-content interaction. Examples of this will be given. In addition, a methodology was developed for assessing the relative interaction strength of each treatment pair and for judging inter-rater reliability. Studies were coded for low, medium or high interaction potential, and these were subsequently used to judge the effects of the strength of categories of interaction treatments and the strength of combinations of categories. Examples of adaptations of this methodology to other questions will also be described (Schmid, Bernard, Borokhovski, Tamim et al., 2009).
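The valence convention described above, in which the stronger interaction treatment is coded as the experimental condition, can be sketched with a standard standardized mean difference. This is a simplified illustration under assumed inputs, not the study's actual coding software:

```python
import math

def signed_effect_size(stronger, weaker):
    """Cohen's d with the stronger interaction treatment coded as the
    experimental condition, so a positive valence favors the stronger
    treatment. Each argument is a (mean, sd, n) tuple for one group.
    """
    (m1, s1, n1), (m2, s2, n2) = stronger, weaker
    pooled_sd = math.sqrt(
        ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    )
    return (m1 - m2) / pooled_sd
```

With hypothetical group statistics of (10, 2, 10) for the stronger treatment and (8, 2, 10) for the weaker one, the signed effect is +1.0, favoring the stronger treatment.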
References: Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, C. A., Tamim, R., Surkes, M. A., & Bethel, E. C. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research. Prepublished July 6, 2009. doi:10.3102/0034654309333844. Schmid, R. F., Bernard, R. M., Borokhovski, E., Tamim, R., Abrami, P. C., Wade, C. A., Surkes, M. A., & Lowerison, G. (2009). Technology's effect on achievement in higher education: A Stage I meta-analysis of classroom applications. Journal of Computing in Higher Education, 21, 95-109. doi:10.1007/s12528-009-9021-8
Using Educational Data Systems to Support Policy-Relevant Online Learning Research
Authors: *Marianne F. Bakia (SRI International)
Kyra Caspary (SRI International)
Abstract: This study conducted quantitative regression analysis on student-level data to address the lack of rigorous, quantitative studies comparing learning gains by K-12 students in online courses to those in traditional, face-to-face courses. Theoretical framework. Regression analysis statistically controls for variance in outcomes associated with student, teacher, and instructional characteristics in order to estimate the effects of education or of variations in educational characteristics (Coleman, 1966; Burtless, 1996). Methods. Characteristics of students taking online and face-to-face versions of two courses required for graduation in the state of Florida were compared. Students enrolled with the Florida Virtual School (FLVS) were matched with students in traditional schools using propensity score matching on several variables, such as grade, gender, free and reduced-price lunch status, and previous course failures. These variables were also used as independent variables in equations using three different measures of student achievement: 1) course passing, 2) course grades, and 3) Stanford 10 scores from spring 2007. The null hypothesis was that the mean or proportion is the same for online and face-to-face students. Data sources. We constructed a database using student-level data from two sources: 1) the Florida Virtual School (FLVS) and 2) the Florida Education Data Warehouse (EDW). Florida appeared to be the only state able to provide adequate data for the analysis. Other states either did not track the online status of a course, had insufficient numbers of online enrollments, or could not provide comparable data for both face-to-face and online students. Results. Despite the relatively large-scale operation of the Florida Virtual School, students who enroll in online courses differ from their peers in significant ways.
Although results varied somewhat by course, students in online courses tended to be more affluent, less racially diverse, and from better-performing schools than the population at large. However, even when a carefully constructed matched sample of online students was used to compare the outcomes of students enrolled online to those enrolled in conventional courses, online students performed consistently better across different outcome measures. Several limitations of the data are notable: 1) the Florida system does not provide a “point in time” GPA, which limits researchers’ ability to control for prior achievement; 2) Florida law makes it difficult, if not impossible, to relate teacher characteristics to student academic outcomes, even if the data are anonymized; and 3) the Florida system uses a four-point scale to measure student academic achievement. Scholarly Significance. The results from this study contribute to the relatively sparse literature examining student learning in online environments (Means, Murphy, Toyama, Bakia, & Jones, 2009). In addition, the search for adequate data highlights important limitations of the data currently available in most systems across the country. With relatively small changes to district and state data systems, researchers would have access to rich, policy-relevant data. Without such adjustments, construction of adequate data sets will require intensive time and financial investments, limiting the quality and quantity of future research in this area.
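The matching step described above can be illustrated with a greedy 1:1 nearest-neighbor match on estimated propensity scores. This is a simplified stand-in for the study's actual matching implementation, with hypothetical scores:

```python
def greedy_nearest_neighbor_match(treated_scores, control_scores):
    """Match each treated unit to the unused control unit with the
    closest propensity score (1:1, without replacement).

    Returns a list of (treated_index, control_index) pairs.
    """
    available = dict(enumerate(control_scores))
    pairs = []
    for t_idx, t_score in enumerate(treated_scores):
        if not available:
            break  # control pool exhausted
        c_idx = min(available, key=lambda c: abs(available[c] - t_score))
        pairs.append((t_idx, c_idx))
        del available[c_idx]
    return pairs
```

For example, treated scores [0.50, 0.20] against control scores [0.10, 0.48, 0.90] pair the first treated student with the 0.48 control and the second with the 0.10 control. In practice, a caliper (maximum allowable score distance) would typically be imposed to avoid poor matches.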
Data for Understanding Virtual School Practice
Author: *Cathy Cavanaugh (University of Florida)
Abstract: The purpose of this presentation is to provide an overview of the structures and uses of data systems designed to inform practice in virtual schools with regard to leadership, pedagogy, course design, and policy. The outcomes and implications reported in this presentation have resulted from a three-year program with virtual schools in 20 U.S. states. Perspective. Virtual schools, because they provide instruction via online learning management systems, have an unprecedented opportunity to collect and interpret data at the student, teacher, course, program, and school levels to guide practice. When school data are joined with student and teacher demographic data, human resource data, and state achievement data, powerful patterns in teaching and learning can be identified. Most virtual schools have data systems that allow descriptive “what” questions to be asked and answered, but few have the systems and analysis methods in place to ask and answer interpretive “why” questions that move practice forward in substantial ways. Methods and Sources. An extensive catalog has been developed for collecting over 100 data elements from virtual schools. The data elements were drawn from the National Center for Education Statistics Common Core of Data and from the codebooks of recent meta-analyses of K-12 online learning (Cavanaugh, 2001; Cavanaugh et al., 2004). Virtual schools’ data sets were collected in an online data system from which schools could query their data and create reports. Results. Virtual schools vary widely in their definitions and data collection methods. For example, the definitions of a course completion and of a successful student depend on the grace period used by the school, after which a student is considered enrolled in a course, and on the criteria for success. No standards have been adopted for such fundamental issues in the U.S. virtual schools community.
Therefore, each school’s data system is unique and analyses based on those data are meaningful over time relative to the school’s specific context. While data across virtual schools cannot be easily merged and conclusions based on data from virtual schools cannot be generalized, each school is able to make decisions based on analysis of very fine-grained data. Significance. Uniformity is needed in the virtual schools community around basic definitions in order to learn across school environments. Millions of U.S. students and thousands of teachers, in numbers growing rapidly each year, participate in virtual schooling. Gaps in virtual school data systems mean losses of knowledge. However, individual virtual school data systems and their uses of data can be models for K-12 education nationally because of the scope of data collected and ten-year history of informed practice based on the data.
References: Cavanaugh, C. (2001). The Effectiveness of Interactive Distance Education Technologies in K-12 Learning: A Meta-Analysis. International Journal of Educational Telecommunications, 7(1), 73-88. Cavanaugh, C., Blomeyer, R., Gillan, K., Kromrey, J., & Hess, M. (2004). The Effects of Distance Education Systems on K-12 Student Outcomes: A Meta-Analysis. Naperville, IL: North Central Regional Educational Laboratory.
Structure of Virtual School Data Systems for Informing Practice
Authors: *Thomas A. Clark (TA Consulting)
Cathy Cavanaugh (University of Florida)
Abstract: The purpose of this presentation is to highlight best practice in the design and use of virtual school data systems to inform practice. The outcomes and implications reported in this presentation are based on a two-year state virtual school evaluation process. Perspective. A state-run virtual school offers online learning to children in grades K through 12 residing in the state. As a public school district operated by law under the authority of the state education agency, it has full access to disaggregated student record data at the school and state levels. The state mandates specific demographic and performance data, the school collects additional achievement and other data, and the learning management system captures course behavior data. When the resulting data are systematically collected and linked, important questions can be asked and answered that illuminate specific strengths and needs in the school. Deliberate attention to data systems in the early stages of a virtual school’s development is likely to benefit school leaders and teachers through efficient and timely access to performance outcomes at the course, department, and school levels (Smith, Clark & Blomeyer, 2005). Methods. One key evaluation question concerned success factors in K-12 online learning. Data were extracted from the school’s data systems during its first year. The data included over 14,000 course enrollments and associated information related to the course, teacher, student, learning management system logs, and grades. Hierarchical linear modeling was used to investigate the influence on student academic achievement of the time students spent in the learning management system (LMS), the number of LMS logins, teacher comments, participation in free or reduced-price lunch programs, student status in the virtual school (full-time or part-time), race/ethnicity, and grade level in the physical school the student attends.
A variety of analyses pertinent to program evaluation or educational research are feasible with the available data. Results. The design of the school’s data systems enabled analysis of data across all course instances in relation to a wide range of factors with potential impact on student performance in the courses. Several factors were found to have a significant influence on student achievement, and these factors varied depending on the level and content area of the course. This insight into the interplay of multiple factors has supported decision-making within the school. Significance. Given the dearth of research on success factors in K-12 online learning environments for high-enrollment courses, this study informs researchers, educators, course designers, online program administrators, policy makers, and classroom-based educators. The investigation provides a deeper understanding of success in K-12 virtual learning environments by providing a foundation for the decision-making process in virtual schools with respect to the improvement of course design and school policy.
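Hierarchical linear modeling accounts for students nested within courses; as a much simpler illustration of the regression machinery involved (a single-level ordinary least squares fit with one predictor, not the multilevel model the study actually used, and with hypothetical data), one could fit:

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x, returning (a, b).

    A single-level stand-in for illustration; the evaluation itself
    used hierarchical linear modeling to respect course-level nesting.
    """
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Slope: covariance of x and y over variance of x.
    b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
        sum((xi - mean_x) ** 2 for xi in x)
    a = mean_y - b * mean_x
    return a, b
```

For instance, fitting hypothetical LMS login counts [1, 2, 3, 4] against course grades [3, 5, 7, 9] recovers an intercept of 1 and a slope of 2.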
References: Smith, R., Clark, T. & Blomeyer, R. (2005). A Synthesis of New Research on K–12 Online Learning. Naperville, IL: North Central Regional Educational Laboratory.