Virtual School Meanderings

June 21, 2010

AERA 2010 And Virtual Schooling Round-Up

Well, earlier today I posted the last of the entries from the AERA 2010 conference.  As many of these entries were posted after the fact, but backdated to when they would have occurred, I suspect that those who follow this blog without the use of an RSS reader may have missed them.  As such, I wanted to post a listing of all of the entries here.

And all of these entries are accessible under the AERA 2010 tag.

May 4, 2010

AERA 2010 – Online Learning Research: Improving the Status Quo

Please note that I missed the early morning session by Robert Wilson on Reducing Summer Setback With Rural Middle School Students Who Participate in an Online Learning Community because I joined my wife on a tour of the US Mint in Denver. The first session on K-12 online learning that I had outlined for Tuesday at the AERA 2010 conference, and that I was actually able to attend, was:

Online Learning Research: Improving the Status Quo

Sponsor: Division C – Learning and Instruction
Section 7: Technology Research

Scheduled Time: Tue, May 4 – 12:25pm – 1:55pm Building/Room: Colorado Convention Center / Room 205
Title Displayed in Event Calendar: Online Learning Research: Improving the Status Quo

Discussant: Bernadette Adams Yates (U.S. Department of Education)

Abstract: The use of online learning among K-12 populations is growing rapidly, yet rigorous research in this area is only recently emerging. This session will present findings from three important new studies of online learning and specifically address implications for the K-12 education system. Two of the studies are meta-analyses of the online learning literature, and the third study uses regression techniques to compare online learning to learning in traditional classrooms. In addition, researchers will highlight the limitations of currently available data and focus on efforts to improve data quality through the lens of the Virtual School Clearinghouse and efforts underway at the Missouri Virtual School.

Session Summary

Objectives of the session. This session has three important objectives: 1) to discuss findings from three recent, rigorous studies addressing the efficacy of online learning for K-12 populations; 2) examine current data limitations and the impact of these limitations on progress in the field of online learning research, and 3) identify data elements required to advance research and recommend data elements that should be collected by districts, states, and online learning providers.
Overview of the presentation. Two different meta-analyses addressing online learning will be presented. One found positive effects for online learning in controlled studies conducted between 1996-2008, but only 5 of the 51 effects were obtained with K-12 students, and many of the studies had shortcomings that make them a poor guide for practice. Based on a second meta-analysis, we will address a methodology for comparing online/DE treatments with other online/DE treatments as a step beyond classroom comparative studies. A regression analysis using administrative data to compare outcomes for high school students taking courses online and face-to-face learning will also be presented. Prior to conducting this analysis, researchers conducted an extensive search for adequate data in state systems but found only one that could provide sufficient data to conduct the study. These presentations will be followed by information about the Virtual School Clearinghouse, a collaborative research project that provides virtual schools with analytic tools and metrics vital for school improvement. Finally, Missouri Virtual School data systems will be discussed including lessons learned and applicability to other virtual schools.
Scholarly Significance. Research on online learning, particularly in K-12 populations, is sparse. In order to address this need, this session will provide findings from three recent studies. Participants also will have an opportunity to discuss common data needs across projects in addition to efforts to improve data quality. The session has important implications for the design of state and district data systems, which are particularly important at a time when the federal government is investing considerable sums of money to improve the utility of these systems. Without efforts to improve the availability of high-quality data related to online learning environments, researchers and practitioners will miss opportunities to maximize investments in online learning.
Structure of the session. Speakers will make 10 minute presentations followed by up to 5 minutes for question and answer. The discussant will speak for 5 minutes and then moderate a discussion among participants for the remaining time. The session will begin with two meta-analyses, with a focus both on relevance to K-12 populations as well as the limitations of data available. A quantitative study comparing online learning to face to face classes will follow, with an emphasis both on the results of the analysis and the limitations of the available data. These three presentations will be followed by an overview of the Virtual School Clearinghouse, an effort to improve data quality and analysis among virtual schools, and use the Missouri Virtual School as a leading example of how to implement required data systems.

Limitations of Experimental and Quasi-Experimental Studies as a Guide to Practice in Online Learning

Authors: *Barbara M. Means (SRI International)
Robert F. Murphy (SRI International)
Yukie Toyama (SRI International)

Abstract: A meta-analysis of experimental and quasi-experimental studies contrasting online and face-to-face instruction investigated the relative effectiveness of the two approaches and sought to identify aspects of online learning implementation associated with more positive learning outcomes. Perspective: By limiting studies in the meta-analysis to those with rigorous designs that used an objective measure of student learning, this investigation sought to use research as a basis for developing guidance for practice. Data Sources: Computerized searches of online research databases for the years from 1996 through 2008 supplemented by a review of citations in prior meta-analyses of distance learning and a manual search of the last three years of key journals returned 1,132 abstracts from studies of online learning. From these, 99 online learning research studies were identified that compared online (either entirely online or a blended condition with both online and face-to-face components) and face-to-face learning conditions and used an experimental or quasi-experimental design with an objective measure of student learning. Methods: Of the 99 studies comparing online and face-to-face conditions, 46 provided sufficient data to compute or estimate 51 independent effect sizes. After a test for homogeneity of effects found significant variability in the effect sizes for the different online learning studies, subsequent analyses focused on uncovering moderator variables that could explain the differences in outcomes. Results and Conclusions: Across the 51 individual study effects, the mean difference between online and face-to-face conditions was positive and statistically significant ( p < .01 ) but only 5 of the effects came from studies of K-12 learners. The size and direction of the difference between online and face-to-face conditions varied significantly across studies. 
The attempt to identify implementation variables that moderated learning effects was impeded by the failure of studies contrasting learning outcomes for online and face-to-face instruction to include information about many key aspects of implementation. On average, a given implementation variable was documented in just over two-thirds of the studies in the meta-analysis. Of the 13 implementation practices investigated, only two achieved statistical significance: (a) the use of a blended rather than a purely online approach and (b) the expansion of time on task for online learners. Significant effects might have been found for other practices if they had been documented in more studies. Scholarly Significance of the Study: Although the research designs used by the studies in the meta-analysis were strong (i.e., experimental or controlled quasi-experimental), the study corpus provided little guidance with respect to best practices in online learning because so many studies failed to document basic aspects of implementation. Moreover, many of the studies had weaknesses such as small sample sizes; failure to report retention rates for students in the conditions being contrasted; and potential bias stemming from the authors’ dual roles as experimenters and instructors. These findings illustrate the importance of employing multiple methodological approaches in order to derive guidance for educational practice.

Advancing a Methodology for Comparing Alternative Distance Education Designs

Author: *Robert M. Bernard (Concordia University)

Abstract: This presentation has two interrelated parts: 1) the description of a recently published meta-analysis of three interaction treatments in distance education (Bernard, Abrami, Borokhovski, Wade et al. 2009); 2) a discussion of the methodology used in this study to address between-treatment contrasts (i.e., DE interaction treatment 1 vs. DE interaction treatment 2). This is a meta-analysis of the experimental literature of distance education (DE) comparing different types of interaction treatments (ITs) with other DE instructional treatments. ITs are the instructional and/or media conditions designed into DE courses, which are intended to facilitate student-student (SS), student-teacher (ST) or student-content (SC) interactions. Seventy-four DE vs. DE studies that contained at least one IT were included in the meta-analysis. These studies yielded 74 achievement effects. The effect size valences were structured so that the interaction treatment (IT), or the stronger IT (i.e., in the case of two ITs), served as the experimental condition and the other treatment, the control condition. Effects were categorized as SS, ST or SC. After adjustment for methodological quality, the overall weighted average effect size for achievement was 0.39. The average effect size was heterogeneous. Overall, the results supported the importance of the three types of ITs. As well, strength of ITs was found to be associated with increasing achievement outcomes. A strong association was found between strength and achievement for asynchronous DE courses compared to courses containing mediated synchronous or face-to-face interaction. The results are interpreted in terms of increased cognitive engagement that is presumed to be promoted by strengthening ITs in distance education courses.
The methodology for addressing interaction treatments involved developing a theoretical framework and rubric for establishing the valence (i.e., +/–) of the treatment distinctions within categories of interaction treatments—student-student, student-teacher and student-content interaction. Examples of this will be given. In addition, a methodology was developed for assessing the relative interaction strength of each treatment pair and for judging inter-rater reliability. Studies were coded for low, medium or high interaction potential, and these were subsequently used to judge the effects of the strength of categories of interaction treatments and the strength of combinations of categories. Examples of adaptations of this methodology to other questions will also be described (Schmid, Bernard, Borokhovski, Tamim et al., 2009).

References: Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, C. A., Tamim, R., Surkes, M. A., & Bethel, E. C. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research. Prepublished July 6, 2009. doi:10.3102/0034654309333844. Schmid, R. F., Bernard, R. M., Borokhovski, E., Tamim, R., Abrami, P. C., Wade, C. A., Surkes, M. A., & Lowerison, G. (2009). Technology’s effect on achievement in higher education: A Stage I meta-analysis of classroom applications. Journal of Computing in Higher Education, 21, 95–109. doi:10.1007/s12528-009-9021-8

Using Educational Data Systems to Support Policy-Relevant Online Learning Research

Authors: *Marianne F. Bakia (SRI International)
Kyra Caspary (SRI International)

Abstract: This study conducted quantitative regression analysis on student-level data to address the lack of rigorous, quantitative studies that compare learning gains by K-12 students in online courses to those in traditional, face-to-face courses. Theoretical framework. Regression analysis statistically controls for variance in outcomes associated with student, teacher and instructional characteristics in order to estimate the effects of education or variations in educational characteristics (Coleman, 1966; Burtless, 1996). Methods. Characteristics of students taking online and face-to-face versions of two courses required for graduation in the state of Florida were compared. Students enrolled with the Florida Virtual School (FLVS) were matched with students in traditional schools using propensity score matching on several variables such as grade, gender, free and reduced price lunch, previous course failures, etc. These were also used as independent variables in equations using three different measures of student achievement: 1) course passing, 2) course grades, and 3) Stanford 10 scores from spring 2007. The null hypothesis was that the mean or proportion is the same for online and face-to-face students. Data sources. We constructed a database using student-level data from two sources: 1) Florida Virtual School (FLVS), and 2) the Florida Education Data Warehouse (EDW). Florida appeared to be the only state able to provide adequate data for analysis. Other states either did not track the online status of a course, had insufficient numbers of online enrollments, or could not provide comparable data for both face-to-face and online students. Results. Despite the relatively large-scale operation of the Florida Virtual School, students who enroll in online courses differ from their peers in significant ways.
Although there was some variation of results by course, students in online courses tended to be more affluent, less racially diverse, and from better-performing schools than the population at large. However, even once a carefully constructed matched sample of online students was used to compare the outcomes of students enrolled online to those enrolled in conventional courses, online students consistently performed better across different outcome measures. Several limitations of the data are notable, such as 1) the Florida system does not provide a “point in time GPA,” which limits researchers’ ability to control for prior achievement, 2) Florida law makes it difficult, if not impossible, to relate teacher characteristics to student academic outcomes, even if the data are anonymized, and 3) the Florida system uses a four-point scale to measure student academic achievement. Scholarly Significance. The results from this study contribute to the relatively sparse literature examining student learning in online environments (Means, Murphy, Toyama, Bakia and Jones, 2009). In addition, the search for adequate data highlights important limitations of currently available data in most systems across the country. With relatively small changes in district and state data systems, researchers would have access to rich, policy-relevant data. Without such adjustments, construction of adequate data sets will require intensive time and financial investments and therefore limit the quality and quantity of future research in this area.

Data for Understanding Virtual School Practice

Author: *Cathy Cavanaugh (University of Florida)

Abstract: The purpose of this presentation is to provide an overview of the structures and uses of data systems designed to inform practice in virtual schools in regard to leadership, pedagogy, course design, and policy. The outcomes and implications reported in this presentation have resulted from a three-year program with virtual schools in 20 U.S. states. Perspective. Virtual schools, because they provide instruction via online learning management systems, have an unprecedented opportunity to collect and interpret data at student, teacher, course, program, and school levels to guide practice. When school data are joined with student and teacher demographic data, human resource data, and state achievement data, powerful patterns in teaching and learning can be identified. Most virtual schools have data systems that allow descriptive “what” questions to be asked and answered, but few have the systems and analysis methods in place to ask and answer interpretive “why” questions that move practice forward in substantial ways. Methods and Sources. An extensive catalog has been developed for collecting over 100 data elements from virtual schools. The data elements were drawn from the National Center for Educational Statistics Common Core of Data and from the codebooks of recent meta-analyses of K-12 online learning (Cavanaugh, 2001; Cavanaugh et al., 2004). Virtual schools’ data sets were collected in an online data system from which schools could query their data and create reports. Results. Virtual schools vary widely in their definitions and data collection methods. For example, the definitions of a course completion and of a successful student depend on the grace period used by the school after which a student is considered enrolled in a course and the criteria for success. No standards have been adopted for such fundamental issues in the U.S. virtual schools community. 
Therefore, each school’s data system is unique and analyses based on those data are meaningful over time relative to the school’s specific context. While data across virtual schools cannot be easily merged and conclusions based on data from virtual schools cannot be generalized, each school is able to make decisions based on analysis of very fine-grained data. Significance. Uniformity is needed in the virtual schools community around basic definitions in order to learn across school environments. Millions of U.S. students and thousands of teachers, in numbers growing rapidly each year, participate in virtual schooling. Gaps in virtual school data systems mean losses of knowledge. However, individual virtual school data systems and their uses of data can be models for K-12 education nationally because of the scope of data collected and ten-year history of informed practice based on the data.

References: Cavanaugh, C. (2001). The Effectiveness of Interactive Distance Education Technologies in K-12 Learning: A Meta-Analysis. International Journal of Educational Telecommunications, 7(1), 73-88. Cavanaugh, C., Blomeyer, B., Gillan, K., Kromrey, J., & Hess, M. (2004). The Effects of Distance Education Systems on K-12 Student Outcomes: A Meta-Analysis. Naperville, IL: North Central Regional Educational Laboratory.

Structure of Virtual School Data Systems for Informing Practice

Authors: *Thomas A. Clark (TA Consulting)
Cathy Cavanaugh (University of Florida)

Abstract: The purpose of this presentation is to highlight best practice in the design and use of virtual school data systems to inform practice. The outcomes and implications reported in this presentation are based on a two-year state virtual school evaluation process. Perspective. A state-run virtual school offers online learning to children in grades K through 12 residing in the state. As a public school district operated by law under the authority of the state education agency, it has full access to disaggregated student record data at the school and state level. The state mandates specific demographic and performance data, the school collects additional achievement and other data, and the learning management system captures course behavior data. When the resulting data are systematically collected and linked, important questions can be asked and answered that illuminate specific strengths and needs in the school. Deliberate attention to data systems in the early stages of a virtual school’s development is likely to benefit the school leaders and teachers through efficient and timely access to performance outcomes at course, department, and school levels (Smith, Clark & Blomeyer, 2005). Methods. One key evaluation question concerned success factors in K-12 online learning. Data were extracted from the school’s data systems during its first year. The data included over 14,000 course enrollments and associated information related to the course, teacher, student, learning management system logs, and grades. The influence on student academic achievement of the time students spent in the learning management system (LMS), the number of times they logged into the LMS, the number of teacher comments, participation in free or reduced price lunch programs, student status in the virtual school (full-time or part-time), race/ethnicity, and grade level in the physical school the student attends was investigated using hierarchical linear modeling. 
A variety of analyses pertinent to program evaluation or educational research are feasible with available data. Results. The design of the school’s data systems enabled analysis of data across all course instances related to a wide range of factors with potential impact on student performance in the courses. Several factors were found to have significant influence on student achievement, and these factors varied depending on the level and content area of the course. This insight into the interplay of multiple factors has supported decision-making within the school. Significance. Given the dearth of research on success factors in K-12 online learning environments for high-enrollment courses, this study informs researchers, educators, course designers, online program administrators, policy makers, and classroom-based educators. The investigation provides a deeper understanding of success in K-12 virtual learning environments by providing a foundation for the decision-making process in virtual schools with respect to the improvement of course design and school policy.

References: Smith, R., Clark, T. & Blomeyer, R. (2005). A Synthesis of New Research on K–12 Online Learning. Naperville, IL: North Central Regional Educational Laboratory.

Now I know that was a lot of quoted text from the program, but it provides good background and context for what each of the panelists was going to discuss.

The moderator, Marianne, indicated that the purpose of this session was to examine some of the empirical findings from recent studies and, also, to look at the difficulties or limitations with some of these methodologies.  Marianne also mentioned that the session would spend some time looking at ways to approach future research.

Barbara Means began the panel with their paper on “Limitations of Meta-Analyses of Experimental Studies as a Guide to Practices in Online Learning”. Their definition of online learning was constrained to instances where the majority of the learning took place over the Internet.  She began by outlining questions that she felt policymakers and practitioners ask:

  1. Does this innovation merit implementation?
  2. How should we implement this innovation?

Barbara then went on to describe their study, which began with a search of the literature from 1996 to July 2008 for studies of online learning that were experimental or quasi-experimental, controlled for pre-existing conditions, and reported an effect size.  Of the 1,132 articles they considered, only 99 used an experimental or quasi-experimental design and only 46 had the statistical information needed to calculate an effect size.  The result was a +0.24 average effect size in favour of the online condition. Note this study was not exclusive to the K-12 environment, so these results include both K-12 and adult learners.
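
The weighted average effect sizes and the homogeneity test mentioned in the abstract come from standard meta-analytic machinery. As a rough sketch of that machinery (the effect sizes and variances below are made up for illustration, not the study's 51 effects), a fixed-effect meta-analysis looks like this:

```python
def fixed_effect_meta(effects):
    """effects: list of (effect_size, variance) pairs, one per study.
    Returns the inverse-variance-weighted mean effect and the Q
    homogeneity statistic (a large Q suggests effects vary by study)."""
    weights = [1.0 / v for _, v in effects]
    mean = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
    q = sum(w * (d - mean) ** 2 for (d, _), w in zip(effects, weights))
    return mean, q

# Illustrative (hypothetical) study effects: (effect size, variance)
studies = [(0.35, 0.02), (0.14, 0.01), (0.30, 0.04)]
mean, q = fixed_effect_meta(studies)
print(round(mean, 3), round(q, 2))  # 0.223 1.64
```

More precise studies (smaller variances) pull the pooled estimate harder, which is why a handful of large studies can dominate a meta-analysis.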

They also tested 13 different practices as potential moderators of the online advantage over face-to-face classes.  It was found that the advantage for online learning over face-to-face instruction was larger for studies using blended learning approaches (effect size of +0.35) than studies using pure e-learning (effect size of +0.14). Also, the advantage of online learning over face-to-face was larger for studies where:

  • Online students spent more time learning than did those in the face-to-face class
  • The online and face-to-face conditions varied in terms of content and instructional approach

Taken as a whole, the findings suggest that the observed advantage of online is a product of redesigning the learning experience, not the medium.

There were an additional 51 variables related to teaching practices that were also analyzed.  Unfortunately, the analysis of these teaching practice variables was limited for a variety of reasons. Often the studies simply didn’t describe the teaching practice, or two or more variables were too closely related to determine which was the effective one (or how much each contributed to the effect). There was also the danger of turning qualitative

Some of the factors that were found to improve learning included:

  • It stimulates more active engagement
  • It includes prompts for learner reflection
  • Learners have an element of control over their interactions with the software

Factors that were found to have no effect:

  • Adding additional media not related to the content to be learned
  • Adding multiple-choice quizzes

It was at this point that Barbara ran out of time. Robert Bernard was the next to take over, with his paper on “Advancing a Methodology for Comparing Alternative Distance Education Designs”. Robert began by describing the characteristics of systematic review – and there are a variety of tools to accomplish systematic review, one of which is meta-analysis. Between 2000 and today, there have been 15 meta-analyses conducted related to distance education and/or online learning. We know that this method of delivery can be as effective as classroom instruction – however, the quality of research that has been conducted is poor.

Robert then indicated that comparing distance education/online learning to classroom instruction is kind of like comparing cars to horses at the turn of the past century. These kinds of comparisons are useful at first, to make sure that it’ll do as good a job, but after a while they don’t make any sense.  This view is supported by others, for example:

“We need studies clarifying when to use e-learning (studies exploring strengths and weaknesses) and how to use it effectively (head-to-head comparisons of e-learning interventions)” (Cook, 2009).

He called for future studies to begin comparing different instructional transactions within the distance education/online learning environments. For example, comparing one online learning environment with another online learning environment based upon some instructional transaction (such as interaction). In fact, interaction was the focus of a study that Robert and his colleagues published in the Review of Educational Research in November 2009.

Robert and his colleagues looked at 74 studies that included designed interaction within the online learning environment:

  • 10 studies -> student-student (+0.49 effect size)
  • 44 studies -> student-teacher (+0.32 effect size)
  • 20 studies -> student-content (+0.46 effect size)
  • Overall – +0.38 effect size
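
As a quick arithmetic check on these notes, the +0.38 overall figure is consistent with a simple study-count-weighted average of the three category effects (the published analysis weighted by inverse variance, so this is only an approximation):

```python
# (number of studies, mean effect size) for each interaction category,
# taken from the session notes above
categories = [(10, 0.49),   # student-student
              (44, 0.32),   # student-teacher
              (20, 0.46)]   # student-content

# study-count-weighted average across the 74 effects
overall = sum(n * d for n, d in categories) / sum(n for n, _ in categories)
print(round(overall, 2))  # 0.38 -- matches the reported overall effect
```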

The study was not able to capture the level of interaction or the quality of interaction; it only focused on the nature of interaction treatments (i.e., instructional arrangements intended to foster interaction).

Some of the implications Robert found included:

  • Increasing interaction of all kinds leads to higher achievement gains
  • Support the content, as interaction with the content saw the highest effect sizes
  • (and about three others that I missed)

Robert finished with a series of recommendations for future research – all of the common ones that you’ve come to expect.

Marianne was the third panelist, speaking on the paper “Using Educational Data Systems to Support Policy-Relevant Online Learning Research”. This was essentially a follow-up to Barbara’s study, using the quantitative data from the examination of meta-analysis to develop case studies focused on specific teaching practices.  Marianne began by stating that there were only 5 studies in Barbara’s work that were focused on K-12 online learning, which caused a change in the focus of this particular study to these two research questions:

  1. How do the characteristics of students taking online courses compare with those of students enrolled in conventional courses?
  2. How does the effectiveness of online learning compare to face-to-face instruction when controlled for student characteristics?

The report from the study has been submitted to the Department of Education and is currently being reviewed (and should be published shortly).

The data was obtained from state and K-12 online learning program databases, and the factors that they considered in the work fell into four categories: student characteristics, course characteristics, teacher qualities, and student achievement.
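
The program abstract notes that FLVS students were paired with similar face-to-face students via propensity score matching. A minimal sketch of the matching step, assuming propensity scores have already been estimated (in the real study, via a model over grade, gender, lunch status, prior failures, etc. – the scores and IDs below are hypothetical):

```python
def greedy_match(treated, pool, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching without replacement.
    treated, pool: dicts mapping student ID -> propensity score.
    Only pairs whose scores differ by at most `caliper` are accepted."""
    matches, available = {}, dict(pool)
    # Match the highest-scoring (hardest to match) treated units first
    for tid, t_score in sorted(treated.items(), key=lambda kv: -kv[1]):
        best_id, best_gap = None, caliper
        for cid, c_score in available.items():
            if abs(t_score - c_score) <= best_gap:
                best_id, best_gap = cid, abs(t_score - c_score)
        if best_id is not None:
            matches[tid] = best_id
            del available[best_id]   # each control used at most once
    return matches

online = {"s1": 0.62, "s2": 0.40}               # hypothetical FLVS students
traditional = {"c1": 0.60, "c2": 0.41, "c3": 0.95}
print(greedy_match(online, traditional))         # {'s1': 'c1', 's2': 'c2'}
```

Outcomes are then compared only within the matched pairs, which is what lets the study claim the online/face-to-face difference is not just a difference in who enrolls.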

Some of the results that will be included in this yet to be released report:

  • “States are the most likely to have most of the adequate data and enrollment sizes, but:
    • Often don’t track online status of a course
    • Don’t have adequate enrollment per course
  • Online learning providers have great data, but don’t have access to comparison group data
  • Private providers often don’t have unique student IDs that can be matched to other sources
  • Districts are uneven in availability of data, but even those with good systems don’t tend to have sufficient enrollments
  • No comprehensive source for district data system characteristics”

Marianne did mention that Florida was the only state that had sufficient data, and one mid-Atlantic school district also had acceptable data.

Some of the lessons learned included:

  • “Administrative databases need to do a better job of tracking interventions generally (and especially online)
  • Enrollment number can be misleading
  • Anytime, anywhere nature of online learning requires enrollment dates
  • High payoff is likely to be in states with end of course exams
  • Need complementary research to examine student persistence, technical aptitude, and social supports, all of which are likely to play a role in student achievement but won’t be in educational data systems
  • Beware many providers in one location, it may be difficult to interpret differential outcomes across providers post hoc”

Following Marianne, Cathy took over with her first paper: “Data for Understanding Virtual School Practice”. This first paper was focused on the Virtual School Clearinghouse (VSC) project that is housed at the University of Florida. For those unfamiliar, the VSC is essentially a data warehouse that virtual schools can dump all of their data into, and the folks at Florida are able to conduct any number of analyses to see what might be happening based on all of these variables (which include more than 100 data elements) and, also, explore why things are happening. At present, there are 20-some states involved in the VSC.

Cathy then outlined one example of the kind of report that could be generated for the virtual schools that participate in the clearinghouse. After outlining some of the problems with comparing a report from one virtual school to a report from another virtual school, Cathy outlined recommendations needed for future research:

  • “Uniformity is needed in the virtual schools community around basic definitions in order to learn across school environments
  • Gaps in virtual school data systems mean losses of knowledge
  • However, individual virtual school data systems can be models for K-12 education nationally because of the scope of data collected and ten-year history of informed practice based on the data”

Cathy then moved to her second paper, “Structure of Virtual School Data Systems for Informing Practice”, which focused on a two-year evaluation of one large, statewide, K-12 public virtual school that has been in existence for only a few years (but had 19,000 course enrollments). It had both supplemental and full-time students.

The virtual school in question was able to link the records and information from the state system (demographic and performance data), school system (achievement and teacher data), and course system (LMS data).
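The session did not show how this record linkage was actually done; purely as an illustration of the idea, here is a minimal sketch in Python with pandas, joining three hypothetical extracts on a shared student identifier (all column names and values here are invented, not from the study):

```python
import pandas as pd

# Hypothetical extracts standing in for the three systems described:
# state (demographic data), school (achievement data), course (LMS data).
state = pd.DataFrame({"student_id": [1, 2], "lunch_status": ["free", "none"]})
school = pd.DataFrame({"student_id": [1, 2], "enrollment": ["full-time", "part-time"]})
course = pd.DataFrame({"student_id": [1, 2], "minutes_logged": [540, 210]})

# Link the three record sets on the shared student identifier.
linked = state.merge(school, on="student_id").merge(course, on="student_id")
print(linked.columns.tolist())
```

The key point is that each student appears once per system, so a simple key-based join is enough to assemble one analysis-ready row per student.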

The study focused on data from the 15 highest-enrollment high school courses, which accounted for 14,000 course enrollments. The variables included:

  • Minutes logged in
  • Number of times logged in
  • Number of teacher comments
  • Participation in free or reduced lunch
  • Full-time or part-time
  • Ethnicity/race
  • Grade in physical school
  • Individual course grade
  • Support from their brick-and-mortar school

Tom and Cathy used hierarchical linear modeling to analyze the data.
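No analysis code was shared in the session; purely as a sketch of what a two-level model like this looks like (course enrollments nested within courses), here is a minimal example using simulated data and the statsmodels mixed-effects API. Every variable name and coefficient below is invented for illustration, not taken from the study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300

# Simulated enrollments clustered within 15 hypothetical courses.
df = pd.DataFrame({
    "course": rng.integers(0, 15, n),
    "minutes": rng.normal(600, 150, n),   # time spent in the course
    "lunch": rng.integers(0, 2, n),       # free/reduced lunch indicator
})

# Grades depend on time in course and lunch status, plus a
# course-level random intercept (the level-2 effect HLM captures).
course_effect = rng.normal(0, 5, 15)
df["grade"] = (60 + 0.03 * df["minutes"] - 3 * df["lunch"]
               + course_effect[df["course"]] + rng.normal(0, 5, n))

# Two-level model: random intercept for each course.
model = smf.mixedlm("grade ~ minutes + lunch", df, groups=df["course"])
result = model.fit()
print(result.params["minutes"])
```

Grouping by course lets the model separate course-to-course differences from the student-level predictors, which is why HLM is preferred over ordinary regression for clustered data like this.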

Some of their findings included:

  • “Time students spent in the course was the most significant factor, influencing student grade significantly in 11 of 15 courses, regardless of student demographic group
  • Students who spent more time in the course performed better academically
  • Guideline: User-friendly course design that motivates students to spend more time in the system engaged in academic activities.
  • Participation in free or reduced lunch programs had a significant effect in 5 courses
  • Students not participating in the lunch programs performed better than students participating in these programs
  • Guideline: Instructional strategies to support students from low-income families.
  • Student status in the virtual school had a significant effect in 5 courses, 3 of which were advanced math courses
  • Full-time online students performed better than part-time students
  • Guideline: Advanced courses should include time management skills or learning strategies for part-time students to balance time between online and traditional learning environments.”

The discussant was unable to join the panel, so we then moved into the questions and answers.

May 3, 2010

VSM Podcast – AERA 2010: For Love, Not Money: Examining One Virtual School’s Course Development Process

VSM Podcast – AERA 2010: For Love, Not Money: Examining One Virtual School’s Course Development Process Introduction

This special edition AERA 2010 podcast is a recording of the third presentation that I made at the AERA 2010 conference, and was also the final Monday session related to K-12 online learning. It was described in the program as:

Scheduled Time: Mon, May 3 – 4:05pm – 5:35pm Building/Room: Sheraton Denver / Grand Ballroom Section 2
In Session Submission: Online and Virtual Learning

Authors: *Michael Kristopher Barbour (Wayne State University)
*Jim Kinsella (Illinois Virtual High School)

Abstract: As online learning continues to grow at the K-12 level, and as the range of students served broadens, more research is needed to examine the effective design and delivery of these learning opportunities. This case study explored the course development process of the Illinois Virtual High School to recommend improvements and specific design principles. Data collected using surveys and interviews indicated developers generally enjoyed their experience, but had mixed opinions about working in teams. Most developers were teachers, with little technical expertise, who were unable to take full advantage of the online medium. Finally, the absence of a standard template allowed developers freedom in their course design, but resulted in non-standardized online courses.

As always, the actual podcast is in the entry that immediately follows this one.

Note that this entry and the actual podcast are backdated to the time of the event.

AERA 2010 – Success In Online High School Biology: Factors Influencing Student Academic Performance

The third session on K-12 online learning that I had outlined for Monday at the AERA 2010 conference was:

Scheduled Time: Mon, May 3 – 2:15pm – 3:45pm Building/Room: Sheraton Denver / Grand Ballroom Section 2
In Session Submission: Innovations of Online Learning in K-12 Schools and Teacher Education

Success in Online High School Biology: Factors Influencing Student Academic Performance
Authors: *Feng Liu (University of Florida)
*Cathy Cavanaugh (University of Florida)

Abstract: This paper describes a study of success factors in Biology courses in a K-12 virtual learning environment. Students who completed Biology courses during 2007-2008 in a state virtual school in the Midwestern U.S. region participated in this study. The influence of student demographic information and learning management system (LMS) utilization on final scores in the tests administered at the end of the learning process was investigated. The hierarchical linear modeling (HLM) method was used for data analysis to account for the clustering of students’ final scores within the same school. The results show these success factors influence final scores in different ways. The discussion of the findings addresses the implications for teaching. Further research is proposed based on the results and limitations of this study.

The focus of this study was on the factors that affect K-12 student success in virtual school environments in science – specifically biology. One of the hypotheses the authors had was that these factors included student interaction within the course management system (e.g., number of times logged in, amount of time logged on, etc.), the level of teacher feedback, and student demographic information. These three areas were based upon the literature related to student success.

The data came from the Biology 1 and Biology 2 courses at a US Midwestern virtual school (approximately 200 students in the first course and about 100 in the second). The data were analyzed using hierarchical linear modeling (HLM) to determine which factors were relevant.

The results slides on the print-out were grayed out and the font was kind of small, but in the discussion, Feng mentioned that students who spent more time in the course management system performed better – based on the students’ final course score – than students who spent less time in the CMS.

Feng and his colleagues also found that a higher level of teacher feedback was positively correlated with the students’ performance.

Another significant factor that the authors found was related to the student demographics, specifically students who received a free or reduced lunch did not perform as well as students who were from higher socio-economic status backgrounds.

In looking ahead, Feng indicated that he would like to expand this line of inquiry to include additional science disciplines, and to begin to incorporate some qualitative data – specifically related to the teacher feedback.

Unlike the previous two sessions, I was not presenting during this time slot (my third roundtable isn’t until the next slot – and yes, I will endeavour to podcast that one too).
