Virtual School Meanderings

March 3, 2017

IRRODL Announcement 18.1

No K-12 distance, online or blended learning items here, but check out this open access research all the same.

IRRODL Issue 18(1)
Dear Readers,

IRRODL is pleased to bring you this special issue on Advances in Research on Social Networking in Open and Distributed Learning.  We thank Miltiadis Lytras and Hassan Mathkour for their commitment to research and for their guidance of this issue.

Dianne

Vol 18, No 1 (2017)

Special Issue: Advances in Research on Social Networking in Open and Distributed Learning

Table of Contents

Editorial

Editorial – Volume 18, Issue 1
Miltiadis Lytras, Hassan Mathkour

Research Articles

Hagit Meishar-Tal, Efrat Pieterse
Yasemin Gülbahar, Christian Rapp, Selcan Kilis, Anna Sitnikova
Mariana de Lima, Marta Elena Zorrilla
Wenke Wang, Yen Chun Jim Wu, Chih-Hung Yuan, Hongxia Xiong, Wan-Ju Liu
Gürhan Durak
Maria Macià, Iolanda García
Miroslava Raspopovic, Svetlana Cvetanovic, Ivana Medan, Danijela Ljubojevic
Andrés García-Floriano, Ángel Ferreira-Santiago, Cornelio Yáñez-Márquez, Oscar Camacho-Nieto, Mario Aldape-Pérez, Yenny Villuendas-Rey
Sergio Cerón-Figueroa, Itzamá López-Yáñez, Yenny Villuendas-Rey, Oscar Camacho-Nieto, Mario Aldape-Pérez, Cornelio Yáñez-Márquez
Wadee S. Alhalabi, Mobeen Alhalabi
Chien-wen Shen, Chin-Jin Kuo, Pham Thi Minh Ly
Higinio Mora, Antonio Ferrández, David Gil, Jesús Peral
Adolfo Ruiz-Calleja, Juan Ignacio Asensio-Pérez, Guillermo Vega-Gorgojo, Eduardo Gómez-Sánchez, Miguel Luis Bote-Lorenzo, Carlos Alario-Hoyos
Marcelo Careaga Butter, Eduardo Meyer Aguilera, Maria Graciela Badilla Quintana, Laura Jiménez Pérez, Eileen Sepúlveda Valenzuela

February 23, 2017

[JAEPR] New Issue Published

Nothing specific to K-12 distance, online and/or blended learning in this issue.  However, some interesting open access articles related to misconceptions around certain aspects of cognitive science (among other topics).

Readers:

Journal of Applied Educational and Policy Research has just published its
latest issue at https://journals.uncc.edu/jaepr. We invite you to review the
Table of Contents here and then visit our web site to review articles and
items of interest.

Thanks for the continuing interest in our work,
Missy Butts
University of North Carolina at Charlotte
cbutts4@uncc.edu

Journal of Applied Educational and Policy Research
Vol 3, No 1
Table of Contents
https://journals.uncc.edu/jaepr/issue/view/76

Contents
——–
Table of Contents
Missy Butts

Editorial
——–
Wonder-Ful Questions:  Introduction to a Response to the DFI Report
Rebecca Shore

Understanding of New Ideas
——–
How Do Students Understand New Ideas?  In Response to the Deans for Impact
Report (DFI)
Jacob Boula, Kristina Morgan, CarieAnn Morrissey, Rebecca Shore

Application of the Deans for Impact Report, The Science of Learning How Do
Students Understand New Ideas?
CarieAnn Morrissey, Jacob Boula, Kristina Morgan

Transfer of Information
——–
How do Students Learn and Retain New Information? A Response to the Deans
for Impact Report, The Science of Learning
Brian Spaulding, Dru Thomas, Charles Yearta, Aimee Miller

How do Students Learn and Retain New Information? A Practical Application of
the Deans for Impact Report, The Science of Learning
Brian Spaulding, Dru Thomas, Charles Yearta, Aimee Miller, Rebecca Shore

Problem Solving
——–
How Do Students Solve Problems? In Response to the Deans for Impact Report,
The Science of Learning
Yolanda Kennedy, Tracey Carney, Joey Moree

In Response to The Deans for Impact Report: How Do Students Solve Problems?
Practical Applications for Educators
Tracey Carney, Joey Moree, Yolanda Kennedy

Application of Knowledge
——–
The Science of Learning: Transferring Learning to Novel Problems
Richard Wells, Thanh Le

The Science of Learning: Practical Applications for Transferring Learning in
the K-12 and Higher Education Settings
Richard Wells, Thanh Le

Motivation to Learn
——–
What Motivates Students to Learn? Exploring the Research on Motivation
Amber Perrell, Julia Erdie, Theresa Kasay

What Motivates Students to Learn? Applications for All Classroom Levels
Amber Perrell, Julia Erdie, Theresa Kasay

Misconceptions
——–
Myths or Misnomers: Research-based Realities in the Classroom Literature
Review for Deans for Impact (2015)
Maria Leahy, Rebecca Shore, Richard Lambert

Teachers Can Untangle the Truth from Myth in the Classroom: Using an
Interdisciplinary Approach to “Developing the Brain.”  An Application of
Deans for Impact (2015)
Maria Leahy, Rebecca Shore, Richard Lambert

Conclusion
——–
Cognitive Science and Educational Research: A Shotgun Courtship
Richard Lambert

Left Brains, Learning Styles, and Who Cares What Time it is?
Rebecca Shore

________________________________________________________________________
Journal of Applied Educational and Policy Research
https://journals.uncc.edu/index.php/jaepr

Article Notice – Preparing Special Educators for the K–12 Online Learning Environment: A Survey of Teacher Educators

As I indicated yesterday in the Journal of Special Education Technology – Special Issue: Emerging Practices in K-12 Online Learning: Implications for Students with Disabilities entry, I’m posting the article notices from this special issue this week.

Pioneering research studies in teacher preparation in online settings have taken place, yet little to no work has been done specifically focused on teacher preparation for special education and learners with disabilities. In the present study, researchers from the Center on Online Learning and Students with Disabilities conducted a web-based survey of special education teacher preparation faculty to determine the level to which they were attending to online education preparation. The survey was developed with a specific alignment to the International Association for K–12 Online Learning (iNACOL) online teacher standards. The results of this survey pinpoint several areas of need in the preparation of teachers who will be working in online education and attending to students with disabilities in these settings.

Online learning, where instruction is provided (to varying degrees) over the Internet, is increasingly viewed as a viable means of providing education to K–12 students. The Christensen Institute for Disruptive Innovation predicts that at least half of all high school courses will be delivered online by 2019 (Horn & Staker, 2011), and several states are requiring online learning experiences (Evergreen Education Group, 2015). Under these circumstances, it is inevitable that students with many different kinds of disabilities are also entering online learning spaces (Basham, Stahl, Ortiz, Rice, & Smith, 2015; Evergreen Education Group, 2015).

Since the creation of the federally funded Center on Online Learning and Students with Disabilities, there has been an interest in what teacher work looks like in online settings and whether teachers come into these settings prepared for the realities of working with students, managing programs and devices, and interpreting the data generated from completed assignments. To answer these questions, several research studies within the Center have been conducted, which have further highlighted the need to support teachers in their work with students in online educational settings (e.g., Rice & Carter, 2015a, 2015b; see also Basham et al., 2015, for descriptions of studies). As the practice of online learning continues to expand, more targeted research that specifically addresses teachers of students with disabilities is needed. Minimally, it is important for the field of special education to be aware of the newly emerging issues associated with teacher preparation and online learning.

Teacher Competencies in Online Learning

While a great deal of research has focused on defining teacher quality in traditional settings, little is known about what constitutes teacher quality in virtual schools (Huerta & Shafer, 2015). In an example of early work, DiPietro, Ferdig, Black, and Preston (2008) sought to uncover online best practices in the Michigan Virtual School after Michigan became the first state to mandate virtual learning experiences as a graduation requirement in 2006. The researchers invited 16 fully certified Michigan Virtual School teachers with at least 3 years of experience to participate in their study of pedagogical practices. From the analysis of their interview data, 12 general characteristics, 2 classroom management strategies, and 23 pedagogical strategies emerged. These findings focused on the need for teachers to learn to develop curriculum and assessment using online resources rather than traditional ones, strategies for dealing with student behavior when students interact asynchronously, and technological skills for troubleshooting and for sharing those skills with others.

When Florida and other states began requiring online learning for all students, Cavanaugh, Gillan, Bosnick, and Hess (2008) investigated instruction in online algebra courses structured to allow both younger students, trying to accelerate their learning through an online course, and older students doing credit recovery to enroll. They found that teachers were generally unprepared to provide differentiated instruction to the range of learners enrolled in these courses. This finding drew attention to the notion that online teachers need to be prepared to meet the learning needs of both accelerated students and those seeking to recover credit, especially since online teachers are often not involved in deciding how their classes are structured.

For students with disabilities, Greer, Rowland, and Smith (2014) asserted that online instruction requires teachers with excellent communication skills, allowing teachers to interact with both students and adults in the home in varied formats (e.g., e-mail, written directions, phone calls, and periodic synchronous videoconferences or chats), because students with disabilities are often in need of more and clearer communication. Moreover, Rice and Carter (2015a) found that online teachers of students with disabilities value relationships with students as a primary means for decision-making, and they build these relationships through constant monitoring and contact as well as by listening to learners’ personal and family stories of hardship. What was different about this listening orientation is that it was done both synchronously and asynchronously, and the stories often came in small pieces that the teachers had to piece together, rather than as a continuous narrative that might emerge in a regular classroom.

In addition, the role of the online teacher who is serving students with disabilities requires unique skills to ensure that necessary instructional, legal, and ethical demands of special education are upheld at professional levels within an online school setting (Basham et al., 2015; Rice & Carter, 2015b). As a very recent example, Carter and Rice (2016, this issue) found that in practice, administrators had only emerging understandings about how to help teachers use technology to support student learning, especially if doing so required the use of any system beyond what was already in place. Since specific teacher preparation for online instruction is crucial to the success of all students, it is necessary to understand how teachers are being prepared to work in online environments (Basham et al., 2015).

Online Teacher Preparation

Teachers need certain kinds of new skills for online learning, but what has been done to prepare them to develop these competencies? In the chapter Teacher Preparation for K–12 Online and Blended Learning, Archambault and Kennedy (2014) reviewed the preparation of preservice K–12 online teachers, suggested areas for future research activities, and shared ideas for policy practice. Their review of research indicated that teacher preparation programs should provide preservice teachers with the skills needed to become successful online teachers, whereas in-service teachers could benefit from professional development training on online education.

Archambault and Kennedy’s (2014) chapter drew attention to a previous call for a closer alignment between teacher education programs and practicum experiences that provide real opportunities to grapple with the changing demands of blended and fully online K–12 learning. For example, Irvine, Mappin, and Code (2003) provided a very early call to the field for direction in teacher preparation, laying the foundation for subsequent program revisions. This early work was also critical in the eventual development of teacher education guidelines for preparing K–12 teachers for blended and fully online classrooms (International Association for K–12 Online Learning [iNACOL], 2011). (These guidelines, referred to as online teacher standards, are discussed in more detail later in this article.)

As teacher education programs continued to develop curriculum and practicum experiences for the online instructional experience, their efforts began to appear in the teacher education literature. For example, Iowa State University (ISU) made early efforts to provide and evaluate field experiences in online learning (Davis & Roblyer, 2005). ISU gathered data from these preservice/in-service partnerships for one study. Findings suggested that the preservice teachers were able to articulate new understandings of the basic attributes of teaching in an online environment. They also began to process the implications of planning and facilitating instruction for student learning.

In addition, Kennedy and Archambault (2012) conducted a national survey (all 50 states) of administrators, faculty, and staff in teacher education programs to examine alternative field experiences in virtual schools. The survey sought to understand how—or if—teacher preparation programs required or recommended that teachers have practical experiences working with an online teacher in the virtual classroom. Five hundred twenty-two responses, representing a 34% response rate, were collected. The majority of the respondents, 77% (n = 404), indicated that they did not offer such experiences, while 21.3% (n = 109) answered that they did. Upon further examination of responses, including actual descriptions of the virtual school practicums, only 1.3% (n = 7) reported partnering with a K–12 online learning program and were able to share what was required of the preservice teachers during this placement.

Teaching Standards for Online Learning

As a research base started to develop around teacher work in online environments and began to inform teacher preparation, attention turned to standards as a way to codify the skills necessary for online teaching to be successful. Ultimately, this research led to the development of iNACOL’s National Standards for Quality Online Teaching (see Table 1). These 11 standards were designed to provide states, districts, teacher preparation programs at institutions of higher education, and online K–12 programs with a set of guidelines for consideration in the development of teachers for the K–12 online classroom. Each of the 11 standards was structured to identify effective knowledge and understanding as well as the ability to implement the skills in the K–12 online classroom. The standards were meant to guide teachers through design, planning, strategy integration, and similar competencies that promote active and engaged learning.

Table 1. International Association for K–12 Online Learning Quality Standards for Online Teachers.

Spanning a broad knowledge and skills base, the iNACOL Quality Online Teaching Standards (2011) do not distinguish between elementary, middle, or secondary grades, or between general and special education. Instead, each standard attempts to identify knowledge and understanding for a specific area (e.g., setting clear expectations) followed by what the skill would look like in practice (e.g., be able to effectively communicate with students). Practical in nature, the standards serve as a guide or series of indicators from which to envision effective online teaching. By identifying knowledge and skills, state and district leaders are able to develop mechanisms with which to identify the appropriate professional for the growing online classroom. Likewise, teacher preparation programs and ongoing professional learning experiences can further foster the development of critical skills to successfully meet the needs of the online learner.

Finally, while these guidelines are referred to as “standards,” iNACOL is not an accreditation body. The standards are written for districts and organizations (and potentially universities) to reflect on their individual efforts to support the implementation of the teaching standards. Specifically, each standard indicator is associated with teacher knowledge and understandings as well as teacher abilities. The standards are then associated with a self-reflective rating system:

  • 0 = Absent—component is missing
  • 1 = Unsatisfactory—needs significant improvement
  • 2 = Somewhat satisfactory—needs targeted improvements
  • 3 = Satisfactory—discretionary improvement needed
  • 4 = Very satisfactory—no improvement needed
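
To show how a program might tabulate such a self-assessment, here is a minimal sketch in Python; the standard names and scores are hypothetical, and iNACOL publishes no such tool:

```python
# Hypothetical sketch: tallying self-reflective ratings on the iNACOL 0-4
# scale. The standard names and scores below are invented for illustration.
RATING_LABELS = {
    0: "Absent",
    1: "Unsatisfactory",
    2: "Somewhat satisfactory",
    3: "Satisfactory",
    4: "Very satisfactory",
}

def summarize(ratings):
    """Print each standard's rating, flagging those that need improvement (< 3)."""
    for standard, score in sorted(ratings.items(), key=lambda kv: kv[1]):
        flag = " <-- needs improvement" if score < 3 else ""
        print(f"{standard}: {score} ({RATING_LABELS[score]}){flag}")

# Example self-assessment for three hypothetical standards
summarize({
    "Online pedagogy knowledge": 3,
    "Clear expectations and communication": 2,
    "Data-informed instruction": 1,
})
```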

For students with disabilities, iNACOL standards should be considered within the context of a teacher preparation experience that is also aligned with the Council for Exceptional Children (CEC) standards, so that teachers develop the knowledge, skills, and dispositions to work effectively with students.

In order to help teacher educators and other stakeholders, such as school districts, make sense of the standards, Archambault and Kennedy (2012) examined three relevant teacher education guidelines and standards including those from iNACOL, the National Education Association, and the Southern Regional Education Board. The authors created a crosswalk of the necessary skills for teaching in the online classroom that fit into several categories: qualifications, professional development, and credentials; curriculum, instruction, and student achievement; online pedagogy; ethics of online teaching; communication/interaction; assessment and evaluation; feedback; accommodations and diversity; management; technological knowledge; and design. Ultimately, the crosswalk offered teacher educators and relevant accreditation entities a map of the necessary knowledge, skills, and dispositions teachers need in order to be successful in the K–12 online environment. Archambault and Kennedy used their crosswalk to further argue that online teacher preparation should align with standards for online teaching and recommended that preservice and in-service teachers should work with cooperating online teachers to model their best practices in the online classroom.

While this work was ongoing, other standards-making bodies, such as the CEC, were making revisions to their standards, but were not including online learning standards in their changes. Within special education, CEC professional standards are used by accreditation agencies, other professional organizations, and state education agencies to guide and further develop practice guidelines for the field. Thus, teacher preparation programs use CEC standards to determine what coursework and field experiences are critical for preservice teachers. The preparation programs then identify experiences, requirements, and outcomes based on these standards. While other professional standards may be applicable (e.g., specific state teacher education standards), CEC standards have a direct influence on special education teacher pre- and in-service development across the country (Scott, Gentry, & Phillips, 2014). One can see the conundrum that has emerged. The online learning standards operate as guidance only and the accrediting standards do not address online learning.

As online teacher preparation becomes increasingly anchored to these standards, the issue of preparation to teach students with disabilities lingers. With changes taking place in the education system, it is becoming increasingly important to better understand the standards for supporting the preparation of special education teachers relative to online learning. Given this growing need, researchers from the Center developed an initial study to measure the alignment of special education preservice teacher education to the iNACOL standards. Specifically, this study sought to answer the following questions:

  • How much exposure are special education preservice teachers receiving to K–12 online education principles?
  • How well are special education teacher preparation programs aligning to the iNACOL standards?

Participating Teacher Educators

Sixty-four special education faculty from the Higher Education Consortium for Special Education (HECSE) member institutions were recruited to complete an online survey concerning current efforts to prepare teachers for the K–12 online learning environment. Of the 64 recruited, 48 completed the survey, for a 75% return rate. Faculty members were recruited through the HECSE Board, with Center staff seeking a representative from each of the 64 HECSE institutions to complete the survey. Each faculty member was asked to complete the survey from their perspective, taking into consideration efforts underway across the special education teacher education program at their specific institution.

Table 2 offers a breakdown of the demographics of the 48 faculty participants: over 77% of the responding faculty taught in the high-incidence disability area, nearly 17% in the low-incidence area, and the remaining faculty in early childhood special education. With students with high-incidence disabilities making up 80% of all students with disabilities (U.S. Department of Education, 2016), we felt that the program-area breakdown was somewhat representative of the typical split between the high- and low-incidence areas, although early childhood may be underrepresented relative to national data (U.S. Department of Education, 2016).

Table 2. Respondent Demographics.

Because HECSE members might be preparing professionals for the special and general education classroom, depending upon their department structure or state licensure/endorsement requirements, our criteria for survey completion provided specific directions. That is, teacher educators were asked to complete the items based on coursework and teacher preparation requirements affiliated with licensure/endorsement obligations for special education.

Our demographic data also sought to understand years of teaching in higher education, age, gender, and previous experience in either attending or teaching a fully online or blended course in higher education. Table 2 shows that the largest block of teacher educators, 36.1%, had over 20 years of teaching experience in higher education, while 25.5% were at the other end of the continuum with 1–5 years. In addition, 75% of the teacher educators were female; nearly 44% were over 55 years of age. Over half of all respondents (56.5%) had taken an online course, and a significant percentage had taught either a fully online (61.7%) or blended (68.1%) teacher preparation course.

Measure Development

A one-page, web-based survey developed specifically for the study was the primary source of data in this research. Items were developed based on a review of the iNACOL Quality Online Teaching Standards. Formed in 2003, iNACOL has become the primary voice in K–12 fully online and blended learning planning through the production of policy papers, forums for sharing knowledge (e.g., an annual conference), and national quality standards across a variety of issues targeting blended and fully online K–12 learning. In 2011, iNACOL convened a group of experts to refresh and produce the second version of the National Standards for Quality Online Teaching (see http://www.inacol.org/resource/inacol-national-standards-for-quality-online-teaching-v2/). The standards were designed to provide states, districts, online programs, and institutions of higher education with a set of quality guidelines for what is needed to be effective in online instruction for the K–12 student. Each of the 11 standards is structured on two primary indicators: teacher knowledge and understanding, and teacher abilities. Each indicator includes a rating scale for teacher educators, district personnel, and others to use in order to identify whether the indicator is absent, unsatisfactory, somewhat satisfactory, satisfactory, or very satisfactory.

Center researchers reviewed the 11 standards and identified four overarching constructs that we believed represented the essential foci of the standards, their specific relationship to the needs of students with disabilities, and the unique qualities of the blended and fully online classroom. These constructs were developed through a four-step process. First, Center researchers reviewed the iNACOL standards and compared them with the current CEC standards. Next, iNACOL standards and indicators that aligned with CEC standards were separated for further review. The final step was to determine how to structure the remaining iNACOL standards and indicators. From these remaining elements, researchers identified constructs that represented primary themes of the standards and recommended components of teacher preparation experiences that foster skill development on the part of preservice teachers, specific to the blended and fully online K–12 classroom. The constructs were then organized into four areas: (a) establishing competence in using technology tools for the online classroom, (b) developing and integrating coursework experiences, (c) developing and implementing assessments that could be administered online, and (d) offering learning experiences that further promote K–12 online learning.

After defining and identifying the four constructs, Center researchers worked to develop corresponding items for each construct. Item development included review by Center researchers and colleagues in special education teacher education. A total of 21 survey items were developed, with each construct containing 4–6 items that more specifically reflected the content category (for the list of items, see Table 3). Questions pertaining to participant demographics were added, for a total of 25 individual items in the web-based survey. The survey was transferred to an online format using Qualtrics Version 12.018 (Qualtrics Labs, 2012) to ensure easy access for respondents.

Table 3. Teacher Education Survey on K–12 Online Preparation and Corresponding Responses.

Researchers developed this initial exploratory survey to identify teacher educators’ perceptions of how well they were integrating core knowledge, skills, and practices of K–12 online learning into their coursework as well as related practicum or field-based experiences; field testing was then conducted in the form of a pilot study. The survey was sent to a convenience sample of 33 special education teacher educators across the country, targeting colleagues who taught at large teacher education institutions. Of the 33 contacted, 20 completed the survey, with comments offered as part of the field-testing work. Feedback prompted the authors to revise item wording and restructure the Likert-type scale. As a result, the items affiliated with the iNACOL standards used a 4-point Likert-type scale to assess the extent to which teacher education faculty addressed the K–12 blended and fully online classroom (1 = not at all, 4 = 3 times or more) in assessment, instruction, and experience. Field testing also resulted in the addition of language specific to coursework or internship/practicum experiences. Researchers focused on frequency of experiences to determine the degree to which the practice was part of the teacher education program and, if so, the extent of the experience. The final items appear in Table 3.

Survey Administration Procedure

Participants were recruited from the 64-member HECSE organization in two distinct steps. First, a member of HECSE was asked to present an overview of the study at an HECSE Winter Summit held in Washington, DC. There, members were offered a brief introduction to the study and informed that a member of the Center on Online Learning and Students with Disabilities would be contacting them shortly after the meeting to determine their interest in participating in the study and to identify which faculty at their institution would be best prepared to complete the online survey.

Next, researchers with the Center called or e-mailed the HECSE representative to determine the institution’s interest in participating and which of their special education faculty would be best suited to complete the survey. Once an individual was identified, Center researchers sent an e-mail that provided an introduction to the study, a link to the survey, and basic directions on next steps, including a suggested time frame for survey completion. A unique web address was developed for each online survey, allowing Center researchers to determine the institution affiliated with each completed survey. Follow-up e-mail reminders were sent 2 and 4 weeks after the initial invitation to participate. In four instances, researchers called a participant to answer a question about who should complete the survey or to resolve a technical issue with accessing and completing it within the suggested time frame.

The web address for the Qualtrics-based survey was sent directly to each HECSE representative within 1 week of their confirmation to participate. Besides the 2- and 4-week reminders, participants received an automatic thank-you message after completing the online survey, as well as a follow-up e-mail from one of the Center’s researchers 1 week later. Forty-eight surveys were completed, for a return rate of 75% across the 64 HECSE member institutions initially contacted and engaged as part of this study.
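
The unique-address mechanism described above can be illustrated with a short sketch. This is a generic illustration of per-respondent tokens, not Qualtrics’ actual implementation (Qualtrics offers its own personalized-link features); the institution names and base URL are placeholders:

```python
# Generic sketch of tracking survey completions by institution through a
# unique token embedded in each survey link. Not Qualtrics' mechanism;
# institution names and the base URL are placeholders.
import uuid

institutions = ["Institution A", "Institution B", "Institution C"]

# Issue one opaque token per institution and remember the mapping.
tokens = {uuid.uuid4().hex: inst for inst in institutions}

def institution_for(token):
    """Resolve a completed survey's token back to its institution."""
    return tokens.get(token, "unknown")

for token in tokens:
    print(f"{institution_for(token)}: https://survey.example.org/s/{token}")
```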

Data Analysis

Data were analyzed using the percentages reported by the responding teacher educators and a ranking of the means for the various question items. Question items were then grouped loosely into seven categories: (1) technology use for student engagement, (2) instruction and feedback, (3) instructional design, (4) assessment design, (5) legalities and safety, (6) standards-based teaching, and (7) professionalism. In looking at these categories, several patterns emerged, which are reported in the “Findings” section.

Table 3 provides a summary of the faculty ratings related to competence in using and integrating technology on the part of the faculty, and usage on the part of the preservice teacher education student, as addressed in their current coursework. Using the 4-point scale with 1 = not at all, 2 = once, 3 = 2–3 times, and 4 = more than 3 times, faculty reported mean ratings as high as 3.42 and as low as 1.44.
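
As a rough illustration of this kind of analysis, the sketch below computes and ranks mean ratings on the 4-point scale; the items and responses are fabricated, and this is not the authors’ code or data:

```python
# Sketch of the reported analysis: mean ratings on the 4-point scale
# (1 = not at all ... 4 = more than 3 times), ranked across survey items.
# Item names and responses are fabricated for illustration only.
from statistics import mean

responses = {
    "Use technology to support student engagement": [4, 4, 3, 4, 2],
    "Create assessments for the online format": [1, 1, 2, 1, 1],
    "Align online curriculum to content standards": [1, 2, 1, 1, 2],
}

for item, ratings in sorted(responses.items(),
                            key=lambda kv: mean(kv[1]), reverse=True):
    pct_never = 100 * ratings.count(1) / len(ratings)
    print(f"{item}: M = {mean(ratings):.2f}, 'not at all' = {pct_never:.0f}%")
```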

Teacher educators self-reported several strengths. One strength was using established technologies to support student engagement, with 60.4% of teacher educators saying that they addressed this issue more than 3 times in a course. A second strength was anticipating new and emerging technologies, with almost half (47.9%) reporting that they addressed this 3 or more times in their courses.

However, a rather large majority of teacher educators also indicated a number of areas where they did not address issues concerning K–12 online instruction at all. These items were discussing legal issues for the K–12 online learning experience (72.9%), creating assessments for the online format (75%), creating assessments that are statistically valid for the online environment (68.1%), aligning online curriculum to K–12 content standards (63.8%), implementing online assessment (67.4%), modifying assessments based on student learning data (68.1%), and arranging materials to promote the transfer of learning in a K–12 online learning environment (60.4%). In summary, the teacher educators reported an emphasis on technology use for student engagement, but they have not yet been able to incorporate instructional design and assessment elements for blended or fully online learning into their courses.

The items that teacher educators reported including at least partially in their coursework 1–3 times included giving explicit instruction to students with disabilities in online settings (51.7%), giving instructional support to students with disabilities in online settings (48.6%), providing feedback to students using online tools (48.9%), holding conversations with students about Internet safety (48%), interacting professionally with colleagues (68.1%), and interacting with parents (47.1%). These percentages suggest some interest in addressing instruction in online settings more fully.

The purpose of this study was to assess how much exposure to K–12 online education special education preservice teachers receive in their special education teacher preparation programs, and how well special education teacher preparation programs are aligning to the iNACOL Quality Online Teaching Standards. The survey asked respondents, drawn from special education teacher education programs, to identify the number of times special education preservice teachers were exposed to the knowledge and skills associated with the iNACOL Quality Online Teaching Standards. Overall, the results of this survey indicated that teacher educators are willing to include the use of existing and emerging technologies in their practice and that they cover many of the topics outlined in the iNACOL standards at least once, especially those that involve direct interaction with students, parents, and colleagues. However, a majority of the teacher educators who participated in this survey also reported that they never addressed a number of critical topics related to instructional and curriculum design and assessment, especially when the assessments involved using student data.

The findings of this survey in relationship to the growth of online learning are important because they suggest a critical need for online education to be better integrated into special education teacher preparation programs on these critical issues of instruction and assessment, as these are elements that contribute to student learning. However, it is unsurprising that the teacher educators would not know how to do these things: instructional and assessment design online likely requires different skills than doing so off-line, and models for well-designed courses and assessments are likely to be scarce given the newness of K–12 online learning. What the teacher educators were able to prioritize was getting teachers to interact with children and other stakeholders to provide support and hold conversations about their safety on the Internet and the general use of the Internet for instructional application.

Opening Eyes to Online Education

Recent years have seen the continued exponential growth of online learning, including fully online, blended, and personalized learning (Evergreen Education Group, 2015). In fact, the most recent Evergreen report (considered the most important annual metric of the field) notes that there is some form of online learning taking place in nearly every district across the United States. This means that, at any given moment, millions of K–12 students are working through online courses: included in those millions are students with disabilities.

This study indicated that teacher educators are able to consider elements of the iNACOL standards (nonaccrediting), but it is likely that the CEC standards (silent on online learning), which are the accrediting standards, are still taking precedence in program design. Even though teacher educators may be aware of things like legal issues in developing and implementing Individualized Education Plan documents in the brick-and-mortar setting, they have not been able to determine how to address these issues in the online setting. This parallels what Carter and Rice (2016) found among the online teachers who were working with special education students. But relationship building and the need to seek out a child and provide particular help are still at the forefront of teacher education work in preparing teachers to work with students.

Therefore, there are at least three critical suggestions grounded in the findings of this study. First, teacher education departments and other advocates of online learning within institutions of higher education should offer to collaborate with special education teacher educators and lend their support, especially for skills like instructional design and assessment. Through these collaborative efforts, each party would learn from the other.

Second, there is a need for accrediting bodies, including (although certainly not limited to) the CEC, to fully appreciate online learning and to use iNACOL standards to facilitate the incorporation of online learning issues in teacher preparation and teacher quality evaluation, especially where students with disabilities are concerned. This is particularly critical from a legal standpoint, where online learning, being offered as one of the instructional options for a local school district, is then considered a type of placement. Students with disabilities have to be included, and they have to receive those legally protected services that are directly related to instruction as well as those which allow them to more fully derive educational benefit from instruction. In the evaluation of programs, students with disabilities need to be identified and their achievement needs to be monitored as a large group and by disability category in order to determine if what teachers are doing is working. When it is found that these students are not achieving, evaluating bodies should delve deeper as to why, and schools should address the issues and continue to monitor students.

Researchers were not surprised to learn that online learning, particularly through technologies that support student engagement, was included in special education teacher preparation, given the ongoing focus on technology integration in teacher preparation as well as the emerging status of online teacher preparation and the lack of attention to online learning in special education standards. What is surprising is that so many special education teacher educators were willing to complete the survey when it should have been clear from the survey title that they would not be able to give favorable responses to all the questions. As researchers, we viewed this as a testament to the interest that respondents have in learning about and preparing teachers for new roles and responsibilities. This is especially important because of the large number of senior faculty who responded, suggesting that they are sensitive to online learning as a trend and would like to know what they can do and how they can help. We hope the findings of this survey will bring attention to this issue and also support teacher educators in learning how to give students with disabilities the choice and the chance to be successful at learning online.

Considerations for Future Research

Given the implications of these findings, additional research is needed. Special education teacher educators should be asked more about their work; especially important is working with those who were able to report that they were making some progress in including online learning preparation in their courses. How do they do this? What are their struggles? How do they learn about trends in online learning and the field in general, as well as stay abreast of research in special education? Finally, there should be additional research among standards-making bodies—both the accreditation and nonaccreditation groups—to learn about their interest in and efforts to include online learning, and about how they are thinking about student diversity in general as they move forward. With these goals in mind, the Center moves forward in its investigations.

Limitations

It should be noted that this research represents an initial study and an initial analysis of its findings. The considerations and complexities in teacher preparation should not be overlooked, and caution should be exercised when interpreting the findings of a single study and its potential impact on the field. Moreover, while this survey was distributed to institutions of higher education that belong to HECSE, these institutions represent only a sample of special education teacher preparation programs.

With online education taking place in nearly every district across the country (Evergreen Education Group, 2015), it is necessary for the field of special education to consider how special education preservice teachers are being prepared to work in these new learning environments. Readers are encouraged to read and reflect on the teacher preparation implications highlighted in this article and the other articles in this topical issue. Clearly, the field of practice is changing; those on the frontlines of teacher education should be cognizant of these changes in the preparation of teachers and the potential impact on students with disabilities and their families.

Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The contents of this article were developed under a grant from the U.S. Department of Education (no. H327U110011). However, those contents do not necessarily represent the policy of the U.S. Department of Education and you should not assume endorsement by the Federal Government. Project officer is Celia Rosenquist.

Archambault L., Kennedy K. (2012). Situated online: Theoretical underpinnings of field experiences in virtual school settings. In Maddux C. (Ed.), Research highlights in technology and teacher education (pp. 53–60). Chesapeake, VA: Society for Information Technology & Teacher Education.
Archambault L., Kennedy K. (2014). Teacher preparation for K-12 online and blended learning. In Ferdig R. E., Kennedy K. (Eds.), Handbook of research on K-12 online learning (pp. 225–244). Pittsburgh, PA: ETC Press.
Basham J. D., Stahl S., Ortiz K., Rice M. F., Smith S. (2015). Equity matters: Digital & online learning for students with disabilities. Lawrence, KS: Center on Online Learning and Students with Disabilities.
Carter R. A., Jr., Rice M. F. (2016). Administrator work in leveraging technologies for students with disabilities in online coursework. Journal of Special Education Technology, 31, 137–146.
Cavanaugh C., Gillan K. J., Bosnick J., Hess M., Scott H. (2008). Effectiveness of interactive online algebra learning tools. Journal of Educational Computing Research, 38(1), 67–95.
Davis N. E., Roblyer M. D. (2005). Preparing teachers for the “schools that technology built”: Evaluation of a program to train teachers for virtual schooling. Journal of Research on Technology in Education, 37(4), 399–409.
DiPietro M., Ferdig R., Black E., Preston M. (2008). Best practices in teaching K-12 online: Lessons learned from Michigan Virtual School teachers. Journal of Interactive Online Learning, 7, 10–35.
Evergreen Education Group. (2015). Keeping pace with K-12 digital learning. Retrieved from http://www.kpk12.com/wp-content/uploads/Evergreen_KeepingPace_2015.pdf
Greer D., Rowland A. L., Smith S. J. (2014). Critical considerations for teaching students with disabilities in online environments. Teaching Exceptional Children, 46, 75–91. Retrieved from http://doi.org/10.1177/0040059914528105
Horn M. B., Staker H. (2011). The rise of K-12 blended learning. San Mateo, CA: Innosight Institute. Retrieved from http://www.christenseninstitute.org/publications/the-rise-of-k-12-blended-learning/
Huerta L., Shafer S. R., Barbour M. K., Miron G., Gulosino C. (2015). Virtual schools in the US 2015: Politics, performance, policy, and research evidence. Washington, DC: National Education Policy Center.
International Association for K-12 Online Learning. (2011). National standards for quality online teaching. Vienna, VA: Author. Retrieved from http://www.inacol.org/wp-content/uploads/2015/02/national-standards-for-quality-online-teaching-v2.pdf
Irvine V., Mappin D., Code J. (2003). Preparing teachers to teach online: The role of faculties of education. In Lassner D., McNaught C. (Eds.), Proceedings of EdMedia: World Conference on Educational Media and Technology 2003 (pp. 1978–1981). Association for the Advancement of Computing in Education (AACE).
Kennedy K., Archambault L. (2012). Offering preservice teachers field experiences in K-12 online learning: A national survey of teacher education programs. Journal of Teacher Education, 63, 185–200. Retrieved from http://doi.org/10.1177/0022487111433651
Qualtrics Labs. (2012). Qualtrics (Version 12.018). Provo, UT: Author.
Rice M., Carter R. A., Jr. (2015a). When we talk about compliance it’s because we lived it: Online educators’ experiences supporting students with disabilities. Online Learning Journal, 19, 18–36.
Rice M., Carter R. A., Jr. (2015b). With new eyes: Online teachers’ sacred stories of students with disabilities. In Rice M. (Ed.), Exploring pedagogies for diverse learners online (pp. 205–226). Bingley, UK: Emerald Group Publishing.
Scott L. A., Gentry R., Phillips M. (2014). Making preservice teachers better: Examining the impact of a practicum in a teacher preparation program. Educational Research and Reviews, 9, 294. Retrieved from http://doi.org/10.5897/ERR2014.1748
U.S. Department of Education. (2016). Thirty-seventh annual report to Congress on the implementation of the Individuals with Disabilities Education Act, Parts B and C. Retrieved from http://www2.ed.gov/about/reports/annual/osep/2015/parts-b-c/index.html

Article Notice – Reading Achievement and Reading Efficacy Changes for Middle School Students With Disabilities Through Blended Learning Instruction

As I indicated yesterday in the Journal of Special Education Technology – Special Issue: Emerging Practices in K-12 Online Learning: Implications for Students with Disabilities entry, I’m posting the article notices from this special issue this week.

 

This study evaluated the effects of a blended learning instructional experience for sixth-grade students in an English/language arts (ELA) course. Students at two treatment schools participated in a blended learning instructional paradigm, and their ELA test scores were compared to one comparison school that used a face-to-face delivery. Other variables of interest were gender status, disability status, and student reading efficacy. The results of the analysis indicated that no significant changes in reading achievement were found that could be attributed solely to treatment versus comparison, to gender, or to disability status. Perhaps of greater significance to practitioners and researchers is the identification of person-level and programmatic-level factors that influence adoption and implementation of effective blended instruction. Implications are discussed.

Online education is growing rapidly: Between 2002 and 2011, the number of K-12 students enrolled in either partially or fully online schools increased from 220,000 to 1.8 million (Watson, Murin, Vashaw, Gemin, & Rapp, 2012). A review of the research literature did not identify the total number of students with disabilities enrolled in some form of online learning, but research in the state of Ohio indicates that students with disabilities may be overrepresented in online learning (Wang & Decker, 2014). Research is needed to determine the impact that online school programs have on the learning, achievement, and long-term outcomes of students with disabilities.

Online learning consists of two broad categories: blended and fully online use of computer instruction. Multiple definitions of “blended learning” exist. In the present study, the term follows Staker and Horn’s (2012) definition:

A formal education program in which a student learns at least in part through online delivery of content and instruction with some element of student control over time, place, path, and/or pace and at least in part at a supervised brick-and-mortar location away from home. (p. 3)

Staker and Horn differentiate between four models of blended learning: flex, self-blended, enriched virtual, and rotation. In a flex model, learning is customized to student needs and students move to different modalities as their individual needs require. In a self-blended model, students take online courses which supplement their existing traditional schooling. In an enriched-virtual model, students divide their time between the brick-and-mortar school and learning remotely. In a rotation model of blended learning, students rotate learning modalities throughout the week or day. Four implementations of the rotation model are practiced: station rotation, lab rotation, flipped classroom, and individual rotation. Station rotation involves students moving from one station to the next in the same classroom to learn different subjects. Flipped classrooms involve students viewing lectures remotely, then coming to school to practice and do work. Individual rotation is similar to station rotation except that individual students are rotated to specific stations based on the student’s learning needs, not all students necessarily rotate to every station. Finally, lab rotation consists of students moving to different locations on campus to learn a subject (or subjects) predominantly online. Lab rotation describes the implementation of blended learning studied in the present article.

Research Support for Blended Learning

For the general student population, blended learning may be a more effective learning environment than the traditional brick-and-mortar school. In a meta-analysis of 45 studies on blended learning, Means, Toyama, Murphy, and Baki (2013) reported that blended learning tends to be more effective than traditional face-to-face learning, and that fully online learning’s effectiveness is equivalent to face-to-face instruction. Seven of the included studies focused on K-12 learners. The meta-analysis predominantly sampled students in the general population, including only one study on students with disabilities. That one study (Englert, Zhao, Dunsmore, Collings, & Wolbers, 2007) did, however, demonstrate support for the effectiveness of blended learning: Englert et al. found that a web-based instructional program produced superior improvements in writing achievement for students with disabilities compared to instruction provided using a paper-and-pencil modality.

In addition to considerations of effectiveness, the reasons that many students with disabilities enroll in online schools, including blended learning, are indicative of other potential benefits. Work by Rhim and Kowal (2008) indicates that, for students with disabilities, online instruction offers the potential for individualized instruction and appeals to parents seeking ways to optimize their child’s learning. Burdette, Greer, and Woods (2013) interviewed state special education (SPED) directors regarding why districts are moving to more blended and fully online instruction. In discussing parent motivation, the state directors indicated that online learning holds potential for more flexibility and alternatives to traditional scheduling and instructional methods.

Disagreements and Concerns About Online Learning’s Efficacy

While enrollments in online learning continue to increase, some evaluation results in national- or state-specific studies have not been positive. The Center for Research on Education Outcomes (CREDO; Woodworth et al., 2015) conducted a study in 18 states to explore the outcomes of fully online learning in charter school settings. In this study, a fully online school was defined as a public school, operating as a charter school under state law, that used online learning as its primary means of curriculum delivery. Woodworth et al. concluded that fully online schools overwhelmingly produced weaker achievement for students with disabilities when compared to traditional (i.e., brick-and-mortar) schools.

In Michigan, online enrollments have increased significantly among high school grade levels (Friedhoff, 2015)—the course completion rates have not. The percentage of online enrollments with a “completed/passed” outcome was 57% in 2013–2014, down 3% from the previous year. In contrast, the same learners had completed/passed rates of 71% in their face-to-face courses. The students who did not take courses online had an 89% completed/passed rate. Thus, the opportunities afforded by online instruction have not yielded correspondingly improved outcomes.

The conclusions from a recent qualitative study conflict with the above findings in terms of online learning’s efficacy for students with disabilities. Franklin, Rice, East, and Mellard (2015) interviewed five administrators of blended learning programs regarding the enrollments, persistence, progress, and achievements of students with disabilities. These program administrators indicated that, in the blended programs they oversaw, students with disabilities were outperforming their peers without disabilities in terms of growth in academic achievement. This finding is surprising given that CREDO’s research on fully online charter schools showed that online programs produce weaker achievement outcomes for students with disabilities than do traditional schools and that research in traditional schools indicates that students with disabilities tend to have lower achievement levels than students without disabilities (e.g., Cortiella & Horowitz, 2014; Wagner, Cameto, & Levine, 2006). Further, research shows that the gap between students with disabilities and those students without disabilities tends to grow larger as children move into higher grades (Klein, Wiley, & Thurlow, 2006), which implies that students with disabilities’ academic growth rate is slower than that of their peers without disabilities. The claim, then, that students with disabilities’ achievement in blended learning is growing faster than their peers without disabilities runs contrary to what would be expected. Thus, these claims require further investigation.

The performance gap between students with and without disabilities has been most prominent in reading achievement (Wagner et al., 2006). Because reading is the area in which students with disabilities have historically demonstrated the most difficulty, this study focused on students’ reading achievement growth. In order to account for important contributors to reading achievement, the design also incorporated two other variables: students’ self-efficacy rating and gender status.

Variables Important to Academic Achievement

Self-efficacy, as defined by Bandura (1986, p. 391), is “people’s judgments of their capabilities to organize and execute courses of action required to attain designated types of performances.” Wigfield and Guthrie (1997) concluded that reading self-efficacy was one of the strongest predictors of academic achievement. The authors also found that female students were generally more efficacious (i.e., judging themselves as more capable on reading tasks) than were male students. Because reading efficacy predicts reading achievement, a reasonable hypothesis is that males’ lower reading efficacy would result in lower reading achievement. The available literature generally confirms this hypothesis. Lietz (2006) conducted a meta-analysis of 139 studies on gender differences in reading achievement at the secondary school level. Lietz concluded that female students consistently outperformed their male peers on measures of reading achievement. The present study investigated the impact of gender on changes in reading achievement over time for both SPED and general education students in a blended learning environment. The study also examined the relationship of reading efficacy with student achievement within a blended learning curriculum.

Based on the research cited above, this study investigated the relationship among disability status, gender status, and self-efficacy in regard to reading achievement in blended learning. The research hypotheses of this study predicted finding significant group differences between categories of gender and disability status, as previous research had found these differences in traditional (e.g., nonblended) schools. Researchers also predicted the continued significance of self-efficacy as it relates to academic achievement in blended learning. Research questions included:

  1. Does the use of a supplemental blended learning curriculum lead to different student growth (changes in reading test scores over time) than a traditional (i.e., nonblended) classroom curriculum?
  2. Does the amount of exposure to (i.e., dosage of) the treatment lead to different levels of change in student reading achievement?
  3. Do students in SPED have different trends of reading growth than general education students?
  4. Are there differences in student reading growth depending on student gender?
  5. Does student reading efficacy continue to correlate with student performance in a blended learning environment?

In this quasi-experimental study, the growth of sixth-grade students’ English/language arts (ELA) test scores in two blended learning schools was compared to that of sixth-grade students in one traditional school. Because these schools were not selected at random, selection effects must be considered in the analyses and findings. This study examined growth over the school year, using baseline academic ability as a covariate, rather than comparing only mean differences among schools at a single outcome time point. With this design, some selection effects can be accounted for, as student growth is not confounded by previous achievement.

Selection Procedures

The school district initially identified four middle schools for this study: two comparison (i.e., face-to-face) schools and two treatment (i.e., blended) schools. Schools were identified based on many factors, including geographic proximity to each other, level of technology implementation, building-level administrator support, and demographic makeup. After implementation of the study began, the building administrator of one comparison school chose not to participate and withdrew from the study. Due to district and project administrative concerns, adding a replacement school was not feasible. Thus, comparisons were made among the three remaining schools only.

District and School Demographics

The school district from which the samples were drawn was located in a suburban/rural area neighboring a large metropolitan area in the Southeastern United States. The county’s population was more than 200,000 in 2013. In 2015, the school district enrolled 41,000 students at 50 attendance centers: 28 elementary schools, 11 middle schools, and 11 high schools. Fifty-three percent of the students enrolled in prekindergarten through 12th grade in 2015 were eligible for free or reduced-price lunch (FRL).

Considerable differences were noted in the ethnic/racial makeup of the three schools, particularly between the two treatment schools (Blended English Language Arts [BELA1] and BELA2) and the single comparison school (teaching English language arts [TELA]). Enrollment at BELA1 had a considerably larger White population and a proportionally smaller Black population compared to BELA2 and TELA (see Table 1 for further details about the sample). Rates of FRL status varied considerably among the two treatment schools and the comparison school: BELA1’s FRL rate was 48.78%, BELA2’s was 49.49%, and TELA’s was 78.07%.

Table 1. Students’ Demographics in Final Analysis Sample.

Note. Values exclude any student below 10% on the percentage-of-activities-completed (by count) variable. Values are calculated using listwise deletion. Student demographic percentages were within 2% of state Department of Education (DOE) reported demographics for the 2014–2015 year, with the exception of those marked with an asterisk; all were within 3.5%. DOE reported the White percentage at BELA2 as 22.0%; for TELA, Black: 61.6%, Hispanic: 10.13%, and White: 21.1%. BELA = blended English language arts; TELA = teaching English language arts.

Design of the Intervention

This intervention had two components: general education classroom instruction and online instruction. The blended learning course was designed to teach students to read critically, analyze text, and cite evidence in order to support ideas. The course also sought to improve vocabulary, listening skills, and grammar through explicit modeling and practice. Students also engaged in routine response writing activities based on the readings and more extensive essay writing.

The BELA program is a package of supplemental curricular materials for ELA in a blended classroom environment. This package of materials is commercially available and was licensed by the district from BELA, Inc. The students’ specific outcomes, as stated in the BELA program overview, were:

  1. reading complex texts at grade level;
  2. understanding and analyzing the structure and elements of literature from various genres;
  3. increasing academic and domain-specific vocabulary;
  4. using text evidence to analyze, infer, and synthesize ideas;
  5. engaging in routine writing in response to texts read and analyzed;
  6. using the writing process to complete a variety of essay writing assignments;
  7. using research skills to access, interpret, and apply information from several sources;
  8. gaining the tools for speaking and listening in discussions and presentations; and
  9. learning a variety of real-world and digital communication skills.

The BELA program is designed to complement the physical classroom curriculum. In this study, students in the two treatment classrooms spent several 50- to 70-min periods per week throughout the school year working on BELA curricular materials in a computer lab. The BELA computer lab sessions supplemented daily (or near daily) face-to-face instruction in a physical classroom with a certified ELA teacher. The study was conducted during the second year of implementation at these two schools; 2013–2014 was their pilot year.

Students were informed that the blended course would require the same amount of effort as courses taught in the traditional classroom. In the blended course, students were required to participate via interactive lessons, which included direct instruction and modeling of skills in developing reading comprehension. These computer–student interactive lessons also included guided and independent reading activities. The online component incorporated a range of assignments, such as answering comprehension questions and completing on-screen grammar exercises, short writing, and extended writing. Formative assessments included quizzes, tests, and exams, each incorporating progressively more items and more comprehensive content reviews. The lessons were designed using best practices in multimedia instruction to reduce cognitive load and help students learn more effectively, such as using audio narration, presenting complex content through two modalities, avoiding split attention, and segmenting content into parts.

The lessons also incorporated universal design for learning (UDL; Center for Applied Special Technology, 2011) principles. UDL principles included in the instructional design were multiple means of representation (e.g., video lectures, graphic displays, simulations, closed captioning, and text to speech), multiple means of action and expression (e.g., discussion forums, multimedia composition software, virtual manipulatives, and graphing calculators), and multiple means of engagement (e.g., self-pacing, pause and rewind, features to highlight/mark up text, and tools to take notes electronically). Teachers were expected to interact with students via digital discussion, e-mail, chat, and system announcements, and students were expected to interact digitally with one another. Students were assigned 12 units throughout the school year, grouped by quarterly themes: identity, perseverance, heroism, and community.

Students at TELA, the comparison school, did not participate in blended learning and received all of their ELA instruction in a face-to-face classroom. TELA used the same student goals and outcomes and used the same district-adopted curriculum as the treatment schools. All teachers had flexibility in choosing supplemental materials to meet their students’ needs.

Implementation of Intervention

BELA1 implemented the intervention as semester-length courses in which students spent 70 min of every school day (5 days a week) in the computer lab working on BELA curricular activities. These lab sessions were monitored by two paraprofessionals and the computer lab had no more than 70 students in it at one time. In addition to the BELA curriculum, students spent 50 min each school day in a traditional classroom receiving instruction from a certified ELA teacher.

BELA2 implemented the intervention as a series of 4- to 6-week courses in which students spent 2 days a week, for 50-min periods, in the computer lab engaged with the BELA curricular activities. The computer lab was monitored by a certified ELA teacher and the lab had no more than 35 students in it at one time. Students also spent 2 days a week, for 50-min periods, in a traditional classroom receiving instruction from a certified ELA teacher.

Students at TELA did not engage with the BELA curricular activities. Students attended 50-min ELA instruction 5 days a week, in which they were instructed face-to-face by a certified ELA teacher.

The above information and additional implementation information are presented in Table 2. Several important distinctions existed between implementation at the two BELA schools. Three key differences were: At BELA1, the BELA program was delivered as a semester-long course while at BELA2, the same content was organized in shorter topical units of 4–6 weeks in duration; the amount of time students spent at computer labs differed considerably, with BELA1’s students spending 350 min a week on BELA curricular activities, compared to BELA2’s total of 100 min per week; and lab sizes were much larger at BELA1 compared to BELA2—70 versus 35 students. It is worth noting, however, that BELA1’s students were permitted to work on any BELA curricular material during lab time, not only ELA material, whereas BELA2’s students spent the entire 100 min each week on BELA ELA material.

Table 2. ELA and BELA Instructional Opportunities.

Note. ELA = English language arts; BELA = blended English language arts; NWEA = Northwest Evaluation Association; MAP = measure of academic progress; TELA = teaching English language arts.

Differences between BELA1 and BELA2 also existed in professional development and instructional coaching of staff. Teachers and staff at all three schools were offered professional development on the use of the Northwest Evaluation Association (NWEA) measure of academic progress (MAP) assessments (NWEA, 2003). The professional development activities were scheduled and conducted by NWEA and by BELA, each covering its respective contribution to the treatment condition. Teacher participation in these activities varied, and teachers at the two treatment schools received differing amounts of professional development in the use of BELA. At BELA2, teachers did not participate in the initial orientation and implementation sessions; the computer lab instructor at BELA2, however, did participate. At BELA1, two of the three teachers participated in the professional development sessions, but the computer lab supervisors did not. These differences in implementation may have led to a considerable disparity in students’ exposure to and opportunities to learn the ELA curricular materials.

Student Participants

School district staff provided demographic information for 769 students across the three schools. From the total student sample, both pre- and posttest MAP scores were available for 497 students. Two of those 497 students were found to have completed less than 10% of their BELA curricular activities and were excluded from analysis. Of the remaining 495 students, 355 students were enrolled in the treatment schools and 140 were enrolled in the comparison school. See Table 1 for a breakdown of student demographics.

Approximately 10.5% of the total student sample (82 students) were designated as having an individualized education program for SPED services. For the respective schools, BELA1’s percentage of students in SPED was 6.0%; BELA2’s, 10.3%; and TELA’s, 16.2%. Forty-four SPED students were included in the analysis sample after listwise deletion. For the respective schools, the percentage of students in SPED with complete data was 2.7% at BELA1, 11.2% at BELA2, and 14.3% at TELA. The students’ specific disability categories were not available to the researchers.

Classroom Environment

One researcher and two BELA staff members recorded observations in the ELA classrooms at the three schools in order to describe the classrooms. Teachers at all three schools appeared to spend the majority of their time among their students (i.e., moving among the rows or work groups) or at the front of the classroom. The instructional grouping (i.e., how students were configured during instructional time) was mostly whole-group instruction, followed by one-on-one instruction. The students did some work in small group configurations of 2–4 students. Students worked mostly on worksheets or read from reading materials (e.g., textbooks or novels). The overall impression of the observers was that classroom management was good and that students appeared to be engaged and on task.

Teacher behavior was typically divided among three key areas: directing the students (e.g., telling students which book to use), attending to the students as they engaged in activities (e.g., monitoring students as they read silently), and communicating academic content.

Noticeable differences were observed between the computer labs at BELA1 and BELA2; lab configurations, dynamics, and numbers of students varied between the settings. Compared to BELA2, BELA1 had substantially more students. BELA1 also had two different lab settings, while BELA2 had only one, and BELA1 had two staff members in the role of instructional aides. These instructional aides focused on maintaining classroom order, providing technical assistance, and addressing some content questions. At BELA2, a certified teacher, as opposed to an instructional aide, provided closer supervision and monitoring of student activities. She appeared to have fewer classroom management issues, possibly due to the smaller class size.

Measures
Measures of academic progress

The efficacy of the intervention was assessed using the reading section of the NWEA MAP. The MAP reading test is a Common Core-aligned, computer-adaptive assessment administered to students in Grades 3–12. The MAP is adaptive in the sense that the difficulty of each subsequent question is based on student performance on preceding items. Each MAP assessment uses the Rasch unit, an equal-interval scale score, to measure student growth and determine student mastery of various defined skills within disciplines. MAP scores have no set lower or upper boundaries, although scores are typically between 150 and 300 (NWEA, 2003). Marginal reliabilities for the fall and spring MAP reading tests for sixth-grade students were .94 in the validation study. Because MAP scores are normed separately for fall and spring, a student may maintain the same scaled score throughout the year yet decline in normative percentile rank. Because of this effect, analysis in the present study was conducted using percentile ranks to better illustrate trends in normative student performance.
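To illustrate the norming effect described above (the same scaled score mapping to a lower percentile rank later in the year as norms rise), here is a minimal sketch of a score-to-percentile conversion. The norm means and standard deviations below are invented placeholders, not NWEA’s published values.

```python
from scipy.stats import norm

# Hypothetical grade-6 reading norm parameters (mean, SD) by season;
# real values would come from the published norms tables.
NORMS = {"fall": (211.0, 14.0), "spring": (216.0, 14.5)}

def percentile_rank(score: float, season: str) -> float:
    """Convert a scaled score to a normative percentile rank for a season."""
    mean, sd = NORMS[season]
    return 100 * norm.cdf((score - mean) / sd)

# The same scaled score falls in percentile rank as the norms rise.
print(percentile_rank(214, "fall"))    # ~58th percentile
print(percentile_rank(214, "spring"))  # ~45th percentile
```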

The MAP was administered to students at each school 3 times throughout the school year: September, January, and May. From the total student sample, as specified above, 495 students completed all three assessments.

BELA average percentage of activities completed (i.e., dosage)

BELA’s web administrator provided the students’ task completion data. Completed assignments across all computer-based ELA-related class activities were aggregated for the school year. This variable was used as a measure of treatment dosage, representing how much exposure a given student received to the BELA curricular activities.

BELA average overall grade

Students were graded on several curricular activities each semester within the BELA program. The average grade across all such activities for the entire school year was used in the analysis. The average grade does not reflect student performance on ELA classroom activities outside of the online platform. These data were not available for the comparison school because the measure is specific to online activities.

Student reading efficacy

A short survey was used to measure students’ reading efficacy, and this measure was given twice, first in January and second in May. The survey used four questions from Wigfield and Guthrie’s (1997) study on motivation in reading. Nine items were originally selected from the Wigfield and Guthrie Motivation for Reading Questionnaire (revised) based on researchers’ appraisal of relevance to the current study. Students were given a 43-item survey which utilized these nine questions as well as 34 items from other sources which inquired about additional dimensions of students’ noncognitive profiles (e.g., behavioral dissatisfaction). These other constructs were not used in the present study. A principal components analysis using a varimax rotation was conducted with the January administration sample. Individual items with poor factor loadings were successively pruned from the analysis until simple structure was obtained. The analysis resulted in a 33-item instrument with seven factors. The only factor used in the present study was reading efficacy.
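The pruning procedure is described only at a high level in the text. As a rough illustration (not the authors’ code), the sketch below shows one way such successive item pruning could be implemented with the factor_analyzer package; the 0.40 loading cutoff and the data layout are assumptions, not the authors’ stated criteria.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

def prune_to_simple_structure(responses: pd.DataFrame, n_factors: int = 7,
                              cutoff: float = 0.40) -> pd.DataFrame:
    """Iteratively drop the item whose best varimax loading is weakest.

    responses: one column per survey item, one row per student (January
    administration). Stops once every remaining item loads at or above
    the cutoff on some factor (a rough proxy for simple structure).
    """
    items = responses.copy()
    while True:
        fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax",
                            method="principal")
        fa.fit(items)
        loadings = pd.DataFrame(fa.loadings_, index=items.columns)
        best_loading_per_item = loadings.abs().max(axis=1)
        if best_loading_per_item.min() >= cutoff:
            return items  # simple structure reached
        items = items.drop(columns=[best_loading_per_item.idxmin()])
```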

Although nine items were originally included from Wigfield and Guthrie’s (1997) reading motivation scale, only four items remained in the reading efficacy factor after the principal components analysis: “I don’t know if I will do well in reading this year,” “I am a good reader,” “I read because I have to,” and “I don’t like reading something when the words are too difficult.” Students recorded one of five fixed response choices: totally untrue, mostly untrue, somewhat true, mostly true, and totally true. On this efficacy measure, 698 students responded during the first administration and 652 responded during the second; 563 students completed both surveys. Cronbach’s α was calculated for the 4-item scale using the January administration, as it was the larger of the two samples; the value was .656 for the 657 students who answered all four questions.
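For reference, Cronbach’s α for a short scale like this can be computed directly from the item-score matrix. Below is a minimal, generic implementation, not the authors’ code; whether and how the negatively worded items were reverse-scored is an assumption.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students, k_items) matrix of item scores.

    Negatively worded items (e.g., "I read because I have to") would
    typically be reverse-scored before this computation.
    """
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1)     # per-item variances
    total_variance = x.sum(axis=1).var(ddof=1) # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```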

Analysis

The analytical approach was to study changes in student MAP test scores over time and how rates of change differed by gender, school, and SPED status. A separate correlational analysis was performed to investigate the relationship between reading efficacy and student reading test scores.

MAP scores

A repeated measures analysis of covariance (repeated ANCOVA) was performed on the results of the January and May administrations of the MAP reading test, using the September MAP reading score as a covariate; students’ percentile ranks were used in the analyses. SPSS (Version 23) software was used to compute the analysis. The analysis tested for change in students’ percentile rank between the January and May administrations while also examining interaction effects with school setting, SPED status, and gender. The students’ mean MAP percentile rank and corresponding standard deviation for each school are included in Table 3. Table 3 also includes the aggregate of the students’ MAP percentile rank scores and average grade broken out by SPED status.
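The analysis was run in SPSS. As a rough open-source analogue (not the authors’ exact procedure), a mixed model with a random intercept per student can approximate a repeated measures ANCOVA. All data and column names in the sketch below are synthetic placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the long-format data: one row per student per
# administration (January, May); column names are hypothetical.
rng = np.random.default_rng(0)
n = 200
students = pd.DataFrame({
    "student": range(n),
    "school": rng.choice(["BELA1", "BELA2", "TELA"], n),
    "sped": rng.choice(["SPED", "GenEd"], n, p=[0.1, 0.9]),
    "gender": rng.choice(["F", "M"], n),
    "sept_rank": rng.uniform(1, 99, n),  # September covariate
})
df_long = pd.concat(
    [students.assign(time=t, pct_rank=students.sept_rank + rng.normal(0, 10, n))
     for t in ["January", "May"]],
    ignore_index=True,
)

# Mixed-model analogue of the repeated ANCOVA described in the text.
model = smf.mixedlm(
    "pct_rank ~ sept_rank + time * gender * sped + time * gender * school",
    data=df_long,
    groups="student",  # random intercept handles the repeated measures
)
print(model.fit().summary())
```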

Table 3. NWEA MAP Percentile Rank Means and BELA Average Grades by School and by Special Education (SPED) Status.

Note. Values are calculated using final analysis sample. Average grade is the percentage correct of students’ completed assignments in BELA. BELA = blended English language arts; NWEA = Northwest Evaluation Association; MAP = measure of academic progress; TELA = teaching English language arts.

Several indices of BELA1 and BELA2 students’ work in the BELA online program are included in Table 4. These indices reflect student engagement and achievement with the online curriculum, including the average grade for completed BELA assignments, time spent completing BELA curricular activities, and percentage of BELA assignments completed. Summary statistics of these three variables are included for each school.

Table 4. Descriptive Statistics for Students’ BELA Data.

Note. Values are calculated using final analysis sample. Mean, median, and standard deviation are rounded to the nearest integer. Skewness and kurtosis are rounded to the nearest 10th. BELA = blended English language arts.

MAP percentile rank test scores and average grades as broken out by gender and SPED status are included in Table 5. Table 6 provides the students’ MAP percentile rank scores and average grade as broken out by school and gender. The average grade for completed assignments in the BELA materials is not available for the TELA students because they did not access the online curriculum as part of the study.

Table 5. NWEA MAP Percentile Rank Means and Average Grades Split by Special Education Status and Gender.

Note. Values are calculated using final analysis sample. NWEA = Northwest Evaluation Association; MAP = Measure of Academic Progress; SPED = special education.

Table 6. MAP Percentile Rank Means and Average Grades Split by School and Gender.

Note. Values are calculated using final analysis sample. BELA = Blended English Language Arts; MAP = Measure of Academic Progress; TELA = teaching English language arts.

Average percentage of activities completed

The impact of treatment dosage on student achievement was tested using the percent complete variable as a covariate in the analysis. As in the previous analysis, student pretest results were also treated as a covariate, and the dependent variable was the change between the January and May MAP test percentile ranks. This test analyzed the extent to which students’ assignment or task completion data were useful for explaining variance in their achievement between the two MAP test administrations.
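A minimal sketch of a dosage test of this kind follows. It uses synthetic data, hypothetical column names, and a simple change-score regression rather than the repeated ANCOVA actually used, so it illustrates the idea rather than reproducing the analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical wide-format data: one row per treatment-school student.
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "sept_rank": rng.uniform(1, 99, n),
    "jan_rank": rng.uniform(1, 99, n),
    "may_rank": rng.uniform(1, 99, n),
    "pct_complete": rng.beta(8, 1, n),  # most students near 100% complete
})
df["change_rank"] = df["may_rank"] - df["jan_rank"]

# Does dosage (percent of activities completed) explain January-to-May
# change, controlling for baseline? A restricted-range covariate, as here,
# offers little leverage, consistent with the null result reported later.
print(smf.ols("change_rank ~ sept_rank + pct_complete", data=df).fit().summary())
```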

Reading efficacy

A separate correlational analysis was conducted to compare the results of the reading efficacy measure with student test scores. Student grades were also included in this analysis to provide a more complete picture of student achievement.
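A minimal sketch of a pairwise-complete correlation analysis like the one reported in Table 11 is shown below; the data and column names are placeholders, not the study’s data.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in data: efficacy scores, MAP percentile ranks, and
# BELA average grades, with some missing values.
rng = np.random.default_rng(2)
survey_df = pd.DataFrame(
    rng.normal(size=(600, 5)),
    columns=["efficacy_jan", "efficacy_may", "map_jan_rank",
             "map_may_rank", "avg_grade"],
)
survey_df.iloc[::7, 1] = np.nan  # simulate an incomplete second administration

# DataFrame.corr() uses pairwise-complete observations by default, matching
# the note under Table 11 ("Calculations are based on pairwise completeness").
print(survey_df.corr(method="pearson").round(3))
```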

Results

Figure 1 presents the average percentile rank scores for students in the three schools across the three test administrations (September, January, and May). The scores show a general decline in achievement: Averaged across the three schools, mean percentile scores fell from 46 in September to 42 in January and 35 in May, making the May administration the lowest of the three. The evaluation of students’ learning as measured by MAP reading percentile scores yielded statistically significant results for several factors in the research design. Statistical significance was determined using an α level of .05. Two three-way interaction effects were statistically significant: (1) Administration Time × Gender × SPED Status and (2) Administration Time × Gender × School. One two-way interaction was also statistically significant: Administration Time × School. The results of this repeated ANCOVA are presented in Table 7.

Figure 1. Average percentile ranks by school and time of test administration.

Table 7. Statistical Information From ANCOVA Analysis.

Note. SPED = special education; ANCOVA = analysis of covariance.

A post hoc analysis of the simple effects of the three significant interaction terms was conducted to better understand these results. Specifically, the effects were analyzed for significant change between the January and May test administrations. The Bonferroni adjustment was used to control the Type I error rate (i.e., because researchers conducted three post hoc tests, α was set to .0167).
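The study’s post hoc simple-effects tests were run within the ANCOVA framework. Purely to illustrate the Bonferroni logic, here is a sketch using a paired comparison; the function name and inputs are hypothetical.

```python
from scipy.stats import ttest_rel

ALPHA_FAMILY = 0.05
N_POST_HOC_TESTS = 3
alpha_adjusted = ALPHA_FAMILY / N_POST_HOC_TESTS  # = .0167, as in the text

def jan_may_change_significant(jan_ranks, may_ranks) -> bool:
    """Paired test of January-vs-May change, judged at the adjusted alpha."""
    statistic, p_value = ttest_rel(jan_ranks, may_ranks)
    return p_value <= alpha_adjusted
```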

Administration Time × Gender × SPED Status

For the three-way interaction term of Administration Time × Gender × SPED Status (Figure 2; Table 8), post hoc analysis was conducted by splitting the gender variable, then splitting the SPED status variable, and evaluating the score changes between January and May. Both female and male general education students performed significantly worse on the May MAP administration than on the January administration. Both male and female SPED students demonstrated no significant change (positive or negative) between the January and May administrations. As can be seen in Figure 2, male SPED students’ scores declined; this decline, however, was not significant at the adjusted α of .0167.

Figure 2. Interaction effect of Administration Time × Gender × SPED status.

Table 8. Adjusted Means Used in Interaction Effect Test for Administration Time × Gender × SPED Status.

Note. SPED = special education.

Administration Time × Gender × School

For the three-way interaction term of Administration Time × Gender × School (Figure 3; Table 9), post hoc analysis was conducted by splitting the gender variable, then splitting the school variable, and examining score changes between January and May. Female students at BELA1 performed significantly worse on the May administration compared to the January administration, while female students at the other two schools demonstrated no significant change in any direction. Males at both BELA1 and TELA performed significantly worse on the May test compared to the January test, whereas male students at BELA2 demonstrated no significant change.

Figure 3. Interaction effect of Administration Time × Gender × School.

Table 9. Adjusted Means Used in Interaction Effect Test for Administration Time × Gender × School.

Note. BELA = blended English language arts; TELA = teaching English language arts.

Administration Time × School

For the two-way interaction term of Administration Time × School (Figure 4; Table 10), post hoc analysis was conducted by splitting the school variable and looking at changes between January and May. Students at both BELA1 and TELA performed significantly worse on the May administration compared to the January administration, while students at BELA2 demonstrated no significant change in scores.

Figure 4. Interaction effect of Administration Time × School.

Table 10. Adjusted Means Used in Interaction Effect Test: Administration Time × School.

Note. BELA = blended English language arts; TELA = teaching English language arts.

The significant result of the Administration Time × School interaction indicates that, with the exception of students at BELA2, students actually did worse, normatively speaking, at the end of the year than they did in January. The meaning of this significant effect is difficult to interpret (see the Discussion section for elaboration).

Student Reading Efficacy

Correlations of students’ reading efficacy scores with MAP reading percentiles and overall BELA program grades are presented in Table 11. Correlations with test scores varied considerably depending on the administration times of both the survey and the MAP test. These positive correlational values ranged from small to moderate (.191 to .417). A similar range of values was found between the students’ reading efficacy scores and their average grades on their BELA assignments (.167 to .324).

Table 11. Correlations of Reading Efficacy With MAP Percentile Ranks and Average Grades.

Note. Decimals removed. Calculations are based on pairwise completeness. Sample sizes for the pairs ranged from n = 145 to n = 266. Test scores and grades include only those data that were at or above 10% of activities complete based on the average percentage of activities complete variable. BELA = blended English language arts; MAP = measure of academic progress; TELA = teaching English language arts.

Discussion

The purpose of this study was to investigate the relationship among disability status, gender status, and self-efficacy in regard to reading achievement growth in blended learning. The results suggest that students experienced significant outcome effects, as measured on the MAP reading percentile ranks, depending on their school of attendance, SPED status, or gender. The conundrum is that although the results are statistically significant, they generally reflect a significant drop in performance between the January and May test administrations. If the significant interactions found in this study were taken at face value, the conclusions would be that female SPED students’ scores, averaged across all schools, remained level, while both female and male general education students’ scores declined and male SPED students’ scores declined nonsignificantly; that female students’ scores at BELA1 declined while female students’ scores at the other two schools stayed the same; that male students’ scores at BELA2 stayed the same while male students’ scores at the other schools declined; and that scores at BELA1 and TELA declined while scores at BELA2 remained level. The researchers are disinclined to draw these conclusions, however.

In an intervention study such as this, the expectation is that students will achieve higher levels of performance over time; yet, in this study, scores generally declined (Table 3). The results contrast with what would be expected in both treatment and comparison schools; thus, their validity should be questioned. The results were not an appropriate evaluation of BELA, its instructional design, or relevant human–computer interaction features, and they were likely confounded by other factors that were not assessed in the study. The declining MAP scores raise several questions about whether the students were motivated to provide an accurate indication of their skills and abilities, particularly in the final test administration. In addition, several observations raised questions about the fidelity with which the BELA program was implemented.

This study’s first question concerned whether usage of the BELA ELA curriculum influenced students’ reading performance as measured on the MAP. As indicated in Table 2, the amount of instructional time for ELA and usage of the BELA curriculum varied substantially among the schools. This variation in instructional time and BELA usage was assessed in the repeated ANCOVA via the School × Time of Administration interaction term. Because these differences were part of the overall differences between schools, the school variable incorporates the instructional differences among the schools.

A significant effect was found for this two-way interaction, School × Time of Administration. The result, however, was not that experimental schools performed better than the control school, or vice versa. Rather, one school (BELA2) demonstrated an upward trend in student performance, while the other two schools (BELA1 and TELA, a treatment and a comparison school, respectively) demonstrated a downward trend. That only one of the treatment schools showed a positive effect indicates that the significant interaction effect cannot be attributed solely to the BELA curricular and instructional activities; it must instead be attributed to other factors that were not part of the manipulation. Further, the statistically significant result was predominantly due to students’ MAP score decline, despite expectations that all groups would make some detectable gains or at least stay level. This finding further complicates interpretation of the interaction and casts doubt on whether the effect is meaningful.

The second question, whether dosage of exposure to the BELA program related to changes in achievement, was answered in an analysis using BELA’s calculations of students’ percentage of completed assignments (i.e., of the assignments incorporated into the curriculum, the percentage a student completed). Again, no reliable effect was found: Dosage did not contribute to a significant improvement in student test percentile ranks. This finding is particularly troubling in that one expects that the more time students spend engaged in academic learning, the more their performance should improve. Most students completed the majority of their coursework (50% of students completed more than 90% of their activities). The percent complete variable may not be a useful measure of students’ dosage because very little variability existed in the percentage values for assignment completion.

The third question, whether students in SPED showed different trends of growth than did general education students, was answered by an interaction term from the repeated ANCOVA analysis. No significant interaction of SPED Status × Time of Administration was found. Both groups of students appeared to be progressing at a similar rate, which can be viewed as a positive outcome. Although students with disabilities performed below the level of students without disabilities, the achievement gap did not increase between the test administrations.

The fourth question, whether changes over time differ between genders, was also answered by an interaction term in the first repeated ANCOVA. Again, no significant interaction of gender by time of administration was found.

Finally, the fifth question, whether reading efficacy continues to correlate with reading achievement test scores, was answered by a correlation analysis. The results demonstrated weak to moderate correlations for both the blended treatment schools and the traditional comparison school. Because the MAP scores were also used in this analysis, these findings are considered very tentative.

Personal and Programmatic Influences

Regarding why most of the hypothesized effects were not found, several considerations seem plausible. Anecdotal reports from teachers, for example, indicated that students may have been fatigued around the time of the final MAP administration (in early May) due to recently having completed the state’s assessment, which took place during most of April. The MAP percentile scores generally show declines in performance on the May administration. If the students were fatigued and did not fully engage in the MAP assessment, the scores may not accurately represent their learning and achievement. In addition, the classroom instruction and issues with treatment fidelity are important to consider.

Although observations of the classroom, detailed in the Method section, were intended as notes for the researchers, they revealed several dimensions that are important to consider regarding treatment fidelity. Specifically, two substantial problems with treatment fidelity (i.e., parts of the treatment that were not implemented as intended) were that students did not have access to necessary audio-playback devices (e.g., headphones) for computer instruction and that student monitoring during lab sessions was insufficient.

Dane and Schneider’s (1998) research on treatment or implementation fidelity may help explain how effectiveness was compromised in the present study. Dane and Schneider identified five dimensions of intervention fidelity: adherence, exposure, participant responsiveness, quality of delivery, and program differentiation. Adherence is defined as the extent to which specific program components were delivered as prescribed. An example of adherence is whether the correct curricular materials were used. The exposure component refers to the number, length, or frequency of instructional or practice sessions. Participant responsiveness reflects the participation and enthusiasm of participants. Quality of delivery refers to qualitative aspects of intervention and includes the interventionist’s (i.e., the teacher’s) preparedness.

The fifth component of treatment fidelity is program differentiation. Program differentiation safeguards against diffusion of treatments; this component ensures that students received only the planned intervention (i.e., the ELA curriculum and the BELA curriculum). One might consider this component as instructional and curricular validity. A challenge in this study is that substantial variation was noted in the ELA instruction among the three schools. As indicated in the classroom observations, substantially larger lab sessions occurred at BELA1 versus BELA2, likely leading to differences in how students experienced instruction. These differences in instruction and curricular materials created a different BELA experience for the learners depending on which school the student attended. As a consequence of this ELA course variability, the level of congruity with the BELA program and the MAP assessment may have been different between schools. Consequently, the MAP reading items may not have been equally aligned with the students’ curricular and instructional activities, thus the scores may have had lower validity, that is, not accurately reflecting what the students were actually taught.

Without high levels of implementation fidelity, the evaluation is not an adequate or meaningful test of BELA’s effectiveness. A common means of assessing treatment fidelity is through classroom observations. Observers’ notes indicated that adherence and quality of delivery—two dimensions of fidelity—may have been unmet. Specifically, observers noted that at BELA1, blended learning activities were monitored by lab instructors rather than by certified teachers, and these lab instructors had not received the same professional development in the usage of the BELA product as had the certified teachers. As a consequence, the classroom instruction did not emphasize the students’ online instruction. Researchers speculate that this disconnect between the classroom curricular emphasis and the students’ blended online experience hindered their learning and achievement. The lab’s physical arrangement was also challenging given the high number of students in the setting. Generally, more than 50 students were present in the lab, which made monitoring and assisting students very challenging. Further, students did not have access to earphones for listening to the computer-based teaching. Students were in an instructional setting which required them to hear the computer-based presentation, but, due to the size of the class, they needed to keep the volume on their speakers low so as to reduce the overall noise in the lab. This situation may have compromised their ability to properly receive the intervention, thus further impacting adherence.

One of the paradoxes in the findings is that students appeared to demonstrate significant engagement with the BELA materials as indicated in the available metrics. Their usage time, percentage of task completion, and average grades were similar across the two treatment schools. In addition to potential test fatigue, the NWEA MAP is possibly not an appropriate criterion measure of ELA achievement in these schools. Students may have been engaged with the BELA supplemental materials but did not receive instruction paired directly with what was assessed on the NWEA MAP reading.

Although no significant effect favoring the treatment schools was found, the null result still provides valuable information. Despite problems with implementation, a notable finding is that the control group did not significantly outperform the experimental group. As with any comparison of a new treatment against an existing treatment (i.e., nonblended learning, in this case), a possible outcome is that the new treatment will prove less effective than the old. While one would hope that BELA would elicit a marked gain in student performance, the finding that it did not result in significantly lower performance is itself informative. Also, the finding that the trend in academic growth for students with disabilities was not significantly different from that of general education students may indicate that, at least in the environment studied in this research, students with disabilities are progressing at a rate similar to their peers in general education. Further, the results of this study shed light on some of the professional development and implementation factors that may have led to a lower quality of intervention delivery. Finally, the study found that reading efficacy continues to be an important factor in student reading achievement in this blended setting.

Limitations

This study has many limitations to consider. Despite attempts to conduct a well-designed quasi-experimental study, the methods were compromised by the very small sample of students with disabilities (in the data set) who had completed all three NWEA MAP administrations. This sample contained only 5 students at BELA1 and totaled only 44 students across the three schools. Clearly, with such a small sample, drawing conclusions is difficult, and statistical power for finding significant effects was limited. Future studies would benefit from a considerably larger sample. Sampling, more broadly, was also limited in this study. Specifically, the listwise complete sample for the student NWEA MAP test scores was considerably smaller than the initial sample. The smallest of the NWEA MAP administrations included 602 students, whereas the final sample with complete data included 495 students; more than 100 students were lost due to incomplete test administrations. As noted in the Discussion section, the study’s conclusions were also limited by what appeared to be test fatigue, resulting in student scores declining unexpectedly.

Conversations with the participating schools and staff about these limitations were beneficial. The schools plan to replicate the methods of the current study with more support for students in their blended work, using the MAP as a formative assessment, engaging teachers in more professional development, and changing the computer lab implementation to have fewer students per lab proctor and to ensure that the necessary technology (e.g., headphones) is provided. Along with these steps, BELA intends to perform regular fidelity checks.

Although the study results did not generally imply that the treatment was superior to the comparison condition, the findings and subsequent improvements that will be made by the participating schools in terms of implementation are likely to improve the learning opportunities of future students. Researchers interested in studying blended learning, or school officials who are interested in implementing a blended learning program, would benefit from learning from the limitations of this study so as to begin their investigations or implementations with these limitations resolved.

Authors’ Note

The contents of this article were developed under a grant from the U.S. Department of Education (#H327U110011). However, the content does not necessarily represent the policy of the U.S. Department of Education, and you should not assume endorsement by the Federal Government. The project officer is Celia Rosenquist.

We thank the school district, BELA, and the involved teachers and staff for their help with implementing the study, especially the data collection activities. We are grateful for the research partnership with the school district and with BELA, without which none of our work would have been possible. We thank BELA particularly for their help and support with data collection, professional development for staff, and implementation of the outcome measures used for the analyses. Finally, we thank BELA’s staff for answering our many questions about their curriculum and data sets and for their help with making sense of the outcomes.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.
Burdette, P. J., Greer, D. L., & Woods, K. L. (2013). K-12 online learning and students with disabilities: Perspectives from state special education directors. Journal of Asynchronous Learning Networks, 17, 65–72.
Center for Applied Special Technology. (2011). Universal design for learning guidelines version 2.0. Wakefield, MA: Author. Retrieved from http://www.udlcenter.org/sites/udlcenter.org/files/UDL_Guidelines_Version_2.0_(Final)_3.doc
Cortiella, C., & Horowitz, S. H. (2014). The state of learning disabilities: Facts, trends and emerging issues. New York, NY: National Center for Learning Disabilities.
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18, 23–45. doi:10.1016/S0272-7358(97)00043-3
Englert, C. S., Zhao, Y., Dunsmore, K., Collings, N. Y., & Wolbers, K. (2007). Scaffolding the writing of students with disabilities through procedural facilitation: Using an Internet-based technology to improve performance. Learning Disability Quarterly, 30, 9–29. doi:10.2307/30035513
Franklin, T. O., Rice, M., East, T., & Mellard, D. (2015). Enrollment, persistence, progress, and achievement: Superintendent forum (Report No. 1). Lawrence: Center on Online Learning and Students with Disabilities, University of Kansas. Retrieved from http://centerononlinelearning.org/wp-content/uploads/Superintendent_Topic_1_Summary_UpdatedNovember11.2015.pdf
Freidhoff, J. R. (2015). Michigan’s K-12 virtual learning effectiveness report 2013–14. Lansing, MI: Michigan Virtual University. Retrieved from http://media.mivu.org/institute/pdf/er_2014.pdf
Klein, J. A., Wiley, H. I., & Thurlow, M. L. (2006). Uneven transparency: NCLB tests take precedence in public assessment reporting for students with disabilities (Technical Report 43). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://education.umn.edu/NCEO/OnlinePubs/Technical43.html
Lietz, P. (2006). A meta-analysis of gender differences in reading achievement at the secondary school level. Studies in Educational Evaluation, 32, 317–344. doi:10.1016/j.stueduc.2006.10.002
Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record, 115, 1–47.
Northwest Evaluation Association. (2003). Technical manual: For use with measures of academic progress and achievement level tests. Portland, OR: Author.
Rhim, L., & Kowal, J. (2008). Demystifying special education in virtual charter schools. Alexandria, VA: TA Customizer Project, National Association of State Directors of Special Education. Retrieved from http://www.charterschoolcenter.org/sites/default/files/Demystifying%20Sped%20in%20Virtual%20Charter%20Schools%202008.pdf
Staker, H., & Horn, M. (2012). Classifying K-12 blended learning. San Mateo, CA: Clayton Christensen Institute for Disruptive Innovation.
Wagner, M., Newman, L., Cameto, R., & Levine, P. (2006). The academic achievement and functional performance of youth with disabilities: A report from the National Longitudinal Transition Study-2 (NLTS2) (NCSER 2006-3000). Menlo Park, CA: SRI International. Retrieved from http://ncser.ed.gov/pubs
Wang, Y., & Decker, J. R. (2014). Examining digital inequities in Ohio’s K-12 virtual schools: Implications for educational leaders and policymakers (Paper 19). Atlanta, GA: Georgia State University, Educational Policy Studies Faculty Publications. Retrieved from http://scholarworks.gsu.edu/eps_facpub/19/
Watson, J., Murin, A., Vashaw, L., Gemin, B., & Rapp, C. (2012). Keeping pace with K-12 online learning: An annual review of policy and practice. Evergreen Education Group. Retrieved from http://www.inacol.org/wp-content/uploads/2015/03/KeepingPace2012.pdf
Wigfield, A., & Guthrie, J. T. (1997). Relations of children’s motivation for reading to the amount and breadth of their reading. Journal of Educational Psychology, 89, 420–432. doi:10.1037/0022-0663.89.3.420
Woodworth, J. L., Raymond, M. E., Chirbas, K., Gonzalez, M., Negassi, Y., Snow, W., & Van Donge, C. (2015). Online charter school study. Stanford, CA: Center for Research on Education Outcomes. Retrieved from https://credo.stanford.edu/pdfs/Online%20Charter%20Study%20Final.pdf

February 22, 2017

Article Notice – Universal Design for Learning: Scanning for Alignment in K–12 Blended and Fully Online Learning Materials

As I indicated yesterday in the Journal of Special Education Technology – Special Issue: Emerging Practices in K-12 Online Learning: Implications for Students with Disabilities entry, I’m posting the article notices from this special issue this week.

In the process of evaluating online learning products for accessibility, researchers in the Center on Online Learning and Students with Disabilities concluded that consultation guides and assessment tools were most often useful in determining sensory accessibility but did not extend to critical aspects of learning within the Universal Design for Learning (UDL) framework. To help fill this void in assessment, researchers created the UDL Scan tool to examine online learning products’ alignment with the UDL framework. This article provides an overview of how accessibility has historically been measured and introduces the need to move beyond the traditional understanding of accessibility to a broader UDL-based lens. With this understanding, a UDL Scan tool was developed and validated to investigate the alignment of online learning content to UDL. This article presents the process of development and validation and discusses how the measurements provide critical benchmarks for educators and industry as they adopt new online learning systems.

Although blended and fully online K–12 learning opportunities have grown in popularity, investigations into the central component of online learning, the content itself, are limited. Although a number of online and blended learning models (Christensen, Horn, & Staker, 2013) alter the online learning experience for students, the constant design feature is that the significant majority (up to 90%) of K–12 online learning is delivered via prepackaged content and/or curriculum (Patrick, Kennedy, & Powell, 2013). Thus, unless districts or teachers invest time in designing learning experiences tailored to individual learners, students are learning from materials that were likely developed by an outside vendor. Learners in blended or fully online environments interact with these prepackaged materials throughout their entire instructional experience, often from initial instruction through assessment.

As highlighted in Smith and Basham (2014), the role of the teacher in K–12 online environments differs from that of a traditional brick-and-mortar teacher. Across the wide variety of K–12 online environments (e.g., fully online, blended, supplemental, personalized), the role of the teacher varies based on a number of factors associated with the learning environment. At a minimum, the learning environment comprises the learner; the adopted online system or systems; the physical environment (e.g., an active classroom, a computer lab with 100 other students, a desk at home, a kitchen table, a couch); and any other individuals within the environment (e.g., adults, other learners, caregivers). Importantly, depending on the online learning model, the adopted online system, and the expectations of the environment, research has indicated that the primary role of a traditional teacher as the instructor is often replaced by that of an online system (Rice & Carter, 2015a, 2015b).

Those outside K–12 online education often do not realize that school districts and classroom teachers typically do not develop their own lessons for many online environments (Rice & Carter, 2015a; Smith & Basham, 2014). The investment of time and resources required for a school district or classroom teacher to create online content is often simply prohibitive. The development of online curriculum and discipline-specific content places additional demands on resources that are often already overwhelmed. Instead, the materials are typically developed by and purchased from vendors who offer prepackaged learning products at a more reasonable cost.

These online products come in the form of digital lessons, activities, and resources, structuring the learning experience and directing what the student completes on a daily basis and across the entire course. The teacher is the instructor of record, but the vendor-based digital lesson and digital system drive the learning experience through specific lessons, activities, accompanying assessments, and the predetermined path for subsequent lesson completion (Basham, Stahl, Ortiz, Rice, & Smith, 2015; Rice & Carter, 2015a). In essence, the digital lesson/material offers the actual learning experience for many blended and fully online learners, and any teacher actions supplement this experience (Rice & Carter, 2015a, 2015b).

Although the changed role of the online teacher may be disturbing for some, it is not the primary focus of this article. Of course, the roles of both the teacher and the online system are dynamic, based on environmental factors as well as innovations in technology (e.g., machine learning, artificial intelligence, intelligent agents). Nonetheless, the transformation of the teacher’s role in the online environment provides impetus for further research in a number of areas. For instance, although K–12 online learning has received increased attention, with research examining student outcomes, the examination of the prepackaged online content, the primary element of the K–12 online learning experience, has not received adequate attention (Smith & Basham, 2014). Research on the effects of prepackaged digital materials on student learning, specifically for struggling learners and those with identified disabilities, is largely absent from current research efforts.

Through research conducted in the Center on Online Learning and Students with Disabilities (Center), this article highlights the review of K–12 digital learning curricula and content within online learning systems. The article begins with a brief overview of accessibility guidelines for digital materials. We then describe some of the limitations of using only these standards in determining the effectiveness of online learning curricula and associated content, especially for students with disabilities. Specifically, it is argued that using the Universal Design for Learning (UDL) framework as specified in the Every Student Succeeds Act (ESSA, 2015), along with current accessibility guidelines, provides a stronger basis for the review of online learning materials. Finally, the description, development, and validation of a tool used to measure the alignment of online learning systems to the UDL framework is presented. It is hoped this article will encourage further research and dialogue about the design and implementation of digitally driven K–12 learning environments for students with disabilities.

As K–12 online learning experiences have grown, so has the number of struggling students and their peers with disabilities who are enrolled in online learning (Basham, Smith, Greer, & Marino, 2013). The inclusion of these students in both blended and fully online courses has demanded reflection on, and reconsideration of, the appropriateness of the content and overall instruction. The recent policy scan presented in the Center’s publication, Equity Matters: Digital and Online Learning for Students with Disabilities, noted that only 36% of states guarantee that their K–12 online environments are accessible for students with disabilities (Basham et al., 2015). Moreover, the lack of required data, as well as data sharing, on students in these online environments makes it difficult to ascertain the impact of these prepackaged learning materials on student outcomes. Thus, the growth in numbers, combined with the lack of guaranteed accessibility, requires a determination of the accessibility and, more importantly, the usability and even learnability of these digital materials, lessons, activities, and assessments for all students.

Accessibility Standards and Guidelines

The rights of all users to access digital content actually predate the recent trend in K–12 blended and fully online learning. In the United States, the amended Section 508 (1998) of the Workforce Rehabilitation Act of 1973 enhances access to broadband (e.g., Internet, online learning) technology and services for individuals with disabilities. Additional standards have followed, including the accessibility guidelines of the World Wide Web Consortium’s (W3C) Web Accessibility Initiative (2014) and the International Digital Publishing Forum’s (IDPF) EPUB (2014) content publication standards. Outside the United States, the European Unified Approach for Accessible Lifelong Learning (EU4ALL, 2010) initiated the concept of accessible lifelong learning and the elimination of barriers to the interlinked worlds of education and work through the use of appropriate digital technologies.

Two definitions are frequently used to define web or digital accessibility: (1) accessibility means that people with disabilities can use the web, that is, they can perceive, understand, navigate, and interact with the web and contribute to it (World Wide Web Consortium, 2005); and (2) technology is accessible if it can be used as effectively by people with disabilities as by those without (Yesilada, Brajnik, Vigo, & Harper, 2012). These definitions, combined with the standards and guidelines, shape the current measures used by digital material developers and school district personnel to determine whether K–12 content is appropriate for those with disabilities.

The application of the accessibility standards has sought to promote accessible digital designs for materials and for navigation of the learning system. In an applied sense, for example, the standards promote design practices such as providing text equivalents or closed captioning for animation and video content, ensuring sufficient color contrast and appropriate font size, supplying transcripts of all audio and accompanying descriptions for any video, and conducting frequent accessibility testing during and after digital content development and overall course design (W3C, 2014). These features target alternate means of accessing the digital materials. The standards thus require an alternate format, for example, supporting an audio file with a complete transcript or closed captioning the audio portion of a video.

Focused on ensuring that the online K–12 marketplace met a minimal standard of accessibility, pioneers such as Rose (2007) wrote a report for the International Association for K–12 Online Learning (iNACOL) calling on developers and providers to meet basic accessibility standards. Concentrating primarily on sensory and physical accessibility, Rose grounded his report in the Office for Civil Rights’ (OCR) definition of accessibility, which extends the Section 508 guidelines to technology accommodations that allow access to educational opportunity in a timely manner. OCR clarified the specific legal requirements for digital curriculum, which apply to the K–12 blended and online classroom, by stating:

equal opportunity, equal treatment, and the obligation to make accommodations or modifications to avoid disability based discrimination—also apply to elementary and secondary schools under the general nondiscrimination provisions in Section 504 and the ADA. The application of these principles to elementary and secondary schools is also supported by the requirement to provide a free appropriate public education (FAPE) to students with disabilities. (OCR, 2011).

Although the OCR guidance document ensures that digital materials, delivery systems, and devices are accessible, its parameters of accessibility are restricted to sensory and physical considerations.

In an updated report for iNACOL, Rose (2014) again focused primarily on sensory and physical accessibility. While the report references UDL, the emphasis is on the accessibility portions of the UDL guidelines, with a foundational focus on Section 504 and 508 provisions for digital information and an added reference for access determinations to be based on the W3C’s Web Content Accessibility Guidelines. Recommendations, for example, suggest that OCR alignment entails closed captioning for animation and video products, tagging all graphics with corresponding text, carefully selecting and using color, and ensuring that all graphics have defined alt tags to allow for screen reader access. These approaches reinforce an accessibility evaluation process that targets the limited population of individuals who require these features or modifications.

To provide developers and educators with guidance in determining digital accessibility (especially alignment to Section 508 expectations), Hashey and Stahl (2014) introduced the Voluntary Product Accessibility Template (VPAT). Created to share specific product accessibility information with educators and other professionals seeking to acquire accessible digital materials, the VPAT examines devices, software, and digital materials to better determine how these materials can be used by those with visual impairments, hearing impairments, or limited mobility. The VPAT provides a thorough and detailed overview of the digital product and can make comparisons much easier for the user to understand and apply when making accessibility decisions.

Measuring Accessibility

As noted in Hashey and Stahl (2014), the VPAT table was created as a Center resource to offer a quick review of more than 70 products used in K–12 online learning (see http://centerononlinelearning.org/resources/vpat/). The Center’s review, Quick Guide to Accessible Products in Education, offers a visual reference to the extent to which each product is accessible. The interactive VPAT table available through the Center’s website offers educators, developers, and other interested parties an understanding (or at least a starting point) of how to determine whether a product is appropriate for the K–12 learner, especially those with sensory and physical disabilities. As a standard, however, the VPAT does not account for the majority of students with disabilities, who have cognitive, learning, attention, or behavioral needs.

Moving Beyond Traditional Accessibility

Traditional accessibility concentrates on multiple formats but not on alterations to the learning demands of the digital material. For example, providing an accessible digital text (e.g., online textbook, digital text-based lesson) often requires formatting the digital text to allow a text-to-speech application to automatically read the text for individuals with print impairments (e.g., someone who is blind and cannot see the text). Accessibility, in this instance, does not measure the potential of supports to encourage greater readability of text (Flesch, 1948; Mosenthal & Kirsch, 1998; Valencia, Wixson, & Pearson, 2014) or the ability to match digital text content to individual learners based on actual readability and other associated metadata (Denning, Pera, & Ng, 2016). Moreover, this traditional understanding of accessibility, beyond sensory accessibility, neglects to identify other critical elements for supporting overall learning and comprehension. For instance, these scans do not measure the potential for engagement (O’Brien & Toms, 2008), the ability to resize the amount of text per line (Schneps, Thomson, Chen, Sonnert, & Pomplun, 2013), supports for reducing the demands of content-specific vocabulary (Nagy & Townsend, 2012), or the use of multiple forms of media in digital content, such as interactive simulations (Schneps et al., 2014).
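To make the readability point concrete, the Flesch (1948) Reading Ease score mentioned above can be computed from just three counts: words, sentences, and syllables. Below is a minimal Python sketch; the syllable counter is a crude vowel-group heuristic assumed here purely for illustration, not part of Flesch’s formula, and real readability tools use far more careful text processing.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels (assumption for this sketch).
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch (1948) Reading Ease: higher scores indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Very simple text scores high (easy); dense academic text scores much lower.
print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
```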

Thus, traditional accessibility standards for digital materials address sensory and physical challenges but are limited in how they support the cognitive and learning barriers experienced by individuals with identified disabilities, along with their peers who may not have an identified disability but who struggle with reading, processing, memory, and similar cognitive demands associated with learning. Moreover, traditional notions of accessibility assume that an intermediary will interact with content to make it more usable and/or that a teacher will use the materials in such a way as to support the learning process. Unfortunately, as noted above, within many K–12 online learning environments teachers have very little control over the actual content, its delivery, and the associated instruction within the system. Thus, while accessibility standards are an important starting point for considering online content, current understandings and assumptions of accessibility fall short when considering the reality of K–12 online learning practice.

To ensure accessibility for learners, the cognitive accessibility and learnability of content within an associated tool should also be evaluated. Because most school districts purchase prepackaged curriculum and content from vendors, stakeholders in the purchase of those products need relevant guidelines and tools to evaluate those products for appropriateness and accessibility. While the Center’s VPAT evaluation of vendor-developed K–12 online learning products was effective in ensuring that the products conformed to basic accessibility guidelines and policies, this conformance alone falls short of assessing content and the associated systems for supporting the usability or learnability of content in the learning process. As a starting point for assessing this usability, Center researchers determined that an analysis of K–12 blended and fully online learning should evaluate content adherence to the UDL framework. To do so, Center researchers developed the UDL Scan tool to measure the alignment of online learning content and associated systems to the framework’s principles, guidelines, and checkpoints.

Enter a Broader Understanding of UDL

UDL is an instructional framework based on a substantial body of scientifically based research (e.g., Dalton, Proctor, Uccelli, Mo, & Snow, 2011; Kennedy, Thomas, Meyer, Alves, & Lloyd, 2014; Marino, 2009; Proctor et al., 2011; Rappolt-Schlichtmann et al., 2013). As a scientifically based framework, UDL works to support the variability of all learners by both proactively and iteratively designing learning with a focus on providing multiple means of engagement, representation of information, and action and expression of understanding. The framework is defined within the Higher Education Opportunity Act (HEOA, 2008):

… [UDL is] a scientifically valid framework for guiding educational practice that—(A) provides flexibility in the ways information—is presented, in the ways students respond or demonstrate knowledge and skills, and in the ways students are engaged; and (B) reduces barriers in instruction, provides appropriate accommodations, supports, and challenges, and maintains high achievement expectations for all students, including students with disabilities and students who are limited English proficient.

More recently, UDL was highlighted in the ESSA (2015) as well as the National Educational Technology Plan (NETP, 2016) as a basis for designing as well as implementing learning environments, systems, and assessments for all learners, especially learners with disabilities. Specifically, the language in ESSA indicates that districts should ensure that use of technology is not only accessible for all learners, but that systems also align to the UDL framework.

As highlighted in Rose (2014), UDL is often viewed only in terms of accessibility. In reality, the UDL framework provides a much broader perspective than accessibility alone. As highlighted by CAST (2011) in the UDL guidelines, the framework moves from ensuring basic accessibility to an advanced, even metacognitive, state of learning. This is evident when viewing the guidelines from either top to bottom or bottom to top (depending on the version of the guidelines). In the traditional print edition of the guidelines (with the principle of representation to the left), under Providing Multiple Means of Representation, the guidelines move from perception (sensory), to clarifying and decoding information (basic learning input), and finally to supports for comprehension and generalization of information (more advanced learning). Basham and Marino (2013) and Rappolt-Schlichtmann et al. (2013) discuss how UDL can be applied in perspectives broader than accessibility alone.

As highlighted in Basham and Marino (2013), UDL applies an engineering-based perspective to the way learning environments, curriculum, instruction, instructional tools, and assessment are both designed and utilized. Specifically, UDL-based instruction should consider the four critical elements of UDL instruction (UDL-IRN, 2011): clear goals, inclusive and intentional planning for variability, flexible methods and materials, and timely progress monitoring. These elements can be integrated into a five-step backward instructional design process:

  1. Establish clear outcomes
  2. Anticipate learner variability
  3. Design measurable outcomes and an assessment plan
  4. Design the instructional experience
  5. Evaluate and reflect on new understandings

To assist districts and teachers in implementing UDL, guidance must be provided on how associated instructional materials and systems support UDL-based instruction. If teachers understand the types of learner variability a specific product can account for, they can factor this understanding into the instructional design and implementation process. As a basic example, if a teacher knew that a product supported content understanding only in English and she had learners who primarily learned in Spanish, she would know to find a different product or take other measures to support representation of the content. Thus, any guidance on UDL alignment requires the measurement of UDL in product systems.

Unfortunately, there have been minimal attempts at measuring UDL as an entire framework. In fact, Basham and Gardner (2010) discussed the complexity of measuring UDL as a design framework, rather than as a specific strategy or practice that can be easily observed in the environment. They indicated that any tool attempting to measure UDL would have to be multifaceted and measure the proactive design as well as the implementation within an instructional environment. Given that online learning tools are often proactively designed and developed separately from instruction, measuring the design of these products for alignment to UDL is a necessary step in ensuring that districts and teachers know they are adopting tools that are not only accessible but that also support the implementation of UDL.

Researchers at the Center therefore undertook the development of a measurement tool that could investigate the alignment of online learning products to UDL. Specifically, this project sought to answer:

  1. Can a UDL Scan tool be developed and validated, one that adequately measures the alignment of an online instructional product or system to the UDL framework?
  2. Using a UDL Scan tool, what is the usability or feasibility of conducting product scans?

The development and validation of a tool to measure content and curriculum alignment to the UDL framework required a multiphase design involving (a) item generation, (b) pilot review, (c) content validation, and (d) an assessment of reliability and construct validity.

Development of the Tool

Initial development of the UDL Scan tool began with an analysis of the UDL principles, guidelines, and checkpoints as well as a review of existing rubrics and observation instruments. To create a tool appropriate for evaluating online learning products, the developers of the UDL Scan tool, themselves recognized experts in UDL, met with other experts, including senior personnel at CAST, to discuss the components that should comprise the tool. Based on these initial meetings, the developers crafted evaluation questions, organized around the UDL guidelines and checkpoints, to identify whether UDL-based features were present within a product (and to what degree). As the tool was revised and refined, the developers continued to seek feedback from the UDL experts to ensure that the tool comprehensively assessed the three primary principles, nine guidelines, and numerous checkpoints of the UDL framework. This process involved a thorough consideration of the purpose of each principle, guideline, and checkpoint, weighing the stated text, the intent of the text, examples of the text, and how each would be applied in the field, especially in the area of blended and fully online learning. From these examinations, items were developed to ensure correspondence to elements in the UDL framework (see Table 1).

Table 1. UDL Scan Tool Items in Correspondence to UDL Checkpoints.

Note. UDL = Universal Design for Learning.

The UDL Scan tool was created using Qualtrics software, Version 12.018 of the Qualtrics Research Suite (Qualtrics Labs, 2012). Using Qualtrics allowed the developers to make the evaluation tool accessible online and to employ skip logic in the survey for greater usability and ease of use. Essentially, depending on how primary questions were answered, follow-up questions were asked only when applicable. Because the UDL Scan tool was delivered online, evaluators could explore and test a product in a separate browser window while answering questions about it.

Once the initial questions were developed, thorough testing was conducted. To test the evaluation tool, the developers met as a group and practiced using the UDL Scan tool to evaluate two online learning products. The walkthrough allowed the developers to identify features that were not adequately assessed and to troubleshoot the tool during use. Revisions were made to ensure that the questions asked through the tool clearly and adequately assessed whether UDL features were available in the products being evaluated.

This extensive testing allowed the developers to more narrowly define the scope of the UDL Scan tool. Although the tool has potential value for evaluating isolated learning management systems (LMS) used to house content (e.g., Blackboard), the developers instead chose to focus the initial tool on products that provide instructional content.

To accompany the UDL Scan tool, developers created a training manual for users. The manual provides detailed information about the expectations of the reviewer and how to use the tool, descriptions and examples of UDL features, and a glossary of the terms that appear in the UDL tool. The manual serves as a guide for teaching evaluators how to use the tool and as a resource for evaluators.

The Instrument

The UDL Scan tool provides researchers and educators with a measurement tool for reviewing online content systems for their potential to support learner accessibility and variability. Each UDL guideline and checkpoint was mapped to specific features within a content system. Each item on the UDL Scan tool aligns with one of UDL’s three principles, one of its nine guidelines, and at least one checkpoint, and each is measured for every lesson evaluated.

The scan tool consists of 37 initial items with a total of 46 unique response items, including a measure of product usability. The tool intuitively branches users to the specific questions needed for a thorough evaluation of the materials being scanned. If the tool is completed in its entirety (all branching items), there are a total of 146 items.

The UDL Scan tool consists of multiple choice and Likert-type scale questions. Answers are submitted online. The scan tool begins with a series of questions designed to gather information about the evaluator, including the type of browser the evaluator is using to examine the product. The initial questions also gather information about the types of products and lessons being evaluated. The subsequent questions are broken into sections associated with the UDL principles, guidelines, and checkpoints. Each section begins with a multiple-choice question designed to determine whether a product has features that incorporate a specific UDL checkpoint. If the evaluator determines the product does include, or might include, features related to that checkpoint, the evaluator is provided with more specific questions to identify the degree to which those features are accessible. However, if the evaluator determines that the UDL checkpoint is not a part of the product, the scan tool is designed to move the evaluator to the next UDL checkpoint so as to avoid asking the evaluator irrelevant questions. The questions are designed to determine how frequently aspects of a feature are available and to pinpoint specifically which examples of a feature are accessible to the users. For example, one question asks the evaluator to indicate on a Likert-type scale how frequently the product illustrates content through videos, audio, and still images.
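As an illustration of this checkpoint-gated branching, the following minimal Python sketch models the skip logic described above. The two checkpoint names come from the CAST (2011) guidelines, but the screening prompt, follow-up items, and five-point scale are hypothetical stand-ins, not the actual Scan tool questions.

```python
# Hypothetical sketch of checkpoint-level skip logic; not the actual Scan tool.
CHECKPOINTS = {
    "1.1 Offer ways of customizing the display of information": [
        "How frequently can users resize text?",
        "How frequently can users change color contrast?",
    ],
    "2.1 Clarify vocabulary and symbols": [
        "How frequently are key terms linked to definitions?",
    ],
}

def run_scan() -> dict:
    responses = {}
    for checkpoint, follow_ups in CHECKPOINTS.items():
        present = input(f"Does the product include features for '{checkpoint}'? (y/n) ")
        if present.strip().lower() != "y":
            continue  # skip logic: jump straight to the next checkpoint
        for item in follow_ups:
            responses[item] = input(f"{item} (1=never ... 5=always) ")
    return responses

if __name__ == "__main__":
    print(run_scan())
```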

Along with assessing the extent of UDL features available within online learning products, the scan tool also is used to assess the usability of the product. Specifically, the UDL Scan tool includes a set of Likert-type scale questions adapted from the System Usability Scale (Brooke, 1996). These questions are designed to measure how easy, or complex, a specific product is to use.
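The article does not report how the adapted usability items are scored, but the standard System Usability Scale (Brooke, 1996) procedure converts ten 1–5 responses into a 0–100 composite: odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the raw sum is multiplied by 2.5. A minimal sketch, assuming that standard scoring:

```python
def sus_score(responses: list[int]) -> float:
    """Standard SUS scoring (Brooke, 1996) for ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd items positive, even negative
    return total * 2.5  # rescale the 0-40 raw sum to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # prints 85.0
```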

Procedures

To assess the interrater reliability of the Scan tool, three graduate research assistants evaluated three product systems. Ten lessons within each product system were randomly chosen and evaluated, for a total of 30 lessons across the three products. Prior to evaluating the product systems, the research assistants attended a training session to learn how to use the Scan tool. During this session, the trainer demonstrated how to access and use the tool and the online learning product systems being evaluated. The trainer also reviewed the training manual with the evaluators to ensure that the reviewers understood what was expected of them. During the training, the graduate research assistants were given an opportunity to ask questions, explore the online learning products, and practice using the Scan tool. They also received a copy of the training manual to serve as a resource while using the Scan tool.

Data Analysis

Having three different raters use the Scan tool across three product systems and 30 different lessons allowed for assessment of interrater reliability. The interrater reliability analysis measured whether the three graduate research assistants evaluated each of the lessons in the same way. Krippendorff’s α (Krippendorff, 2004) and Fleiss’s κ (Fleiss, 1971) were calculated to determine reliability among raters. When 100% agreement was achieved across all three raters for each of the 10 lessons in a system, Krippendorff’s α and Fleiss’s κ could not be calculated because there was no variation in the ratings. Because these cases of perfect agreement could not be included in calculating the mean and median reliability values, those values were biased toward less agreement; the degree of bias is not known and cannot be measured. Fleiss’s κ was calculated only when no rating was missing, which amounted to most of the cases in the initial system and some cases in the subsequent systems.
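For readers wishing to reproduce this style of analysis, both statistics have standard Python implementations. The sketch below uses the third-party krippendorff package and statsmodels, with a small hypothetical ratings matrix standing in for the actual Scan tool data.

```python
import numpy as np
import krippendorff  # pip install krippendorff
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical ratings: 3 raters (rows) x 10 lessons (columns), nominal categories 0-2.
ratings = np.array([
    [0, 1, 2, 0, 1, 2, 0, 1, 1, 2],
    [0, 1, 2, 0, 2, 2, 0, 1, 0, 2],
    [0, 1, 1, 0, 1, 2, 0, 1, 1, 2],
])

# Krippendorff's alpha takes a raters x units matrix and tolerates missing data.
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")

# Fleiss's kappa needs a subjects x categories count table and complete ratings.
table, _ = aggregate_raters(ratings.T)  # transpose to subjects x raters
kappa = fleiss_kappa(table, method="fleiss")

print(f"Krippendorff's alpha = {alpha:.2f}, Fleiss's kappa = {kappa:.2f}")
```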

Interrater Reliability

In general, interrater reliability was supported in each of the three systems: a high percentage of agreement (66–71% in total, median = 90–100%) was observed, and Fleiss’s κ was greater than .20 (mean = .26–.40, median = .28–.40), suggesting fair agreement among the three raters (see Table 2). When only the initial items were examined, the total percentage of agreement increased slightly, but the κ values dropped while remaining fair in Systems 1 and 2. However, Krippendorff’s α was very low in all three systems (e.g., negative mean and median α values), which would suggest that disagreement between the three raters was systematic and, therefore, greater than what can generally be expected by chance (Krippendorff, 2004). In reality, the disagreement was not systematic but rather reflected the small number of digital lessons evaluated and the overlap of design principles characteristic of these materials.

Table 2. Interrater Reliability Within and Across All Three Product Systems.

a. Thirty-seven initial items that were always rated regardless of system.

Discussion

Across the country, online learning is growing at a rapid pace, and a majority of the instructional products and tools used in these environments are purchased prepackaged from vendors (Smith & Basham, 2014). The current focus on the traditional understanding of accessibility is critical to ensuring that learners with sensory and physical disabilities have basic access to these digital learning materials. Regrettably, while basic accessibility remains a need, the current understanding of accessibility does little to support actual learning, especially with respect to considerations such as cognitive accessibility. Within the United States, recent legislation (specifically ESSA, 2015) supports the need for districts to consider the implementation of UDL in the way they design and implement instruction. Recognizing this need, this project sought to develop and test a tool for measuring a digital instructional product’s alignment to the UDL framework.

The outcomes of the project indicate that the UDL Scan tool was successful in measuring UDL within digital instructional products. Moreover, the tool was developed in partnership with CAST, the founders of UDL. Thus, as an initial measurement tool, the UDL Scan tool demonstrates potential for measuring UDL alignment in digital instructional products. Potential next steps include releasing the tool for wider consumption and use. Specifically, providing districts access to the tool would allow them to evaluate products during the acquisition process. As a result, it is hoped that districts will make more informed decisions in the procurement of digital instructional products. Product developers may also use such a tool to support the design, and eventual self-reporting, of a product’s alignment with UDL. Optimistically, supporting UDL alignment will move the field from an understanding and acceptance of basic accessibility toward a more advanced consideration of building products and systems with a focus on all learners.

Implications for Practice

Through the use of the UDL Scan tool, teachers have the potential to develop a more nuanced understanding of learner variability and of how the tools associated with instructional practices may adequately support this variability. Moving beyond basic understandings of accessibility, teachers can also take a larger role in ensuring that all learners are actively engaging and demonstrating the desired outcomes in the learning process. Specifically, teachers can make better informed decisions about how to design, implement, and test learning experiences that meet the needs of individual learners. From a teacher development perspective, this would require teachers to more fully understand the conceptual, practical, and testable underpinnings of UDL and the instructional design process, thus enhancing their ability to take on the mind-set and operational stance of a learning engineer (Basham & Marino, 2013).

Implications for Future Research

The development and validation of the UDL Scan tool advances the field’s ability to more adequately assess and research the UDL framework. Since this initial study, the UDL Scan tool has been used within the Center to measure alignment on more than 1,000 individual pieces of content. Using the tool, researchers have measured the alignment of certain popular blended and fully online content to the UDL framework (Smith, 2016). The goal was to understand whether vendor-created K–12 online lessons were both accessible and appropriate for all students, especially those with disabilities. Finally, a next step would be to measure the instructional experience within these products. While a product may have alignment (or lack thereof) to UDL, there is also a need to measure how a product provides actual instruction. Such an addition to the UDL Scan tool would allow users to evaluate whether online instructional systems (e.g., K12, Khan Academy) align to evidence- and/or research-based instructional practices.

Limitations

This study sought to develop and test a UDL Scan tool for measuring the basic alignment of a digital instructional product to UDL. The Scan tool was tested with digital products that provide students with instructional materials rather than with an LMS (e.g., Blackboard). Thus, the UDL Scan tool was not designed or tested to measure an LMS or content management system (e.g., WordPress) without embedded content or a designed instructional sequence. The tool also was not designed to measure a brick-and-mortar instructional lesson. Users are cautioned against attempting to measure the alignment of any instructional experience beyond the intended use of the tool. Finally, although the UDL Scan tool has demonstrated consistent findings across further scans, this initial study used only 30 instructional lessons; thus, given the small number of lessons, some caution is warranted when interpreting the mean and median α and κ values.

The ability to move the field of K–12 online learning beyond basic understandings of accessibility to a more advanced understanding of UDL will support better online learning materials for all learners, especially those learners with disabilities. As the K–12 education system moves increasingly online, it becomes more dependent on the educational technology industry to support the design of digital instructional materials and experiences. Thus, it is important for educators, researchers, and the industry to develop a shared understanding of, as well as expectations for, these online materials and systems. The UDL framework provides a foundational structure for developing this shared understanding, and using a tool such as the UDL Scan tool provides initial support for this cooperative effort.

Authors’ Note The content does not necessarily represent the policy of the U.S. Department of Education, and you should not assume endorsement by the Federal Government. Project Officer, Celia Rosenquist.

Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The contents of this paper were developed under a grant from the U.S. Department of Education [#H327U110011].

References

Basham J., Gardner J. (2010). Measuring universal design for learning. Special Education Technology Practice, 12, 15–19.
Basham J. D., Marino M. T. (2013). Understanding STEM education and supporting students through universal design for learning. Teaching Exceptional Children, 45, 8. doi:10.1177/004005991304500401
Basham J. D., Smith S. J., Greer D. L., Marino M. T. (2013). The scaled arrival of K–12 online education: Emerging realities and implications for the future of education. Journal of Education, 193, 51–59.
Basham J. D., Stahl S., Ortiz K., Rice M. F., Smith S. (2015). Equity matters: Digital & online learning for students with disabilities. Lawrence, KS: Center on Online Learning and Students with Disabilities.
Brooke J. (1996). SUS: A “quick and dirty” usability scale. In Jordan P. W., Thomas B., Weerdmeester B. A., McClellan A. L. (Eds.), Usability evaluation in industry (pp. 189–194). London, England: Taylor and Francis.
CAST. (2011). Universal design for learning guidelines version 2.0. Wakefield, MA: Author.
Christensen C. M., Horn M. B., Staker H. (2013). Is K–12 blended-learning disruptive? An introduction to the theory of hybrids. Lexington, MA: Clayton Christensen Institute for Disruptive Innovation. Retrieved August 5, 2013, from http://www.christenseninstitute.org/
Dalton B., Proctor C. P., Uccelli P., Mo E., Snow C. E. (2011). Designing for diversity: The role of reading strategies and interactive vocabulary in a digital reading environment for fifth-grade monolingual English and bilingual students. Journal of Literacy Research, 43, 68–100. doi:10.1177/1086296X10397872
Denning J., Pera M. S., Ng Y. K. (2016). A readability level prediction tool for K–12 books. Journal of the Association for Information Science and Technology, 67, 550–565. doi:10.1002/asi.23417
European Commission. (2010). European unified approach for accessible lifelong learning (EU4ALL). Retrieved February 10, 2016, from http://cordis.europa.eu/project/rcn/80191_en.html
Every Student Succeeds Act of 2015, Pub. L. No. 114-95, § 4104 (2015).
Fleiss J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76, 378–382.
Flesch R. (1948). A new readability yardstick. Journal of Applied Psychology, 32, 221.
Hashey A. I., Stahl S. (2014). Making online learning accessible for students with disabilities. Teaching Exceptional Children, 46, 70–78. doi:10.1177/0040059914528329
Higher Education Opportunity Act (Public Law 110-315). (2008). Retrieved April 20, from http://www2.ed.gov/policy/highered/leg/hea08/index.html
International Digital Publishing Forum. (2014). EPUB standard (v. 3.0.1). Retrieved February 10, 2016, from http://idpf.org/epub/301
Kennedy M. J., Thomas C. N., Meyer J. P., Alves K. D., Lloyd J. W. (2014). Using evidence-based multimedia to improve vocabulary performance of adolescents with LD: A UDL approach. Learning Disability Quarterly, 37, 71–86. doi:10.1177/0731948713507262
Krippendorff K. (2004). Content analysis: An introduction to its methodology (2nd ed.). Beverly Hills, CA: Sage.
Marino M. T. (2009). Understanding how adolescents with reading difficulties utilize technology-based tools. Exceptionality, 17(2), 88–102. doi:10.1080/09362830902805848
Mosenthal P. B., Kirsch I. S. (1998). A new measure for assessing document complexity: The PMOSE/IKIRSCH document readability formula. Journal of Adolescent & Adult Literacy, 41, 638–657.
Nagy W., Townsend D. (2012). Words as tools: Learning academic vocabulary as language acquisition. Reading Research Quarterly, 47, 91–108. doi:10.1002/RRQ.011
National Educational Technology Plan. (2016). Future ready learning: Reimagining the role of technology in education. Office of Educational Technology, U.S. Department of Education. Retrieved January 4, 2016, from http://tech.ed.gov/files/2015/12/NETP16.pdf
O’Brien H. L., Toms E. G. (2008). What is user engagement? A conceptual framework for defining user engagement with technology. Journal of the American Society for Information Science and Technology, 59, 938–955. doi:10.1002/asi.20801
Office for Civil Rights. (2011). Frequently asked questions about the June 29, 2010 Dear Colleague Letter (DCL). Retrieved February 10, 2016, from http://www2.ed.gov/about/offices/list/ocr/docs/dcl-ebook-faq-201105_pg3.html
Patrick S., Kennedy K., Powell A. (2013). Mean what you say: Defining and integrating personalized, blended and competency education. Vienna, VA: International Association for K–12 Online Learning. Retrieved June 8, 2014, from http://www.inacol.org/resource/mean-what-you-say-defining-and-integrating-personalized-blended-and-competency-education/
Qualtrics Labs, Inc. (2012). Qualtrics [software] (Version 12.018). Provo, UT: Author.
Rappolt-Schlichtmann G., Daley S. G., Lim S., Lapinski S., Robinson K. H., Johnson M. (2013). Universal Design for Learning and elementary school science: Exploring the efficacy, use, and perceptions of a web-based science notebook. Journal of Educational Psychology, 105, 1210. doi:10.1037/a0033217
Rice M. F., Carter R. A., Jr. (2015a). “When we talk about compliance, it’s because we lived it”: Online educators’ roles in supporting students with disabilities. Online Learning Journal, 19, 18–36.
Rice M. F., Carter R. A., Jr. (2015b). With new eyes: Online teachers’ sacred stories of students with disabilities. In Rice M. F. (Ed.), Exploring pedagogies for diverse learners online (pp. 209–230). Bingley, UK: Emerald Group.
Rose R. (2007). Access and equity in online classes and virtual schools. Vienna, VA: International Association for K–12 Online Learning (iNACOL).
Rose R. (2014). Access and equity for all learners in blended and online education. Vienna, VA: International Association for K–12 Online Learning (iNACOL).
Schneps M. H., Ruel J., Sonnert G., Dussault M., Griffin M., Sadler P. M. (2014). Conceptualizing astronomical scale: Virtual simulations on handheld tablet computers reverse misconceptions. Computers & Education, 70, 269–280. doi:10.1016/j.compedu.2013.09.001
Schneps M. H., Thomson J. M., Chen C., Sonnert G., Pomplun M. (2013). E-readers are more effective than paper for some with dyslexia. PLoS ONE, 8, e75634. doi:10.1371/journal.pone.0075634
Section 508 of the Rehabilitation Act of 1973 (1998, amended). 29 U.S.C. § 794(d). Retrieved February 10, 2016, from http://www.section508.gov/content/learn/laws-and-policies
Smith S. (2016). Invited in: Measuring UDL in online learning. Lawrence, KS: Center on Online Learning and Students with Disabilities. Retrieved February 12, 2016, from http://centerononlinelearning.org/wp-content/uploads/udl-scan-full-report.pdf
Smith S. J., Basham J. D. (2014). Designing online learning opportunities for students with disabilities. Teaching Exceptional Children, 46, 127. doi:10.1177/0040059914530102
UDL-IRN. (2011). Critical elements of UDL in instruction (Version 1.1). Lawrence, KS: Author.
Valencia S. W., Wixson K. K., Pearson P. D. (2014). Putting text complexity in context. The Elementary School Journal, 115, 270–289. doi:10.1086/678296
World Wide Web Consortium. (2005). Retrieved from https://www.w3.org/WAI/intro/accessibility.php
World Wide Web Consortium. (2014). Web accessibility initiative. Retrieved February 10, 2016, from https://www.w3.org/WAI/guid-tech.html
Yesilada Y., Brajnik G., Vigo M., Harper S. (2012, April). Understanding web accessibility and its drivers. In Proceedings of the international cross-disciplinary conference on web accessibility (pp. 19–28). Lyon, France: ACM Press.