Virtual School Meanderings

February 22, 2017

Article Notice – Universal Design for Learning: Scanning for Alignment in K–12 Blended and Fully Online Learning Materials

As I indicated yesterday in the Journal of Special Education Technology – Special Issue: Emerging Practices in K-12 Online Learning: Implications for Students with Disabilities entry, I’m posting the article notices from this special issue this week.

In the process of evaluating online learning products for accessibility, researchers in the Center on Online Learning and Students with Disabilities concluded that consultation guides and assessment tools were most often useful in determining sensory accessibility but did not extend to critical aspects of learning within the Universal Design for Learning (UDL) framework. To help fill this void in assessment, researchers created the UDL Scan tool to examine online learning products’ alignment to the UDL framework. This article provides an overview of how accessibility has historically been measured and introduces the need to move beyond the traditional understanding of accessibility to a broader UDL-based lens. With this understanding, a UDL Scan tool was developed and validated to investigate the alignment of online learning content to UDL. This article presents the development and validation process and discusses how the measurements provide critical benchmarks for educators and industry as they adopt new online learning systems.

Although blended and fully online K–12 learning opportunities have grown in popularity, investigations into the central component of online learning are limited. While a number of online and blended learning models (Christensen, Horn, & Staker, 2013) alter the online learning experience for students, the constant design feature is that the significant majority (up to 90%) of K–12 online learning is delivered via prepackaged content and/or curriculum (Patrick, Kennedy, & Powell, 2013). Thus, unless districts or teachers invest time in designing learning experiences tailored to individual learners, students are learning from materials that were likely developed by an outside vendor. Learners in blended or fully online environments interact with these prepackaged materials throughout their entire instructional experience, often from initial instruction through assessment.

As highlighted in Smith and Basham (2014), the role of the teacher in K–12 online environments differs from that of a traditional brick-and-mortar teacher. Across the wide variety of K–12 online environments (e.g., fully online, blended, supplemental, personalized), the role of the teacher varies based on a number of factors associated with the learning environment. At a minimum, the learning environment comprises the learner; the adopted online system or systems; the physical environment (e.g., an active classroom, a computer lab with 100 other students, a desk at home, a kitchen table, a couch at home); and any other individuals within the environment (e.g., adults, other learners, caregivers). Importantly, depending on the online learning model, the adopted online system, and the expectations of the environment, research has indicated that the primary role of a traditional teacher as the instructor is often replaced by that of an online system (Rice & Carter, 2015a, 2015b).

What those outside K–12 online education often do not realize is that school districts and classroom teachers typically do not develop their own lessons for many online environments (Rice & Carter, 2015a; Smith & Basham, 2014). The investment of time and resources required of school districts or classroom teachers to create online content is often simply prohibitive. The development of online curriculum and discipline-specific content places additional demands on resources that are often already overwhelmed. Instead, the materials are typically developed by and purchased from vendors who offer prepackaged learning products at a more reasonable cost.

These online products come in the form of digital lessons, activities, and resources, structuring the learning experience and directing what the student completes on a daily basis and across the entire course. The teacher is the instructor of record, but the vendor-based digital lesson and digital system drive the learning experience through specific lessons, activities, accompanying assessments, and the predetermined path for subsequent lesson completion (Basham, Stahl, Ortiz, Rice, & Smith, 2015; Rice & Carter, 2015a). In essence, the digital lesson/material offers the actual learning experience for many blended and fully online learners, and any teacher actions supplement this experience (Rice & Carter, 2015a, 2015b).

Although the role of the online teacher may be disturbing for some, it is not the primary focus of this article. Of course, the roles of both the teacher and the online system are dynamic, based on environmental factors as well as innovations in technology (e.g., machine learning, artificial intelligence, intelligent agents). Nonetheless, the transformation of the teacher’s role in the online environment lends credence to the need for further research in a number of areas. For instance, although K–12 online learning has received increased attention, with research examining student outcomes, the examination of the prepackaged online content—the primary element of the K–12 online learning experience—has not received adequate attention (Smith & Basham, 2014). Research on the effects of prepackaged digital materials on student learning, specifically for struggling learners and those with identified disabilities, is not represented in current research efforts.

Through research conducted in the Center on Online Learning and Students with Disabilities (Center), this article highlights the review of K–12 digital learning curricula and content within online learning systems. The article begins with a brief overview of accessibility guidelines for digital materials. We then describe some of the limitations of using only these standards to determine the effectiveness of online learning curricula and associated content, especially for students with disabilities. Specifically, it is argued that using the Universal Design for Learning (UDL) framework as specified in the Every Student Succeeds Act (ESSA, 2015), along with current accessibility guidelines, provides a stronger basis for the review of online learning materials. Finally, the description, development, and validation of a tool used to measure the alignment of online learning systems to the UDL framework are presented. It is hoped this article will encourage further research and dialogue about the design and implementation of digitally driven K–12 learning environments for students with disabilities.

As growth in K–12 online learning experiences has increased, so has the number of struggling students and their peers with disabilities who are enrolled in online learning (Basham, Smith, Greer, & Marino, 2013). The inclusion of these students in both blended and fully online courses has demanded reflection and reconsideration of the appropriateness of the content and overall instruction. The recent policy scan presented in the Center’s publication, Equity Matters: Digital and Online Learning for Students with Disabilities, noted that only 36% of states guarantee that their K–12 environments are accessible for students with disabilities (Basham et al., 2015). Moreover, the lack of required data, as well as data sharing, on students in these online environments makes it difficult to ascertain the impact of these prepackaged learning materials on student outcomes. Thus, the growth in numbers, combined with the lack of guaranteed accessibility, requires a determination of the accessibility and, more importantly, the usability and even learnability of these digital materials, lessons, activities, and assessments for all students.

Accessibility Standards and Guidelines

The rights of all users to access digital content actually predate the recent trend in K–12 blended and fully online learning. In the United States, the amended Section 508 (1998) of the Workforce Rehabilitation Act of 1973 enhances access to broadband (e.g., Internet, online learning) technology and services for individuals with disabilities. Additional standards have followed, including the accessibility guidelines from the World Wide Web Consortium’s (W3C) Web Accessibility Initiative (2014) and the International Digital Publishing Forum’s (IDPF) EPUB (2014) content publication standards. Outside the United States, the European Unified Approach for Accessible Lifelong Learning (EU4ALL, 2010) initiated the concept of accessible lifelong learning and the elimination of barriers to the interlinked worlds of education and work through the use of appropriate digital technologies.

There are two primary definitions frequently used to define web or digital accessibility: (1) accessibility means that people with disabilities can use the web, in that they can perceive, understand, navigate, and interact with the web and contribute to it (World Wide Web Consortium, 2005); and (2) technology is accessible if it can be used as effectively by people with disabilities as by those without (Yesilada, Brajnik, Vigo, & Harper, 2012). These definitions, combined with the standards and guidelines, shape the current measures used by digital material developers and school district personnel to determine whether K–12 content is appropriate for those with disabilities.

The application of the accessibility standards has sought to promote accessible digital designs for materials and navigation of the learning system. In an applied sense, for example, the standards promote design practices that include providing text equivalents or closed captioning for animation and video content, appropriate color contrast and font size, transcripts of all audio and accompanying descriptions for any video, and frequent accessibility testing during and after digital content development and overall course design (W3C, 2014). These features target alternate means of accessing the digital materials. The standards then require an alternate format, for example, supporting an audio file with a complete transcript or closed captioning the audio portion of a video.

Focused on ensuring the online K–12 marketplace met a minimal standard of accessibility, pioneers such as Rose (2007) wrote a report for the International Association for K–12 Online Learning (iNACOL) calling on developers and providers to meet basic accessibility standards. With a primary focus on sensory and physical accessibility, Rose centered his report on the Office of Civil Rights’ (OCR) definition of accessibility, which extends the Section 508 guidelines to technology accommodations that allow students to access educational opportunities, and to do so in a timely manner. OCR clarified the specific legal requirements for digital curriculum, which apply to the K–12 blended and online classroom, by stating:

equal opportunity, equal treatment, and the obligation to make accommodations or modifications to avoid disability based discrimination—also apply to elementary and secondary schools under the general nondiscrimination provisions in Section 504 and the ADA. The application of these principles to elementary and secondary schools is also supported by the requirement to provide a free appropriate public education (FAPE) to students with disabilities. (OCR, 2011).

Although the OCR guidance document ensures that digital materials, delivery systems, and devices are accessible, the parameters of accessibility are restricted to sensory and physical considerations.

In an updated report for iNACOL, Rose (2014) again focused primarily on sensory and physical accessibility. While the report makes reference to UDL, the emphasis is on the accessibility portions of the UDL guidelines, with a foundational focus on Section 504 and 508 provisions for digital information and an added reference for access determinations to be based on the W3C’s Web Content Accessibility Guidelines. Its recommendations, for example, suggest that OCR alignment entails closed captioning for animation and video products, tagging all graphics with corresponding text, carefully selecting and using color, and ensuring that all graphics have defined alt tags to allow for screen reader access. These approaches reinforce an accessibility evaluation process targeting the limited population of individuals who require these features or modifications.

To provide developers and educators with guidance in determining digital accessibility (especially alignment to Section 508 expectations), Hashey and Stahl (2014) introduced the Voluntary Product Accessibility Template (VPAT). Created to share specific product accessibility information with educators and other professionals seeking to acquire accessible digital materials, the VPAT examines devices, software, and digital materials to better determine how these materials can be used by those with visual impairments, hearing impairments, or limited mobility. The VPAT provides a thorough and detailed overview of the digital product and can make comparisons much easier for the user to understand and apply when making accessibility decisions.

Measuring Accessibility

As noted in Hashey and Stahl (2014), the Center created a VPAT table as a resource offering a quick review of more than 70 products used in K–12 online learning (see http://centerononlinelearning.org/resources/vpat/). The Center’s review, Quick Guide to Accessible Products in Education, offers a visual reference to the extent to which each product is accessible. The interactive VPAT table available through the Center’s website offers educators, developers, and other interested parties an understanding (or at least a starting point) of how to determine whether a product is appropriate for the K–12 learner, especially those with sensory and physical disabilities. As a standard, however, the VPAT does not account for the majority of students with disabilities who have cognitive, learning, attention, or behavioral needs.

Moving Beyond Traditional Accessibility

Traditional accessibility concentrates on multiple formats but not on alterations to the learning demands of the digital material. For example, providing an accessible digital text (e.g., online textbook, digital text-based lesson) often requires formatting the digital text to allow a text-to-speech application to automatically read the text for individuals with print impairments (e.g., someone who is blind and cannot see the text). Accessibility, in this instance, does not measure the potential of supports to encourage greater readability of text (Flesch, 1948; Mosenthal & Kirsch, 1998; Valencia, Wixson, & Pearson, 2014) or the ability to match digital text content to individual learners based on actual readability and other associated metadata (Denning, Pera, & Ng, 2016). Moreover, this traditional understanding of accessibility, beyond sensory accessibility, neglects to identify other critical elements for supporting overall learning and comprehension. For instance, these scans do not measure the potential for engagement (O’Brien & Toms, 2008), the ability to resize the amount of text in lines (Schneps, Thomson, Chen, Sonnert, & Pomplun, 2013), supports for reducing the demands of content-specific vocabulary (Nagy & Townsend, 2012), or the use of multiple forms of media in digital content such as interactive simulations (Schneps et al., 2014).
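To make the readability point concrete, the Flesch (1948) reading-ease score cited above is computed from surface features of a text. Below is a minimal Python sketch of that formula, offered only as illustration and not as part of the UDL Scan tool; the syllable counter is a rough vowel-group heuristic (an assumption here), whereas production readability tools rely on pronunciation dictionaries.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels.
    A crude heuristic; real readability tools use pronunciation data."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch (1948) reading ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / max(1, len(sentences)))
            - 84.6 * (syllables / max(1, len(words))))

print(round(flesch_reading_ease("The cat sat on the mat. It was a sunny day."), 1))
```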

Thus, traditional accessibility standards for digital materials address sensory and physical challenges but are limited in how they support the cognitive and learning barriers experienced by individuals with identified disabilities, along with their peers who may not have an identified disability but who struggle with reading, processing, memory, and similar cognitive demands associated with learning. Moreover, traditional notions of accessibility assume an intermediary will interact with content to make it more usable and/or that a teacher will use the materials in a way that supports the learning process. Unfortunately, as noted earlier, within many K–12 online learning environments teachers have very little control over the actual content, its delivery, and associated instruction within the system. Thus, while accessibility standards are an important starting point for considering online content, the current understandings and assumptions of accessibility fall short when considering the reality of K–12 online learning practice.

In order to ensure accessibility for learners, the cognitive accessibility and learnability of content within an associated tool should also be evaluated. Because most school districts are purchasing prepackaged curriculum and content from vendors, stakeholders in the purchase of those products need relevant guidelines and tools to evaluate those products for appropriateness and accessibility for learners. While the Center’s VPAT evaluation of vendor-developed K–12 online learning products was effective in ensuring that the products conformed to basic accessibility guidelines and policies, this conformance alone falls short of assessing content and the associated systems for supporting usability or learnability of content in the learning process. As a starting point for assessing this usability, Center researchers determined that an analysis of K–12 blended and fully online learning should employ an evaluation of content adherence to the UDL framework. To measure accessibility using the UDL framework, Center researchers developed the UDL Scan tool to measure alignment of online learning content and associated systems to the framework’s principles, guidelines, and checkpoints.

Enter a Broader Understanding of UDL

UDL is an instructional framework based on a substantial body of scientifically based research (e.g., Dalton, Proctor, Uccelli, Mo, & Snow, 2011; Kennedy, Thomas, Meyer, Alves, & Lloyd, 2014; Marino, 2009; Proctor et al., 2011; Rappolt-Schlichtmann et al., 2013). As a scientifically based framework, UDL works to support the variability of all learners by both proactively and iteratively designing learning with a focus on the integration of multiple means of engagement, representation of information, and action and expression of understanding. The framework is defined within the Higher Education Opportunity Act (HEOA, 2008):

… [UDL is] a scientifically valid framework for guiding educational practice that—(A) provides flexibility in the ways information is presented, in the ways students respond or demonstrate knowledge and skills, and in the ways students are engaged; and (B) reduces barriers in instruction, provides appropriate accommodations, supports, and challenges, and maintains high achievement expectations for all students, including students with disabilities and students who are limited English proficient.

More recently, UDL was highlighted in the ESSA (2015) as well as the National Educational Technology Plan (NETP, 2016) as a basis for designing as well as implementing learning environments, systems, and assessments for all learners, especially learners with disabilities. Specifically, the language in ESSA indicates that districts should ensure that use of technology is not only accessible for all learners, but that systems also align to the UDL framework.

As highlighted in Rose (2014), UDL is often viewed only in terms of accessibility. In reality, the UDL framework provides a much broader perspective than accessibility alone. As highlighted by CAST (2011) in the UDL guidelines, the framework moves from ensuring basic accessibility to an advanced, even metacognitive, state of learning. This is evident when viewing the guidelines from either top to bottom or bottom to top (depending on the version of the guidelines). In the traditional print edition of the guidelines (with the principle of representation to the left), under Providing Multiple Means for Representation, the guidelines move from perception (sensory), to clarifying and decoding information (basic learning input), and finally to supports for comprehension and generalization of information (more advanced learning). Basham and Marino (2013) and Rappolt-Schlichtmann et al. (2013) discuss how UDL can be applied in broader perspectives than simply accessibility.

As highlighted in Basham and Marino (2013), UDL applies an engineering-based perspective to the way learning environments, curriculum, instruction, instructional tools, and assessment are both designed and utilized. Specifically, UDL-based instruction should consider the four critical elements of UDL instruction (UDL-IRN, 2011): clear goals, inclusive and intentional planning for variability, flexible methods and materials, and timely progress monitoring. These elements can be integrated into a five-step backward instructional design process:

  1. Establish clear outcomes.
  2. Anticipate learner variability.
  3. Design measurable outcomes and an assessment plan.
  4. Design the instructional experience.
  5. Evaluate and reflect on new understandings.

To assist districts and teachers in the implementation of UDL, guidance must be provided on how associated instructional materials and systems support UDL-based instruction. If teachers understood the type of learner variability that specific products could account for, they could then bring this understanding into the instructional design and implementation process. As a basic example, if a teacher knew a product only supported content understanding in English and she had learners who primarily learned in Spanish, she would know there was a need to find a different product or take other measures to support representation of content. Thus, any guidance on UDL alignment would require the measurement of UDL in product systems.

Unfortunately, there have been minimal attempts at measuring UDL as an entire framework. In fact, Basham and Gardner (2010) discussed the complexity of measuring UDL as a design framework, rather than as a specific strategy or practice that can be easily observed in the environment. They indicated that any tool attempting to measure UDL would have to be multifaceted and measure the proactive design as well as the implementation within an instructional environment. Given that online learning tools are often proactively designed and developed separately from instruction, measuring the design of these products for alignment to UDL is a necessary step in ensuring that districts and teachers know they are adopting tools that are not only accessible but that also support the implementation of UDL.

Researchers at the Center therefore undertook a project to develop a measurement tool that could investigate online learning products’ alignment to UDL. Specifically, this project sought to answer:

  1. Can a UDL Scan tool be developed and validated, one that adequately measures the alignment of an online instructional product or system to the UDL framework?
  2. Using a UDL Scan tool, what is the usability or feasibility of conducting product scans?

The development and validation of a tool to measure content and curriculum alignment to the UDL framework required a multiphase design involving (a) item generation, (b) pilot review, (c) content validation, and (d) an assessment of reliability and construct validity.

Development of the Tool

Initial development of the UDL Scan tool began with an analysis of the UDL principles, guidelines, and checkpoints as well as a review of existing rubrics and observation instruments. To create a tool appropriate for evaluating online learning products, the developers of the UDL Scan tool, who are known experts in UDL, met with other experts, including senior personnel at CAST, to discuss the components that should comprise the tool. Based on these initial meetings, the developers crafted evaluation questions, organized around the UDL guidelines and checkpoints, to identify whether UDL-based features were present within a product (and to what degree). As the tool was revised and refined, the developers continued to seek feedback from the UDL experts to ensure that the tool comprehensively assessed the three primary principles, nine guidelines, and numerous checkpoints of the UDL framework. This process involved a thorough consideration of the purpose of each of the principles, guidelines, and checkpoints, considering the stated text, the intent of the text, examples of the text, and how each would be applied in the field, especially in the area of blended and fully online learning. From these examinations, items were developed to ensure correspondence to elements in the UDL framework (see Table 1).

Table 1. UDL Scan Tool Items in Correspondence to UDL Checkpoints.

Note. UDL = Universal Design for Learning.

The UDL Scan tool was created using Qualtrics Labs, Inc. software, Version 12.018 of the Qualtrics Research Suite (Qualtrics Labs, 2012). Using the Qualtrics software allowed the developers to make the evaluation tool accessible to users online. It also allowed the developers to employ skip logic in the survey to ensure greater usability and ease of use. Essentially, depending on how primary questions were answered, follow-up questions would only be asked as applicable. Because the UDL Scan tool was delivered online, evaluators could explore and test a product while answering questions about it in a separate browser window.

Once the initial questions were developed, thorough testing was conducted. To test the evaluation tool, the developers met as a group to practice using the UDL Scan tool to evaluate two online learning products. The walkthrough allowed the developers to identify features that were not adequately assessed and to troubleshoot the tool during use. Revisions were made to ensure that the questions asked through the tool both clearly and adequately assessed whether UDL features were available in the products being evaluated during testing.

This extensive testing allowed the developers to more narrowly define the scope of the UDL Scan tool. Although the tool has potential value for evaluating isolated learning management systems (LMS) used to house content (e.g., Blackboard), the developers instead chose to focus the initial tool on products that provide instructional content.

To accompany the UDL Scan tool, developers created a training manual for users. The manual provides detailed information about the expectations of the reviewer and how to use the tool, descriptions and examples of UDL features, and a glossary of the terms that appear in the UDL tool. The manual serves as a guide for teaching evaluators how to use the tool and as a resource for evaluators.

The Instrument

The UDL Scan tool provides researchers and educators with a means to review online content systems for their potential to support learner accessibility and variability. Each UDL guideline and checkpoint was mapped to specific features within a content system, and each item on the UDL Scan tool aligns with one of UDL’s three principles, nine guidelines, and at least one checkpoint, so that each is measured for every lesson evaluated.

The scan tool consists of 37 initial items with a total of 46 unique response items, including a measure of product usability. The tool intuitively branches users to the specific questions they need for a thorough evaluation of the materials being scanned. If the tool is completed in its entirety (all branching items), there are a total of 146 items.

The UDL Scan tool consists of multiple choice and Likert-type scale questions. Answers are submitted online. The scan tool begins with a series of questions designed to gather information about the evaluator, including the type of browser the evaluator is using to examine the product. The initial questions also gather information about the types of products and lessons being evaluated. The subsequent questions are broken into sections associated with the UDL principles, guidelines, and checkpoints. Each section begins with a multiple-choice question designed to determine whether a product has features that incorporate a specific UDL checkpoint. If the evaluator determines the product does include, or might include, features related to that checkpoint, the evaluator is provided with more specific questions to identify the degree to which those features are accessible. However, if the evaluator determines that the UDL checkpoint is not a part of the product, the scan tool is designed to move the evaluator to the next UDL checkpoint so as to avoid asking the evaluator irrelevant questions. The questions are designed to determine how frequently aspects of a feature are available and to pinpoint specifically which examples of a feature are accessible to the users. For example, one question asks the evaluator to indicate on a Likert-type scale how frequently the product illustrates content through videos, audio, and still images.
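The gate-and-branch flow just described (the skip logic noted earlier in the Qualtrics implementation) can be sketched compactly. The following Python sketch is a hypothetical illustration of that structure; the checkpoint code, question wording, and function names are invented for the example and are not the actual UDL Scan tool items.

```python
from dataclasses import dataclass, field

@dataclass
class Checkpoint:
    """One UDL checkpoint with its gate question and Likert follow-ups.
    The content below is illustrative, not the actual Scan tool items."""
    code: str
    gate: str
    follow_ups: list = field(default_factory=list)

checkpoints = [
    Checkpoint(
        code="2.5",  # hypothetical checkpoint: illustrate through multiple media
        gate="Does the product illustrate content through multiple media?",
        follow_ups=[
            "How frequently is content illustrated with video? (1-5)",
            "How frequently is content illustrated with audio? (1-5)",
            "How frequently is content illustrated with still images? (1-5)",
        ],
    ),
]

def run_scan(answer):
    """Walk every checkpoint. Skip logic: follow-up items are asked only
    when the gate answer says the feature is, or might be, present."""
    responses = {}
    for cp in checkpoints:
        present = answer(cp.gate)  # expected: "yes", "maybe", or "no"
        responses[cp.code] = {"present": present}
        if present in ("yes", "maybe"):
            responses[cp.code]["detail"] = {q: answer(q) for q in cp.follow_ups}
    return responses

# Toy evaluator: says "yes" at the gate, rates every follow-up a 3.
print(run_scan(lambda q: "yes" if q.startswith("Does") else 3))
```

This design keeps irrelevant questions out of the evaluator’s path, which is exactly what the skip logic in the online survey accomplishes.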

Along with assessing the extent of UDL features available within online learning products, the scan tool also is used to assess the usability of the product. Specifically, the UDL Scan tool includes a set of Likert-type scale questions adapted from the System Usability Scale (Brooke, 1996). These questions are designed to measure how easy, or complex, a specific product is to use.
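Because the usability items are adapted from the System Usability Scale, standard SUS scoring is worth recalling. The sketch below assumes the conventional ten-item, five-point instrument (the Scan tool’s adapted items may differ): odd items contribute their rating minus one, even items contribute five minus their rating, and the sum is scaled by 2.5 onto a 0–100 range.

```python
def sus_score(responses):
    """Score a standard ten-item System Usability Scale (Brooke, 1996).
    `responses` holds the ten 1-5 ratings in questionnaire order.
    Odd items are positively worded and even items negatively worded,
    so their contributions are mirrored before scaling to 0-100."""
    if len(responses) != 10:
        raise ValueError("SUS expects exactly ten responses")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Example: a product rated favorably on every item.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```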

Procedures

To assess the interrater reliability of the scan tool, the following procedures were used. Three graduate research assistants evaluated three product systems. Ten lessons within each of those product systems were randomly chosen and then evaluated, for a total of 30 lessons across three products. Prior to evaluating the product systems, the research assistants attended a training session to learn how to use the scan tool. During this training session, the trainer demonstrated how to access and use the tool and the online learning product systems being evaluated. The trainer also reviewed the training manual with the evaluators to ensure that the reviewers understood what was expected of them. During the training, the graduate research assistants were given an opportunity to ask questions, explore the online learning products, and practice using the scan tool. They also received a copy of the training manual to serve as a resource while using the scan tool.

Data Analysis

Having three different raters use the scan tool across three product systems and 30 different lessons allowed for assessment of interrater reliability. The interrater reliability analysis measured whether the three graduate research assistants evaluated each of the lessons in the same way. Krippendorff’s α (Krippendorff, 2004) and Fleiss’s κ (Fleiss, 1971) were calculated to determine reliability among raters. When 100% agreement was achieved across all three raters for each of the 10 lessons in a system, Krippendorff’s α and Fleiss’s κ could not be calculated because there was no variation in the ratings. Because these cases of perfect agreement were not included in calculating the mean and median reliability values, the reported values were biased toward less agreement. The degree of bias is not known and cannot be measured. Fleiss’s κ was calculated only when there was no missing rating; this amounted to most of the cases in the initial system and some cases in the subsequent systems.
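For readers unfamiliar with the statistic, the following minimal Python sketch shows how Fleiss’s κ is computed for complete (no missing) categorical ratings like those described, including the variation-free perfect-agreement case the analysis had to exclude. It is an illustration from the published formula, not the Center’s analysis code; Krippendorff’s α, which additionally tolerates missing ratings, is more involved and is omitted here.

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss's kappa (Fleiss, 1971) for categorical ratings.
    `ratings` is a list of items, each a list of category labels,
    one per rater; every item needs the same number of raters
    (no missing ratings), mirroring the article's restriction.
    Returns None when all raters assign one single category on every
    item (no variation at all), where kappa is undefined."""
    n_items = len(ratings)
    n_raters = len(ratings[0])
    categories = sorted({c for item in ratings for c in item})

    # n_ij: how many raters assigned category j to item i
    counts = [Counter(item) for item in ratings]

    # Observed agreement per item, averaged across items
    p_bar = sum(
        (sum(c * c for c in cnt.values()) - n_raters)
        / (n_raters * (n_raters - 1))
        for cnt in counts
    ) / n_items

    # Chance agreement from the marginal category proportions
    p_j = [sum(cnt[cat] for cnt in counts) / (n_items * n_raters)
           for cat in categories]
    p_e = sum(p * p for p in p_j)

    if p_e == 1.0:  # variation-free ratings: kappa is undefined
        return None
    return (p_bar - p_e) / (1 - p_e)

# Three raters, four items, modest agreement:
print(round(fleiss_kappa([["yes", "yes", "no"],
                          ["yes", "yes", "yes"],
                          ["no", "no", "yes"],
                          ["no", "no", "no"]]), 3))  # -> 0.333
```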

Interrater Reliability

In general, interrater reliability was supported in each of the three systems: a high percentage of agreement (66–71% in total, median = 90–100%) was observed, and Fleiss’s κ was greater than .20 (mean = .26–.40, median = .28–.40), suggesting fair agreement among the three raters (see Table 2). When only the initial items were examined, the total percentage of agreement slightly increased but the κ values dropped, though they remained fair in Systems 1 and 2. However, Krippendorff’s α was very low in all three systems (e.g., negative mean and median α values), suggesting that disagreement among the three raters was systematic and, therefore, greater than what can generally be expected by chance (Krippendorff, 2004). In reality, disagreement was not systematic; the result reflects the small number of digital lessons evaluated and the overlap of design principles characteristic of these materials.

Table 2. Interrater Reliability Within and Across All Three Product Systems.

Note. a = Thirty-seven initial items that were always rated regardless of system.

Across the country, online learning is growing at a rapid pace, and a majority of the instructional products and tools used in these environments are purchased prepackaged from a vendor (Smith & Basham, 2014). The current focus on the traditional understanding of accessibility is critical to ensuring that learners with sensory and physical disabilities have basic access to these digital learning materials. Regrettably, while basic accessibility is still a need, the current understanding of accessibility does little to support actual learning, especially with regard to considerations such as cognitive accessibility. Within the United States, recent legislation (specifically ESSA, 2015) supports the need for districts to consider the implementation of UDL in the way they design and implement instruction. Understanding this need, this project sought to develop and test a tool for measuring a digital instructional product’s alignment to the UDL framework.

The outcomes of the project indicate that the UDL Scan tool demonstrated success in the measurement of UDL within digital instructional products. Moreover, the tool was developed in partnership with CAST, the originators of UDL. Thus, as an initial measurement tool, the UDL Scan tool demonstrates potential for measuring UDL alignment in digital instructional products. Potential uses of the tool include release for wider consumption. Specifically, with access to this tool, districts would be able to evaluate products during the acquisition process and, it is hoped, make more informed decisions in the procurement of digital instructional products. Product developers may also use such a tool to support the design, and eventual self-report, of a product’s alignment with UDL. Optimistically, supporting UDL alignment will advance the field from an understanding and acceptance of basic accessibility to a more advanced consideration of building products and systems with a focus on all learners.

Implications for Practice

Through the use of the UDL Scan tool, teachers have the potential to develop a more nuanced understanding of learner variability and of how tools associated with instructional practices may help adequately support this variability. Moving beyond basic understandings of accessibility, teachers also have the ability to take a larger role in ensuring all learners are actively engaging and demonstrating the desired outcomes in the learning process. Specifically, teachers can make better informed decisions about how to design, implement, and test learning experiences that meet the needs of individual learners. From a teacher development perspective, this would advance a teacher’s need to more fully understand the conceptual, practical, and testable underpinnings of UDL and the instructional design process, thus enhancing a teacher’s ability to take on the mind-set and operational status of a learning engineer (Basham & Marino, 2013).

Implications for Future Research

The development and validation of the UDL Scan tool advances the field’s ability to more adequately assess and research the UDL framework. Since conducting this initial study, the Center has used the UDL Scan tool to measure alignment on more than 1,000 individual pieces of content. Using the tool, researchers have measured the alignment of popular blended and fully online content to the UDL framework (Smith, 2016). The goal was to help understand whether vendor-created K–12 online lessons were both accessible and appropriate for all students, especially those with disabilities. Finally, a next step would be to measure the instructional experience within these products. While a product may have alignment (or a lack thereof) to UDL, there is also a need to measure how a product provides actual instruction. Such an addition to the UDL Scan tool would allow users to evaluate whether online instructional systems (e.g., K12, Khan Academy) align to evidence- and/or research-based instructional practices.

Limitations

This study sought to research and test a UDL Scan tool for measuring the basic alignment of a digital instructional product to UDL. The scan tool was tested with digital products that provide students with instructional materials rather than with an LMS (e.g., Blackboard). Thus, the UDL Scan tool was not designed or tested to measure an LMS or content management system (e.g., WordPress) without embedded content or a designed instructional sequence. The tool also was not designed to measure a brick-and-mortar instructional lesson. Users are cautioned against attempting to measure the alignment of any instructional experience beyond the intended use of the tool. Finally, although the UDL Scan tool has demonstrated consistent findings across further scans, this initial study used only 30 instructional lessons; thus, some caution must be exercised when interpreting the mean and median α and κ values, given the small number of lessons.

The ability to move the field of K–12 online learning beyond the basic understandings of accessibility to a more advanced understanding of UDL will support better online learning materials for all learners, especially those learners with disabilities. As the K–12 education system moves increasingly online, it becomes more dependent on the educational technology industry to support the design of digital instructional materials and experiences. Thus, it is important for educators, researchers, and the industry to develop a shared understanding as well as expectations for these online materials and systems. The UDL framework provides a foundational structure for developing this shared understanding and using a tool, such as the UDL Scan tool, provides initial support for this cooperative effort.

Authors’ Note: The content does not necessarily represent the policy of the U.S. Department of Education, and you should not assume endorsement by the Federal Government. Project Officer: Celia Rosenquist.

Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The contents of this paper were developed under a grant from the U.S. Department of Education [#H327U110011].

Basham, J., & Gardner, J. (2010). Measuring universal design for learning. Special Education Technology Practice, 12, 15–19.
Basham, J. D., & Marino, M. T. (2013). Understanding STEM education and supporting students through universal design for learning. Teaching Exceptional Children, 45, 8. doi:10.1177/004005991304500401
Basham, J. D., Smith, S. J., Greer, D. L., & Marino, M. T. (2013). The scaled arrival of K–12 online education: Emerging realities and implications for the future of education. Journal of Education, 193, 51–59.
Basham, J. D., Stahl, S., Ortiz, K., Rice, M. F., & Smith, S. (2015). Equity matters: Digital & online learning for students with disabilities. Lawrence, KS: Center on Online Learning and Students with Disabilities.
Brooke, J. (1996). SUS: A “quick and dirty” usability scale. In P. W. Jordan, B. Thomas, B. A. Weerdmeester, & A. L. McClellan (Eds.), Usability evaluation in industry (pp. 189–194). London, England: Taylor and Francis.
CAST. (2011). Universal design for learning guidelines version 2.0. Wakefield, MA: Author.
Christensen, C. M., Horn, M. B., & Staker, H. (2013). Is K–12 blended-learning disruptive? An introduction to the theory of hybrids. Lexington, MA: Clayton Christensen Institute for Disruptive Innovation. Retrieved August 5, 2013, from http://www.christenseninstitute.org/
Dalton, B., Proctor, C. P., Uccelli, P., Mo, E., & Snow, C. E. (2011). Designing for diversity: The role of reading strategies and interactive vocabulary in a digital reading environment for fifth-grade monolingual English and bilingual students. Journal of Literacy Research, 43, 68–100. doi:10.1177/1086296X10397872
Denning, J., Pera, M. S., & Ng, Y. K. (2016). A readability level prediction tool for K–12 books. Journal of the Association for Information Science and Technology, 67, 550–565. doi:10.1002/asi.23417
European Commission. (2010). European unified approach for accessible lifelong learning (EU4ALL). Retrieved February 10, 2016, from http://cordis.europa.eu/project/rcn/80191_en.html
Every Student Succeeds Act of 2015, Pub. L. No. 114-95, § 4104 (2015).
Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76, 378–382.
Flesch, R. (1948). A new readability yardstick. Journal of Applied Psychology, 32, 221.
Hashey, A. I., & Stahl, S. (2014). Making online learning accessible for students with disabilities. Teaching Exceptional Children, 46, 70–78. doi:10.1177/0040059914528329
Higher Education Opportunity Act (Public Law 110-315). (2008). Retrieved April 20, from http://www2.ed.gov/policy/highered/leg/hea08/index.html
International Digital Publishing Forum. (2014). EPUB standard (v. 3.0.1). Retrieved February 10, 2016, from http://idpf.org/epub/301
Kennedy, M. J., Thomas, C. N., Meyer, J. P., Alves, K. D., & Lloyd, J. W. (2014). Using evidence-based multimedia to improve vocabulary performance of adolescents with LD: A UDL approach. Learning Disability Quarterly, 37, 71–86. doi:10.1177/0731948713507262
Krippendorff, K. (2004). Content analysis: An introduction to its methodology (2nd ed.). Beverly Hills, CA: Sage.
Marino, M. T. (2009). Understanding how adolescents with reading difficulties utilize technology-based tools. Exceptionality, 17(2), 88–102. doi:10.1080/09362830902805848
Mosenthal, P. B., & Kirsch, I. S. (1998). A new measure for assessing document complexity: The PMOSE/IKIRSCH document readability formula. Journal of Adolescent & Adult Literacy, 41, 638–657.
Nagy, W., & Townsend, D. (2012). Words as tools: Learning academic vocabulary as language acquisition. Reading Research Quarterly, 47, 91–108. doi:10.1002/RRQ.011
National Educational Technology Plan. (2016). Future ready learning: Reimagining the role of technology in education. Office of Educational Technology, U.S. Department of Education. Retrieved January 4, 2016, from http://tech.ed.gov/files/2015/12/NETP16.pdf
O’Brien, H. L., & Toms, E. G. (2008). What is user engagement? A conceptual framework for defining user engagement with technology. Journal of the American Society for Information Science and Technology, 59, 938–955. doi:10.1002/asi.20801
Office of Civil Rights. (2011). Frequently asked questions about the June 29, 2010 Dear Colleague Letter (DCL). Retrieved February 10, 2016, from http://www2.ed.gov/about/offices/list/ocr/docs/dcl-ebook-faq-201105_pg3.html
Patrick, S., Kennedy, K., & Powell, A. (2013). Mean what you say: Defining and integrating personalized, blended and competency education. Vienna, VA: International Association for K–12 Online Learning. Retrieved June 8, 2014, from http://www.inacol.org/resource/mean-what-you-say-defining-and-integrating-personalized-blended-and-competency-education/
Proctor, C. P., Dalton, B., Uccelli, P., Biancarosa, G., Mo, E., Snow, C. E., & Neugebauer, S. (2011). Improving Comprehension Online (ICON): Effects of deep vocabulary instruction with bilingual and monolingual fifth graders. Reading and Writing: An Interdisciplinary Journal, 24, 517–544.
Qualtrics Labs, Inc. (2012). Qualtrics [software] (Version 12.018). Provo, UT: Author.
Rappolt-Schlichtmann, G., Daley, S. G., Lim, S., Lapinski, S., Robinson, K. H., & Johnson, M. (2013). Universal Design for Learning and elementary school science: Exploring the efficacy, use, and perceptions of a web-based science notebook. Journal of Educational Psychology, 105, 1210. doi:10.1037/a0033217
Rice, M. F., & Carter, R. A., Jr. (2015a). “When we talk about compliance, it’s because we lived it”: Online educators’ roles in supporting students with disabilities. Online Learning Journal, 19, 18–36.
Rice, M. F., & Carter, R. A., Jr. (2015b). With new eyes: Online teachers’ sacred stories of students with disabilities. In M. F. Rice (Ed.), Exploring pedagogies for diverse learners online (pp. 209–230). Bingley, UK: Emerald Group.
Rose, R. (2007). Access and equity in online classes and virtual schools. Vienna, VA: International Association for K–12 Online Learning (iNACOL).
Rose, R. (2014). Access and equity for all learners in blended and online education. Vienna, VA: International Association for K–12 Online Learning (iNACOL).
Schneps, M. H., Ruel, J., Sonnert, G., Dussault, M., Griffin, M., & Sadler, P. M. (2014). Conceptualizing astronomical scale: Virtual simulations on handheld tablet computers reverse misconceptions. Computers & Education, 70, 269–280. doi:10.1016/j.compedu.2013.09.001
Schneps, M. H., Thomson, J. M., Chen, C., Sonnert, G., & Pomplun, M. (2013). E-readers are more effective than paper for some with dyslexia. PLoS ONE, 8, e75634. doi:10.1371/journal.pone.0075634
Section 508 of the Rehabilitation Act of 1973 (1998, amended). 29 U.S.C. § 794(d). Retrieved February 10, 2016, from http://www.section508.gov/content/learn/laws-and-policies
Smith, S. (2016). Invited in: Measuring UDL in online learning. Lawrence, KS: Center on Online Learning and Students with Disabilities. Retrieved February 12, 2016, from http://centerononlinelearning.org/wp-content/uploads/udl-scan-full-report.pdf
Smith, S. J., & Basham, J. D. (2014). Designing online learning opportunities for students with disabilities. Teaching Exceptional Children, 46, 127. doi:10.1177/0040059914530102
UDL-IRN. (2011). Critical elements of UDL in instruction (Version 1.1). Lawrence, KS: Author.
Valencia, S. W., Wixson, K. K., & Pearson, P. D. (2014). Putting text complexity in context. The Elementary School Journal, 115, 270–289. doi:10.1086/678296
World Wide Web Consortium. (2005). Introduction to web accessibility. Retrieved from https://www.w3.org/WAI/intro/accessibility.php
World Wide Web Consortium. (2014). Web accessibility initiative. Retrieved February 10, 2016, from https://www.w3.org/WAI/guid-tech.html
Yesilada, Y., Brajnik, G., Vigo, M., & Harper, S. (2012, April). Understanding web accessibility and its drivers. In Proceedings of the international cross-disciplinary conference on web accessibility (pp. 19–28). Lyon, France: ACM Press.
