As I mentioned the other day, I am currently leading CEET Meet: Teaching At A Distance. The actual Meet is designed in a similar fashion to the Virtual Schooling MOOC that I facilitated this past fall. Following a brief introductory topic and some technical materials, the first real topic is “The Designer of K-12 Online Learning,” and the activity for that topic asks participants to:
On your blog, post an entry where you:
Critique the iNACOL National Standards for Quality Online Courses based on the literature related to asynchronous course design (both K-12 and higher education).
Use the iNACOL National Standards for Quality Online Courses to review one (1) course offered by a K-12 online learning program.
Make sure to tweet the title and a link to your blog entry.
This is my entry to model that activity.
I chose to post an entry where I “critique the iNACOL National Standards for Quality Online Courses based on the literature related to asynchronous course design (both K-12 and higher education).” However, I want to take a more general approach and critique the standards based on the process used to validate standards.
The normal process for creating a set of standards would begin with a scan of the research literature to determine what we can say about a topic based on methodologically reliable and valid research. In the case of online course design for younger learners, we would look through the instructional design research literature – particularly what is known about the K-12 environment. Using this literature review, we would begin the process of writing standards.
Once we had an initial list of standards, we would submit that list to a group of experts in the field for their feedback. We would be asking them, based on their expert opinion, which of the items drawn from the research they felt were sufficient, which were lacking, and what was absent altogether. We would repeat this review process through multiple cycles until we had a list of standards that the majority – hopefully all – of the experts agreed upon. In some instances, researchers would take this expert-reviewed list and go back to the research literature to look for more evidence to support any of the revisions made by the experts.
After finishing these steps, the standards would be used to create an instrument or rubric. Multiple reviewers would independently apply the instrument to several online courses. Researchers would use these independent evaluations to determine whether there was inter-rater reliability on each of the items. Generally speaking, items that reach a 0.8 reliability level are acceptable and should be retained in the instrument. Items with less than 0.8 reliability should be revised or discarded. This process should be repeated until the entire instrument has at least a 0.8 reliability level. Then you have a validated instrument based on the standards that have been created.
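To make the inter-rater check above concrete, here is a minimal sketch of one common agreement statistic, Cohen's kappa, for two raters scoring the same item across several courses. The ratings below are purely hypothetical, and note that the literature varies on which statistic (simple percent agreement, kappa, or others) the 0.8 threshold refers to; this is an illustration, not the procedure any particular standards body used.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same set of ratings.

    Kappa corrects raw agreement for the agreement expected by chance:
    kappa = (p_o - p_e) / (1 - p_e).
    """
    n = len(rater_a)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions
    categories = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of one rubric item across six courses
rater_a = ["met", "met", "not met", "met", "not met", "met"]
rater_b = ["met", "not met", "not met", "met", "not met", "met"]

kappa = cohens_kappa(rater_a, rater_b)
print(round(kappa, 2))  # → 0.67: below 0.8, so this item would be revised
```

Under the 0.8 rule described above, an item scoring like this would go back for revision or be discarded before the instrument could be considered validated.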
The problem with the iNACOL online course design standards is that the organization simply reviewed existing course design standards – some of which were based on research and some of which weren’t. They then adopted one set of these standards (i.e., the Southern Regional Education Board’s [SREB] standards) and added some items because of their involvement with the Partnership for Twenty-First Century Skills. One of the issues is that if the SREB standards were based on any research, that research has never been published anywhere. Neither the SREB nor the iNACOL standards have ever been grounded in a systematic literature search. Neither has undergone a systematic validation study. So to use them at this stage is to use an instrument that may or may not actually measure effective online course design. And that is a problem.