Virtual School Meanderings

October 1, 2018

Quality Matters National Standards for Quality Online Teaching (K-12) Literature Review – What Do You Think?

So quite some time ago, a colleague and friend sent me the PDF of the Quality Matters (QM) National Standards for Quality Online Teaching (K-12): Literature Review and asked me:

Have you seen this literature review (more annotated bibliography) done recently by QM?  I’m glad that they are doing teaching competencies in addition to the course design ones.
Are you aware of any gaps or missing K-12 online teaching articles in this?  I wish they would have had your final list of K-12 online research to work with.
Also, I am interested in your opinion about the categories in the current standards . . .   Are there missing categories?  Are these comprehensive?

I didn’t really have time to respond to him in any real fashion at the time, but since my university has been closed a fair amount over the past month for the Jewish High Holidays, I wanted to take some of that time to draft a proper response.

It’s kind of funny because, as I scanned this document the first time, I thought to myself that it is a good example of why K-12 practitioners don’t trust researchers.  If you look at the document, at no point does it specifically say what the purpose of the project is, but in the introduction and methodology they write:

This literature review has been conducted to inform the work of the National Standards for Quality Online Teaching revisions, a project led by a partnership between Quality Matters and the Virtual Learning Leadership Alliance. It includes a short summary of the relevant peer-reviewed literature followed by an alphabetical listing of the resources correlated to the National Standards for Quality Online Teaching in which they apply.

Searches, using keywords K-12, online learning, online teaching, and online programs, within the date range of 2014-2018 were completed. Only references from research-based publications were recorded.

The final step was an analysis of the research gathered in general relationship to the iNACOL standards, which resulted in a summary of the findings. (emphasis added)

So based on this, I’m guessing that the purpose of the document was to provide peer-reviewed, K-12 research that has some relationship to the iNACOL standards.

Now let’s practice our A-B-Cs and see how they did.  By that I mean let’s take a look at the items that were alphabetically included to see if they met the criteria listed above (i.e., peer-reviewed, research-based, and K-12).

Allison, C. (2015). The use of instructional videos in K-12 classrooms: A mixed method study (Doctoral dissertation, Indiana University of Pennsylvania). Retrieved from https://knowledge.library.iup.edu/cgi/viewcontent.cgi?article=1146&context=etd

Dissertations are definitely based on research; I would personally argue that getting through a dissertation committee constitutes peer review; and based on both the title and abstract it is focused on K-12.

Bailey, J., Schneider, C., & Vander Ark, T. (2013). Getting ready for online assessments. Digital Learning Now! Retrieved from http://www.digitallearningnow.com/we-content/uploads/2013/01/Getting-Ready-fo-Online-Asst.-Updated-Jan-2013.pdf

A report from an ideological think tank.  Not based on research and not peer-reviewed.

Barbour, M. K. (2017). The state of K-12 online learning. In J. G. Cibulka & B. S. Cooper (Eds.), Technology in the Classroom: Can It Improve Teaching and Student Learning in American Schools? (pp. 37-51). Lanham, MD: Rowman and Littlefield Education. Retrieved from https://www.academia.edu/35571422/ Barbour_M._K._2017._The_state_of_K-12_online_learning._In_J._G._Cibulka_and_B._S._Cooper_Eds._Technology_in_the_Classroom_Can_It_Improve_Teaching_and_Student_Learning_in_American_Schools_pp._37-51_._Lanham_MD

A book chapter, so not peer-reviewed and not based on actual research (i.e., it is a commentary of others’ research).

Barbour, M. K. (2012). Training teachers for a virtual school system: A call to action. In D. Polly, C. Mims, & K. Persichitte (Eds.), Creating technology-rich teach education programs: Key issues (pp. 499-517). Hershey, PA: IGI Global.

A book chapter, so not peer-reviewed and not based on actual research (i.e., it is a commentary of others’ research).

Barbour, M. K. (2007). Principles of effective web-based content for secondary school students: Teacher and developer perceptions. The Journal of Distance Education, 21(3), 93-114. Retrieved from http://www.jofde.ca/index.php/jde/article/view/30/11

A peer-reviewed journal article reporting original research focused on K-12.

Barbour, M., K., Grzebyk, T. W., & Eye, J. (2014). Any time, any place, any pace-really? Examining mobile learning in a virtual school environment. Turkish Online Journal of Distance Education, 15(1). Retrieved February 13, 2016 from http://fles.eric.ed.gov/fulltext/ EJ1042983.pdf

A peer-reviewed journal article reporting original research focused on K-12.

Barbour, M. K., & Mulcahy, D. (2008). How are they doing? Examining student achievement in virtual school. Education in Rural Australia, 18(2), 63-74.

A peer-reviewed journal article reporting original research focused on K-12.

Barbour, M. K., & Mulcahy, D. (2006). An inquiry into retention and achievement differences in campus based and based courses. Rural Educator, 27(3), 8-12.

A peer-reviewed journal article reporting original research focused on K-12.

Barbour, M. K., & Plough, C. (2012). Putting the social into online learning: Social networking in a cyber school. International Review of Research in Open and Distance Learning, 13(3), 1-18.

A peer-reviewed journal article reporting original research focused on K-12.

Barbour, M., & Plough, C. (2009). Social networking in cyberschooling: Helping to make online learning less isolating. TechTrends, 53(4), 56-60. Doi:10.1007/s11528-009-0307-5. Retrieved from http://edlab.tc.coumbia.edu/files/Barbour2009.pdf

A peer-reviewed journal article reporting original research focused on K-12.

Barbour, M. K., & Reeves, T. C. (2013). The reality of virtual schools: A review of the literature. Computers & Education 52(2), 402-416.

Setting aside the fact that they got the year wrong by four years, this is a peer-reviewed journal article.  However, it is a literature review, so again not based on actual research (i.e., it is a review of others’ literature – some of which is research-based and some of which isn’t).

Barbour, M. K., Siko, J., Gross, E., & Waddell, K. (2012). Virtually unprepared: Examining the preparation of K-12 online teachers. In R. Hartshorne (Ed.), Teacher education programs and online learning tools: Innovations in teacher preparation (pp. 60-81). Hershey: IGI Global.

A book chapter, so not peer-reviewed and not based on actual research (i.e., it is a commentary of others’ research).

Barbour, M. K., McLaren, A., & Zhang, L. (2012). It’s not that tough: Students speak about their online learning experiences. Turkish Online Journal of Distance Education, 13(2), 226-241.

A peer-reviewed journal article reporting original research focused on K-12.

Basham, J. D., Stahl, W., Ortiz, K. R., Rice, M. F., & Smith, S. J. (2015). Equity matters: Digital and online learning for students with disabilities. Lawrence, KS: The Center on Online Learning and Students with Disabilities. Retrieved from http://ht.ly/Vpmus

This is an annual report of the Center on Online Learning and Students with Disabilities – so not peer-reviewed.  The report itself suggests that it is “an initial understanding of what has been learned from preliminary explorations, interactions, and experiences that have taken place with the Center and its research partnerships, as well as from the limited published research base” (p. 12), and a review of the content of the following chapters reveals no empirical research data (i.e., not research-based).

Beese, J. (2014). Expanding learning opportunities for high school students with distance learning. The American Journal of Distance Education, 28(4), 292-304. doi:10.1080/108923647.2014.959343

A peer-reviewed journal article reporting original research focused on K-12.

Black, E., Dawson, K. & Ferdig, R. (2006). Predicting the Success and Failure of Online Education Students. In C. Crawford, R. Carlsen, K. McFerrin, J. Price, R. Weber & D. Willis (Eds.), Proceedings of SITE 2006–Society for Information Technology & Teacher Education International Conference (pp. 248-252). Orlando, Florida, USA: Association for the Advancement of Computing in Education (AACE). Retrieved January 31, 2018 from https://www.learntechlib.org/p/22041/.

I would argue that a conference proceeding is a peer-reviewed document, as it comes from a peer-reviewed conference proposal – or at least it does in the case of SITE.  This is also clearly a research-based publication.  However, the abstract reads “This article discusses the application of an instrument capable of predicting student success and failure within virtual high school classrooms to a sample of graduate students enrolled in online education classes at a tier I research university” (emphasis added).  So the study is actually just using an instrument that was developed for K-12 students, but the actual study is on graduate students – so the research is not K-12 focused.

Blackburn, H. A. (2014). A mixed methods study: Assessing and understanding technology pedagogy and content knowledge among college level teaching faculty. (Doctoral dissertation, Drexel University). Retrieved from https://idea.library.drexel.edu/islandora/object/idea%3A4531/datastream/OBJ/download/A_Mixed_Methods_Study___Assessing_and_Understanding_Technology_Pedagogy_and_Content_Knowledge_Among_College_Level_Teaching_Faculty.pdf

A dissertation, so peer-reviewed and research-based, but…  The actual study is with online graduate students at “a private, non-profit, research university with the primary campus based in the Mid-Atlantic portion of the United States” (p. 36), so not K-12 at all.

Blaine, A. M. (2017). Interaction and presence in the secondary online classroom (Order No. 10684770). Available from ProQuest Dissertations & Theses A&I. (2008509399).

A dissertation, so peer-reviewed, research-based, and – in this case – focused on K-12.

Borup, J. (2016). Teacher perceptions of learner-learner engagement at a cyber high school. The International Review of Research in Open and Distributed Learning, 17(3). Doi:http://dx.doi.org/10.19173/irrodl.v1713.2361

A peer-reviewed journal article reporting original research focused on K-12.

Borup, J. A. (2013). Types, Subjects, and Purposes of K-12 Online Learning Interaction. All Theses and Dissertations. Paper 3711. http://scholarsarchive.byu.edu/etd/3711 

A dissertation, so peer-reviewed, research-based, and focused on K-12.

Borup, J., Graham, C. R., & Drysdale, J. S. (2014). The nature of teacher engagement at an online high school. British Journal of Educational Technology 45(5), 793-806. Doi:10.1111/bjet.12089

A peer-reviewed journal article reporting original research focused on K-12.

Borup, J., West, R.E., Graham, C.R. & Davies, R.S. (2014). The adolescent Community of Engagement Framework: a lens for research on K-12 online learning. Journal of Technology and Teacher Education, 22(1), 107-129. Waynesville, NC USA: Society for Information Technology & Teacher Education. Retrieved from https://www.academia.edu/attachments/53684518/download_file?st=MTUyMzU1MzM4Niw2Ny4yMzQuNS4xNjQsNTcxMTQyMTQ%3D&s=work_strip&ct=MTUyMzU1MzM4OCwxNTIzNTUzMzk3LDU3MTE0MjE0

A peer-reviewed journal article reporting original research focused on K-12.

Burdette, P. J., & Greer, D. L. (2014). Online learning and students with disabilities: Parent perspectives. Journal of Interactive Online Learning, 13(2), 67-88. Retrieved February 13, 2016, from http://www.ncolr.org/jiol/issues/pdf/13.2.4.pdf

A peer-reviewed journal article reporting original research focused on K-12.

Cai, Y., Chiew, R., Nazy, Z. T., Indhumathi, C., & Huang, L. (2017). Design and development of VR learning environments for children with ASD. Interactive Learning Environments, 1-12.

A peer-reviewed journal article reporting original research focused on K-12.

Cavanaugh, C., Maor, D., & McCarthy, A. (2014). K-12 mobile learning. In R. E. Ferdig & K. Kennedy (Eds.), Handbook of research on K-12 online and blended learning (pp. 391-414). Retrieved February 13, 2016, from http://press.etc.cmu.edu/fles/Handbook-Blended-Learning_Ferdig-Kennedyetal_web.pdf

A book chapter, so not peer-reviewed and not based on actual research (i.e., it is a commentary of others’ research).

Chappell, S., Arnold, P., Nunnery, J., & Grant, M. (2015). An examination of an online tutoring program’s impact on low-achieving middle school students’ mathematics achievement. Online Learning, 19(5). Retrieved from http://onlinelearningconsortium.org/read/online-learning-journal/

A peer-reviewed journal article reporting original research focused on K-12.

Chiu, C.-H. (2013). Verification of theory based design features for designing online instruction for students with learning disabilities and other struggling learners. https://kuscholarworks.ku.edu/bitstream/handle/1808/15127/CHIU_ku-0099D_12758_DATA_1.pdf?sequence=1&isAllowed=y: Unpublished dissertation.

A dissertation, so peer-reviewed and research-based.  As to whether it is K-12 focused, well, the actual study examines “the literature on multimedia design to identify the theoretical and research support for instructional design principles related to multimedia theories that are viewed as applicable to online instruction… [with a focus on those] applicable to K-12 education” (p. 45).  These principles were then reviewed by three different panels: one of “experts in multimedia theory,” one of “experts in design/development of online instruction,” and one of “experts in design/development of online instruction for students with learning disabilities in K-12 education” (p. 50).  So the research it is based on is not K-12, and only a third of the panelists had any background in K-12 (and a VERY selective background at that).  So this one goes into the somewhat category.

Cho, M. H., & Cho, Y. (2017). Self-regulation in three types of online interaction: a scale development. Distance Education, 38(1), 70-83. 

A peer-reviewed journal article reporting original research.  Unfortunately, if you look at the methodology, it states that the sample was “799 students enrolled in online courses at two universities in midwestern USA” (p. 74) – so not K-12.

Chu, H. C., Chen, T. Y., Lin, C. J., Liao, M. J., & Chen, Y. M. (2009). Development of an adaptive learning case recommendation approach for problem-based e-learning on mathematics teaching for students with mild disabilities. Expert Systems with Applications, 36(3), 5456-5468. Doi:dx.doi.org/10.1016/j.eswa.2008.06.140.

A peer-reviewed journal article, but in this case it is not reporting original research.  This is a conceptual article that simply reviews the literature to create a “problem-based e-learning model” (p. 5456).  Now by the looks of it, the model is designed to be used in the K-12 environment, so subsequent research using the model would likely qualify (assuming it was peer reviewed).

Cooze, M., & Barbour, M. (2007). Learning styles: A focus upon e-learning practices and their implications for successful instructional design. Journal of Applied Educational Technology, 4(1), 7-20. Retrieved from http://www.iglean.co.uk/blog/docs/JAET4-1_Cooze.pdf

A peer-reviewed journal article, but it is a literature review that looks at the role of the instructional designer and then explores learning theories and learning styles through that lens.  So again not based on actual research.

Cornelius, S. (2014). Facilitating in a demanding environment: Experiences of teaching in virtual classrooms web conferencing. British Journal of Educational Technology, 45(2), 260-271. Retrieved from http://onlinelibrary.wiley.com/doi/10.1111/bjet.12016/abstract

A peer-reviewed journal article reporting original research, but once again the sample consists of “four experienced teachers…. teaching in four different UK higher education institutions” (pp. 262-263) – so not K-12.

Crippen, K. J., Archambault, L. M., & Kern, C. L. (2012). The nature of laboratory learning experiences in secondary science online. Research in Science Education, 1-22. Doi:10.1007/s11165-012-9301-6

A peer-reviewed journal article reporting original research focused on K-12.

Curtis, H., & Werth, L. (2015). Fostering student success and engagement in a K-12 online school. Journal of Online Learning Research, 1(2), 163-190. Association for the Advancement of Computing in Education (AACE). Retrieved from https://files.eric.ed.gov/fulltext/EJ1148836.pdf

A peer-reviewed journal article reporting original research focused on K-12.

Czerkawski, B. C. (2015). Networked learning: design considerations for online instructors. Interactive Learning Environments, 1-14.

A peer-reviewed journal article reporting original research.  Unfortunately, the second-to-last line of the abstract reads “The aim of this paper is to review and synthesize empirical research on networked learning for online higher education courses and offer suggestions for future studies based on the gaps found in the literature” (p. 1).  So not K-12!

Czerkawski, B. C., & Lyman, E. W. (2016). An instructional design framework for fostering student engagement in online learning environments. Techtrends: Linking Research and Practice to Improve Learning: A Publication of the Association for Educational Communications & Technology, 60(6), 532-539.

Again, a peer-reviewed journal article, but in this case it is not reporting original research.  This is a conceptual article that simply reviews the literature to create an instructional design framework – and not one focused on K-12 (as one of the five sets of keywords was “higher education”). 

So using their own yardstick of peer-reviewed, research-based, and K-12, the authors of this document included 18 items that met their criteria (i.e., green), 16 that did not (i.e., red), and one that is on the fence (i.e., orange).  Basically, just over 51% or – put another way – just over half of the items the authors decided to include actually met their own criteria for inclusion in the first place.  I’m not sure how D-Z does overall (and maybe someday I’ll finish this more systematic examination).  But a quick scan reveals more than a dozen reports from the Michigan Virtual Learning Research Institute (MVLRI), SRI International, Learning Point Associates, U.S. Department of Education, International Association for K-12 Online Learning, Regional Educational Laboratory Midwest, and the Innosight Institute; none of which are peer-reviewed and only a small portion of which are actually reporting empirical research, as opposed to just reviewing literature (and be sure to note the presence of ideologically-driven organizations in that list).  Plus there are close to two dozen books and book chapters listed, again none of which are peer-reviewed and none of which are based on original research.  There is even at least one blog post listed!

Now let’s assume that it is just a dozen reports and only two dozen books and book chapters, plus the one blog post.  That would be 37 items that don’t meet their criteria.  By my count there were 166 items listed in the document.  If you add the charitable 37 reports, books, book chapters, and blog post to the 16 items I identified in my A-B-C review, that would be a total of 53 items that were included by the authors that didn’t meet their own criteria.  Fifty-three items represents 32% of the overall sample.  So roughly a third of the sample didn’t even meet the authors’ own criteria!  And I haven’t bothered to review whether the dissertations and journal articles from D-Z are reporting on original research that is K-12 focused.
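For anyone who wants to check my math, the back-of-the-envelope tally above works out like this (a quick sketch using my own counts from the review above):

```python
# Tallies from my review (all counts are mine, from the discussion above)
abc_failures = 16            # A-C items that failed the authors' own criteria
dz_failures = 12 + 24 + 1    # charitable D-Z estimate: reports + books/chapters + one blog post

total_listed = 166           # items in the full document, by my count

failures = abc_failures + dz_failures
share = failures / total_listed

print(failures)              # 53
print(round(share * 100))    # 32
```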

I should note that I’m not saying that research that is not peer-reviewed is not useful, even for this type of exercise.  Nor am I saying that there aren’t some parallels between what research in higher education tells us about teaching and learning in the online environment and the K-12 context.  However, the authors in their methodology decided to use those limiters.  Given what they wanted to accomplish, I agree with excluding non-K-12 stuff.  For me personally, if it was research-based I wouldn’t be too concerned about whether it was peer-reviewed.  In fact, I would prefer an empirical MVLRI report over most dissertations (as the researchers and staff working with MVLRI are familiar with K-12 online and blended learning, whereas most faculty serving on dissertation committees have little background in or understanding of K-12 online and blended learning).

So beyond whether the authors actually had a valid sample using their own criteria, let’s take a look at how they did in terms of whether the item referenced had some relationship to the iNACOL standards. To make this an easier task for me, I’m going to use my own scholarship that was included (as I know what that was about, so I don’t have to go looking it up and re-reviewing the contents and focus).  Let’s start with the listing of iNACOL standards.

Standard A – The online teacher knows the primary concepts and structures of effective online instruction and is able to create learning experiences to enable student success.

Standard B – The online teacher understands and is able to use a range of technologies, both existing and emerging, that effectively support student learning and engagement in the online environment.

Standard C – The online teacher plans, designs, and incorporates strategies to encourage active learning, application, interaction, participation, and collaboration in the online environment.

Standard D – The online teacher promotes student success through clear expectations, prompt responses, and regular feedback.

Standard E – The online teacher models, guides, and encourages legal, ethical, and safe behavior related to technology use.

Standard F – The online teacher is cognizant of the diversity of student academic needs and incorporates accommodations into the online environment.

Standard G – The online teacher demonstrates competencies in creating and implementing assessments in online learning environments in ways that ensure validity and reliability of the instruments and procedures.

Standard H – The online teacher develops and delivers assessments, projects, and assignments that meet standards-based learning goals and assesses learning progress by measuring student achievement of the learning goals.

Standard I – The online teacher demonstrates competency in using data from assessments and other data sources to modify content and to guide student learning.

Standard J – The online teacher interacts in a professional effective manner with colleagues, parents, and other members of the community to support students’ success.

Standard K – The online teacher arranges media and content to help students and teachers transfer knowledge most effectively in the online environment.

Now let’s examine all of my scholarship and where the authors believe it has a general relationship to these standards.


Barbour, M. K. (2017). The state of K-12 online learning. In J. G. Cibulka & B. S. Cooper (Eds.), Technology in the Classroom: Can It Improve Teaching and Student Learning in American Schools? (pp. 37-51). Lanham, MD: Rowman and Littlefield Education. Retrieved from https://www.academia.edu/35571422/ Barbour_M._K._2017._The_state_of_K-12_online_learning._In_J._G._Cibulka_and_B._S._Cooper_Eds._Technology_in_the_Classroom_Can_It_Improve_Teaching_and_Student_Learning_in_American_Schools_pp._37-51_._Lanham_MD

  • Standards A, B, C, D, E, F, G, H, I, J, K

This book chapter reviews the language used to describe various forms of K-12 online learning, then examines the literature around how poorly students have performed in online learning environments when comparing apples to apples, complains about the lack of research to guide practice, and then outlines just two areas where research has suggested some guidance (i.e., the presence of well-trained local site facilitators and the importance of a variety of forms of interaction).  Can someone please explain to me how this chapter represents research that is supportive of all 11 iNACOL standards?  Because I just don’t see how the authors of this document could have remotely made this connection?!?

Barbour, M. K. (2012). Training teachers for a virtual school system: A call to action. In D. Polly, C. Mims, & K. Persichitte (Eds.), Creating technology-rich teach education programs: Key issues (pp. 499-517). Hershey, PA: IGI Global.

  • Standard B

This book chapter reviews the current state of K-12 online learning, discusses the different delivery models that are being used, describes what it is like teaching in a K-12 online learning environment based on the literature, mentions a specific PD initiative, and finishes with detailed descriptions of the TEGIVS, online student teaching at UCF, and existing online teaching endorsements.  Remember that Standard B reads, “The online teacher understands and is able to use a range of technologies, both existing and emerging, that effectively support student learning and engagement in the online environment.”  You could charitably say that somewhere in the “describes what it is like teaching in a K-12 online learning environment based on the literature” I do touch on the range of technologies, but any serious researcher would not make that case.

Barbour, M. K. (2007). Principles of effective web-based content for secondary school students: Teacher and developer perceptions. The Journal of Distance Education, 21(3), 93-114. Retrieved from http://www.jofde.ca/index.php/jde/article/view/30/11

  • Standard D

This journal article examines a case study I conducted with six course designers from a province-wide virtual school in Newfoundland and Labrador that reported on their perceptions of effective features.  Standard D reads, “the online teacher promotes student success through clear expectations, prompt responses, and regular feedback.”  The seven themes from the research were:

Course developers should:

  1. prior to beginning development of any of the web-based material, plan out the course with ideas for the individual lessons and specific items that they would like to include;
  2. keep the navigation simple and to a minimum, but don’t present the material the same way in every lesson;
  3. provide a summary of the content from the required readings or the synchronous lesson and include examples that are personalized to the students’ own context;
  4. ensure students are given clear instructions and model expectations of the style and level that will be required for student work;
  5. refrain from using too much text and consider the use of visuals to replace or supplement text when applicable;
  5. only use multimedia that will enhance the content and not simply because it is available; and
  7. develop their content for the average or below average student.

So themes 3 and 4 do touch on “promotes student success through clear expectations,” but does anyone else see anything that speaks to “promotes student success through… prompt responses and regular feedback”?

Barbour, M., K., Grzebyk, T. W., & Eye, J. (2014). Any time, any place, any pace-really? Examining mobile learning in a virtual school environment. Turkish Online Journal of Distance Education, 15(1). Retrieved February 13, 2016 from http://fles.eric.ed.gov/fulltext/ EJ1042983.pdf

  • Standard B

This journal article examines a case study of 11 students in a supplemental online course that used a mobile app as their LMS for a period of two weeks (although only six students were successful in using the mobile LMS – five of which used the desktop emulator).  The study found that while students could see the potential of the mobile LMS tool, they were generally dissatisfied with this experience for a whole host of reasons.  While this study was an example of an attempt to use a range of tools, it was certainly not an example – nor did it provide any guidance – for an “online teacher [to be] able to use a range of technologies… that effectively support student learning and engagement in the online environment.”

Barbour, M. K., & Mulcahy, D. (2008). How are they doing? Examining student achievement in virtual school. Education in Rural Australia, 18(2), 63-74.

  • Standard H

This journal article examines student performance in online vs. classroom based courses, with the additional variable of urban vs. suburban.  It is a classic media comparison study.  Standard H reads, “the online teacher develops and delivers assessments, projects, and assignments that meet standards-based learning goals and assesses learning progress by measuring student achievement of the learning goals.”  Someone explain to me how a media comparison study has anything to do with this standard?

Barbour, M. K., & Mulcahy, D. (2006). An inquiry into retention and achievement differences in campus based and based courses. Rural Educator, 27(3), 8-12.

  • Standard H

This journal article examines student performance in online vs. classroom based courses, with the additional variable of urban vs. suburban.  It is a classic media comparison study.  Standard H reads, “the online teacher develops and delivers assessments, projects, and assignments that meet standards-based learning goals and assesses learning progress by measuring student achievement of the learning goals.”  Someone explain to me how a media comparison study has anything to do with this standard?

Barbour, M. K., & Plough, C. (2012). Putting the social into online learning: Social networking in a cyber school. International Review of Research in Open and Distance Learning, 13(3), 1-18.

  • Standard C

This journal article examines a case study of how one full-time cyber school used a closed social network to facilitate the co-curricular and extra-curricular functions of a traditional schooling experience.  Standard C reads, “the online teacher plans, designs, and incorporates strategies to encourage active learning, application, interaction, participation, and collaboration in the online environment.”  While the research does not focus on the curricular aspects – so it is not really an online teacher doing any of these things – at least the example is consistent with all of the goals listed in this standard.

Barbour, M., & Plough, C. (2009). Social networking in cyberschooling: Helping to make online learning less isolating. TechTrends, 53(4), 56-60. Doi:10.1007/s11528-009-0307-5. Retrieved from http://edlab.tc.coumbia.edu/files/Barbour2009.pdf

  • Standard C

This journal article examines a case study of how one full-time cyber school used a closed social network to facilitate the co-curricular and extra-curricular functions of a traditional schooling experience.  Standard C reads, “the online teacher plans, designs, and incorporates strategies to encourage active learning, application, interaction, participation, and collaboration in the online environment.”  While the research does not focus on the curricular aspects – so it is not really an online teacher doing any of these things – at least the example is consistent with all of the goals listed in this standard.

Barbour, M. K., & Reeves, T. C. (2013). The reality of virtual schools: A review of the literature. Computers & Education 52(2), 402-416.

  • Standard A

This journal article was a review of the literature that existed up to the end-of-2007/early-2008 time period.  The review itself describes the growth of virtual schooling, the different forms it might take and the type of students it attracts, potential benefits of and challenges for virtual schooling, and then recommends a course of research.  Standard A reads, “the online teacher knows the primary concepts and structures of effective online instruction and is able to create learning experiences to enable student success.”  The potential benefits speak of “providing high-quality learning opportunities” and “improving student outcomes and skills,” but don’t speak about how to do either.  Similarly, the challenges raise the issue of “student readiness issues and retention,” but nothing on how to mitigate that.  Essentially, there is no real reference to the components of Standard A in this literature review at all.

Barbour, M. K., Siko, J., Gross, E., & Waddell, K. (2012). Virtually unprepared: Examining the preparation of K-12 online teachers. In R. Hartshorne (Ed.), Teacher education programs and online learning tools: Innovations in teacher preparation (pp. 60-81). Hershey: IGI Global.

  • Standard A

This book chapter explores the literature related to online teaching, describes examples from existing pre-service – and then in-service – teacher education programs where students are exposed to online teaching, describes the existing online teaching endorsements, and then asks if those endorsements are even necessary. Again, one could charitably argue that by describing existing programs I am describing “the primary concepts and structures of effective online instruction and is able to create learning experiences to enable student success,” but that assumes that any of these programs are 1) research-based and 2) effective – and there isn’t any research to indicate either at this stage.

Barbour, M. K., McLaren, A., & Zhang, L. (2012). It’s not that tough: Students speak about their online learning experiences. Turkish Online Journal of Distance Education, 13(2), 226-241.

  • Standard A

This journal article examines an exploratory study of the perceptions of seven students on the benefits and challenges of virtual schooling based on their experience in a supplemental online program. The results revealed that students enjoyed their online courses, in spite of the significant technology issues they experienced. Students were able to develop a sense of community with their online classmates, and – in particular – with the local classmates who were taking the same and other online courses at their own school at the same time. Unfortunately, that community with their local classmates often led to off-task behaviour – particularly during their scheduled asynchronous slots. Another reason for this off-task behaviour was that students perceived the asynchronous course content as boring and busy work. Anyone care to explain to me how any of the findings of this study support the claim that “the online teacher knows the primary concepts and structures of effective online instruction and is able to create learning experiences to enable student success?”

Cooze, M., & Barbour, M. (2007). Learning styles: A focus upon e-learning practices and their implications for successful instructional design. Journal of Applied Educational Technology, 4(1), 7-20. Retrieved from http://www.iglean.co.uk/blog/docs/JAET4-1_Cooze.pdf

  • Standard K

This journal article reviews the literature related to learning styles, the role of the instructional designer, how learning theories have influenced that role, and then things folks want to consider when designing for K-12 students. Standard K reads, “the online teacher arranges media and content to help students and teachers transfer knowledge most effectively in the online environment.” Setting aside the fact that any serious researcher should know that learning styles have no basis in research at all (yes, as a Master’s student and K-12 teacher I had yet to learn that lesson and this article continues to haunt me), this article focuses on the course designer making sure that all content is presented in multiple modalities to make it accessible for students of all learning styles. Saying something should be designed for auditory, visual, and tactile learners is a far stretch from advising how to “arrange media and content to help students and teachers transfer knowledge most effectively.”

Grant, M., & Barbour, M. K. (2013). Mobile teaching and learning in the classroom online: Case studies in K-12. In Z. L. Berge & L. Muilenburg (Eds.), Handbook of mobile learning (pp. 285-292). New York: Routledge.

  • Standard D

This book chapter focuses on the use of mobile technologies in the K-12 environment, specifically by reviewing two projects: 1) the deployment of iPads for classroom-based science teachers, and 2) the use of a mobile LMS in a supplemental online learning class. As a reminder, Standard D reads, “the online teacher promotes student success through clear expectations, prompt responses, and regular feedback.” The first project had nothing to do with K-12 online and/or blended learning, and the second project was the same one reported in Barbour, Grzebyk and Eye (2014) above – where the QM authors incorrectly said it was research to support Standard B, with no mention of Standard D at the time (and for good reason).

Hawkins, A., Barbour, M. K., & Graham, C. R. (2011). Strictly business: Teacher perceptions of interaction in virtual schooling. Journal of Distance Education, 25(2).

  • Standard C

This journal article examines a case study of the perceptions of eight virtual school teachers on the role of interaction in their teaching. The study found that “the main procedural interactions focused on notifications sent to inactive students. Social interactions were minimal and viewed as having little pedagogical value. Institutional barriers such as class size and an absence of effective tracking mechanisms limited the amount and types of interaction teachers engaged in.” As a reminder, Standard C reads, “the online teacher plans, designs, and incorporates strategies to encourage active learning, application, interaction, participation, and collaboration in the online environment.” As the study specifically focused on how teachers perceive their interactions, and which ones they believe to be effective, this study does support the standard.

Hawkins, A., Barbour, M. K., & Graham, C. R. (2010). Teacher-student interaction and academic performance at Utah’s Electronic High School. Proceedings of the 26th Annual Conference on Distance Teaching and Learning (pp. 251-255). Madison, WI: The Board of Regents of the University of Wisconsin System.

  • Standard C

This conference proceeding was an odd choice, as it was simply the combined data of these studies:

Hawkins, A., Graham, C., Sudweeks, R., & Barbour, M. K. (2013). Course completion rates and student perceptions of the quality and frequency of interaction in a virtual high school. Distance Education, 34(1), 64-83.

Hawkins, A., Barbour, M. K., & Graham, C. (2012). “Everybody is their own island”: Teacher disconnection in a virtual school. The International Review of Research in Open and Distance Learning, 13(2).  Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/967

Be that as it may, as this proceeding was part of the same dissertation described in the previous Hawkins, Barbour and Graham (2011) entry, this study does support the standard.

Johnston, S., & Barbour, M. K. (2013). Measuring success: Examining achievement and perceptions of online advanced placement students. American Journal of Distance Education, 27(1), 16-28. doi:10.1080/08923647.2013.755072.

  • Standard H

This journal article examines student performance in online vs. classroom-based courses, with the addition of student perceptions of their online course experience compared with the classroom-based AP experience. It is a classic media comparison study, with a bit of student opinion thrown in. As a reminder, Standard H reads, “the online teacher develops and delivers assessments, projects, and assignments that meet standards-based learning goals and assesses learning progress by measuring student achievement of the learning goals.” Can someone explain to me how a media comparison study has anything to do with this standard?


Note that I have left in the colour-coding from above, to allow the reader to understand whether the item is peer-reviewed, research-based, and focused on K-12. I have used that same colour-coding to indicate whether, in my judgement, the particular piece of literature actually provided support for (or, in some cases, even referenced) the standard that the Quality Matters authors believe it did.

As you can see, of the 16 items from my own scholarship that were included in the report, four are in green, three in amber, and nine in red. This means that in only 25% of the cases did the authors make an accurate attribution of whether the item supported the standard. If I am being charitable and include all of the amber items too, it would still mean that in only 44% of the cases did the authors make an accurate attribution – which is still less than half.
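
For those who want to check my arithmetic, a quick sketch (the colour counts are the ones reported above):

```python
# My tally of how the QM authors' attributions held up across my 16 items.
counts = {"green": 4, "amber": 3, "red": 9}
total = sum(counts.values())  # 16 items from my scholarship in the report

strict = counts["green"] / total                          # only green = accurate
charitable = (counts["green"] + counts["amber"]) / total  # green + amber

print(f"strict: {strict:.0%}, charitable: {charitable:.0%}")  # strict: 25%, charitable: 44%
```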

So let’s break this down a bit more…

 

                                        Whether the item supports the standard
                                        Green     Amber     Red
Whether the item should     Green         4         1        5
have been in the sample     Amber         –         –        –
                            Red           –         2        4
                            Total         4         3        9

So what does this table tell us?

  • all four items that supported the standard were also items that should have been included in the sample based on the authors’ criteria;
  • none of the items that should not have been included in the sample based on the authors’ criteria were correctly attributed as supporting a specific standard;
  • if we limit the analysis to only those items that should have been included in the sample based on the authors’ criteria, only 40% of the authors’ attributions of supporting a specific standard were correct; and
  • if we limit the analysis to only those items that should have been included in the sample based on the authors’ criteria, and are being charitable (i.e., including both green and amber), still only 50% of the authors’ attributions of supporting a specific standard were correct.
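
Those bullet points can be reproduced straight from the table (the row and column labels are mine; the counts are from the table above):

```python
# Rows: whether the item should have been in the sample (my colour-coding).
# Columns: whether the QM attribution to the standard was accurate.
table = {
    "should_be_in_sample": {"green": 4, "amber": 1, "red": 5},  # 10 items
    "should_not_be_in":    {"green": 0, "amber": 2, "red": 4},  # 6 items
}

in_sample = table["should_be_in_sample"]
row_total = sum(in_sample.values())      # 10 items met the inclusion criteria
strict = in_sample["green"] / row_total  # 4/10 = 40% accurate attributions
charitable = (in_sample["green"] + in_sample["amber"]) / row_total  # 5/10 = 50%

# None of the items that should not have been in the sample were
# correctly attributed (the green cell in that row is zero).
assert table["should_not_be_in"]["green"] == 0
```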

And this analysis is based on the fact that I am treating the following item as a single item:

Barbour, M. K. (2017). The state of K-12 online learning. In J. G. Cibulka & B. S. Cooper (Eds.), Technology in the Classroom: Can It Improve Teaching and Student Learning in American Schools? (pp. 37-51). Lanham, MD: Rowman and Littlefield Education. Retrieved from https://www.academia.edu/35571422/ Barbour_M._K._2017._The_state_of_K-12_online_learning._In_J._G._Cibulka_and_B._S._Cooper_Eds._Technology_in_the_Classroom_Can_It_Improve_Teaching_and_Student_Learning_in_American_Schools_pp._37-51_._Lanham_MD

  • Standards A, B, C, D, E, F, G, H, I, J, K

If I were treating this as 11 discrete items – one for each standard attributed to it – there would suddenly be four items in green, three in amber, and 19 in red of the 26 items. That would mean that in only 15% of the cases did the authors make an accurate attribution of whether the item supported the standard (27% if we were being charitable).
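
Again, a quick sketch of that recalculation (the only assumption is that the 2017 chapter, which I coded red, is split into 11 red items, one per attributed standard):

```python
counts = {"green": 4, "amber": 3, "red": 9}  # original tally, chapter counted once
counts["red"] += 11 - 1                      # split the chapter into 11 red items: 9 -> 19
total = sum(counts.values())                 # 26 items

strict = counts["green"] / total                          # 4/26
charitable = (counts["green"] + counts["amber"]) / total  # 7/26

print(f"strict: {strict:.0%}, charitable: {charitable:.0%}")  # strict: 15%, charitable: 27%
```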

In my set-up of this entry, I wrote:

It’s kind of funny because as I scanned this document the first time, I thought to myself that this document is a good example of why K-12 practitioners don’t trust researchers.

Now that I have had a chance to examine it in a more systematic way, I would add that this document is also a good example of why researchers in the field continue to lament the amount of poor-quality research being conducted, which only serves to confuse those untrusting practitioners.

Anyway, for more on this project, you can review:

September 28, 2018

Truth Or Fiction?

An item from yesterday’s inbox.

Attend the QM Connect Conference

Truth! All Administrators Think Quality is Important. But achieving quality in online and blended learning can be challenging. That’s where QM Connect comes in. At the 10th annual QM Connect Conference you will find countless options to help you address your quality assurance challenges, including two new special sessions!

“Each year when I attend the QM Conference, I gain new information (resources) and meet people that help me be successful at my own institution. I find the growth opportunities endless.” — Sherrell Wheeler, Director of Online Quality Assurance at New Mexico State University Alamogordo

Up first on Wednesday — Technology-Enhanced Education: The ROI of Quality. Senior leaders — and those working with them — will engage with colleagues on a variety of topics and learn to employ quality as a differentiator in this series of four sessions.

Then on Thursday, attend the free QM Town Hall Breakfast. During this informal session, share your challenges, concerns and brilliant ideas for ensuring quality for all learners. To attend, select the Town Hall Breakfast during the registration process.

Plus, check out dozens of engaging and informative sessions in ten concentrations to explore key issues, including learner engagement, accessibility and sustaining a quality assurance program.

Time is running out to register. So don’t wait any longer. Make your reservations for QM Connect today — especially your hotel reservation. The deadline for booking the Marriott St. Louis Grand Hotel is October 8.


Book your room at the Marriott St. Louis Grand Hotel to get $200 off registration.

Register five or more people now and receive $59.50 off each registration fee!

Get first choice of pre-conference workshops. Each offers tools and information on improving courses and implementing quality assurance at your institution. Workshop space is limited so sign up when you register!

View this email in your browser.
Copyright © 2018 MarylandOnline, Inc. All rights reserved.
Our email marketing is permission-based. If you received a mailing from us, our records indicate that (a) you have expressly shared this address for the purpose of receiving information in the future (“opt-in”), or (b) you have registered or purchased or otherwise have an existing relationship with us. We respect your time and attention by controlling the frequency of our mailings.
QM Quality Matters, Inc. 1997 Annapolis Exchange Parkway Suite 300 Annapolis, MD 21401 USA

August 13, 2018

Stand Out With QM Program Review Certification

From the inbox over the weekend…


Dear Michael,

Thank you for your interest in Quality Matters Program Reviews — an important component of the quality assurance process and another step in delivering on your online promise.
Program Reviews offer many benefits to your institution and to students. Program Reviews can:

  • Aid in the accreditation process
  • Highlight the quality of your online programs to stakeholders
  • Help set your programs apart
  • Improve your programs through the process of qualifying for QM Certification

Chamberlain College of Nursing recently received two Program Certifications after going through the review process and couldn’t agree more about the benefits.

“Program reviews help to ensure that students have access to essential academic resources and services to support their success in the online world. They also show students that continual quality improvement and integration of their feedback is valued at Chamberlain.”
— Anne Marie Hodges, Manager, Web Development

Learn more about the benefits of Program Reviews and the process by reading the full account of Chamberlain’s experience.

If you have plans to have one or more online programs reviewed, or if you are considering Program Candidacy, we are here to help. To start, we recommend participating in the Preparing for Program Reviews workshop — a one-hour workshop that covers the demands of the process, the evidence required, and information to help you determine whether you are ready to undergo this process.

You may also complete the Learn More form on the Program Reviews page, and we will contact you to answer questions or schedule an information session.

Thank you again for your interest in Program Reviews as well as your commitment to quality assurance in online learning. We look forward to working with you as you move forward in the process.

Sincerely,
QM Program Reviews

Quality Matters (QM) is an international non-profit organization that provides tools and professional development for quality assurance in online and blended learning. When you see the QM Certification Mark, it means that courses have successfully met QM Rubric Standards for Course Design in an official course review.
View this email in your browser

Copyright © 2018 QM Quality Matters, Inc., All rights reserved.
Our email marketing is permission-based. If you received a mailing from us, our records indicate that (a) you have expressly shared this address for the purpose of receiving information in the future (“opt-in”), or (b) you have registered or purchased or otherwise have an existing relationship with us. We respect your time and attention by controlling the frequency of our mailings.

Our mailing address is:

August 9, 2018

Your Webinar Link + 3 Helpful Resources

This information from Quality Matters may be useful to some.


Put Research and the Sixth Edition
Higher Ed Rubric to Work

Thank you for your interest in our recent webinar, “The Role of Research in Developing the Sixth Edition of the QM Higher Education Rubric for Course Design.”

The webinar recording is now ready for you to view! Please note, this link is valid for 30 days. You can also visit the webinar Q&A page to find answers to some of the questions we received during the webinar—as well as all of the links we shared in the chat box.

Want to learn more? Check out the resources below!

Start Using the Sixth Edition to Meet Course Design Standards

Did you know that courses are 42% more likely to meet Standards in the first review when they are pre-reviewed with an internal review process such as a Preparatory Review or Self Review? QM higher education members can access the newest edition of the QM Higher Education Rubric in the CRMS and test it out with a Self-Review. And anyone can enroll in the APPQMR or IYOC to put the research into practice and learn how to use the Standards to improve courses. Not a member yet? Check out the list of Specific Review Standards and see if QM Membership is right for you!

QM HE Members: Complete the Free Rubric Update

Attend this online, self-paced session to see what’s different about the newest edition of the HE Rubric—and retain your current QM Role. This short session is free for QM Members until December 31, 2018. Register now for the Rubric Update.

Learn the ABCs of QM-Focused Research

Did you know QM Rubrics rely on research from the QM Community? Sign up for this workshop to learn the steps for developing a research study with the goal of improving online education. The next workshop starts November 14, 2018. Members save $100.

Do you have a research interest you’d like to see addressed in a QM research webinar?
Let us know. Send your ideas to QM’s Manager of Research & Development Barbra Burch.

Quality Matters (QM) is an international, US-based non-profit organization specializing in standards, processes and professional development for quality assurance in online and blended learning. QM tools and resources are regularly revised to reflect current research and best practices. When you see QM Certification Marks on courses or programs, it means they have met QM Course Design Standards or QM Program Review Criteria in a rigorous review process.

July 25, 2018

Go Behind The Scenes Of The QM Higher Education Rubric

From yesterday’s inbox…

Explore the Process and Research Used
to Develop the Sixth Edition

QM’s free research webinar series continues with an in-depth look at the research behind the Quality Matters Higher Education Rubric, Sixth Edition. Join QM Staff, QM Research Colleagues and guest experts — Tuesday, July 31, at 1 p.m. Eastern — for an overview of what can only be described as a robust process.

“The literature review involved a team of higher education and instructional design researchers inspecting, sorting, and grading over 1200 articles, theses and dissertations from the education, technical, and psychological literature to select the best evidence to support the Rubric Standards and identify new trends.” — Wade Lee, Research Engagement Librarian, University of Toledo

The presentation will include:

  • Why a new Rubric was developed.
  • What changes were made.
  • An overview of the Rubric update process and the people involved.
  • A look at the research involved including the literature review process and explanations of other important data collection and analysis.

Ready to dive into the research? Join us for “The Role of Research in Developing the Sixth Edition of the QM Higher Education Rubric for Course Design” webinar on Tuesday, July 31, at 1 p.m. Eastern. Register for the free research webinar today!

Register Now

