Virtual School Meanderings

February 26, 2020

The 2020 RHSU Edu-Scholar Public Influence Rankings – K-12 Online Learning Version (Rubric)

Last week I was reading through The 2020 RHSU Edu-Scholar Public Influence Rankings, and it got me thinking about what this might look like for K-12 online learning scholars.  Rick Hess describes his methodology in an article entitled “The 2020 RHSU Edu-Scholar Public Influence Scoring Rubric.”  So I figured I’d give it a go (with the able assistance of Rachel L. Wadham – Education and Juvenile Literature Librarian at the Harold B. Lee Library, Brigham Young University)…

The results of the analysis will be posted in the next entry – which is available at:

The 2020 RHSU Edu-Scholar Public Influence Rankings – K-12 Online Learning Version (Ranking)

I began by using all of the scholars listed in Tables 1 and 2 of the article “K-12 Online Learning Journal Articles: Trends from Two Decades of Scholarship” that was published in the journal Distance Education (see here for an open access version of the article).

Table 1.
Google Scholar and Scopus Ranking of K-12 Online Learning Scholars by Arnesen et al. (2019)

| Google Scholar: # of Articles | Google Scholar: Points | Scholar | Scopus: # of Articles | Scopus: Points |
|---|---|---|---|---|
| 57 | 149 | Michael Barbour | 11 | 23 |
| 19 | 47 | Cathy Cavanaugh | 6 | 13 |
| 18 | 49 | Ken Stevens | 4 | 12 |
| 16 | 44 | Elizabeth Murphy | 8 | 24 |
| 15 | 26 | Charles Graham | 7 | 14 |
| 14 | 38 | Margaret Roblyer | 3 | 8 |
| 14 | 35 | Jered Borup | 6 | 18 |
| 12 | 33 | Leanna Archambault | 6 | 16 |
| 11 | 26 | Diana Greer | 3 | 6 |
| 10 | 24 | Dennis Beck | 4 | 9 |
| 10 | 24 | Niki Davis | 5 | 11 |
| 9 | 22 | Kathryn Kennedy | 4 | 10 |
| 8 | 22 | Kevin Oliver | 2 | 6 |
| 8 | 19 | Dennis Mulcahy | 0 | 0 |
| 8 | 16 | Maria Rodriguez-Manzanares | 6 | 12 |
| 8 | 14 | Richard Ferdig | 5 | 10 |
| 7 | 21 | Glenn Russell | 1 | 3 |
| 7 | 14 | Sean Smith | 3 | 7 |
| 7 | 12 | Erik Black | 4 | 7 |
| 5 | 10 | Meredith DiPietro | 3 | 7 |
| 5 | 5 | Randall Davies | 3 | 3 |

The analysis began with the overall rankings in the two tables in Arnesen et al.  Then I went through and attempted to apply the categories listed in Hess’ rubric in the manner described below.

In describing the 2020 RHSU Edu-Scholar Public Influence Scoring Rubric, Hess (2020b) indicated that each scholar was scored in nine categories with a potential maximum score of 200.  We attempted to apply each category, as appropriate, to the list of scholars developed from Arnesen et al. (2019) – with the exception that we did not cap points for any of the categories.

Google Scholar Score: This figure gauges the number of articles, books, or papers a scholar has authored that are widely cited. A useful, popular way to measure the breadth and impact of a scholar’s work is to tally works in descending order of how often each is cited and then identify the point at which the number of oft-cited works exceeds the cite count for the least-frequently cited. (This is known in the field as a scholar’s “h-index.”) …the measure recognizes that bodies of scholarship matter greatly for influencing how important questions are understood and discussed. The search was conducted using the advanced search “author” filter in Google Scholar. For those scholars who have created a Google Scholar account, their h-index was available at a glance. For those scholars without a Google Scholar account, a hand search was used to calculate their score and cull out works by other, similarly named, individuals. While Google Scholar is less precise than more specialized citation databases, it has the virtue of being multidisciplinary and publicly accessible. (¶ 6)

We applied this category to the K-12 distance/online learning scholars in two ways – both consistent with Hess’ description.  First, 12 of the 21 scholars had established profiles on Google Scholar, so their h-index was available at a glance.  Second, for the remaining nine scholars, the researchers used the software Publish or Perish to determine the h-index between 17 and 19 January 2020.
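The h-index that Google Scholar and Publish or Perish report can be sketched as follows; the function name and the sample citation counts are illustrative, not drawn from the rankings above:

```python
# Minimal sketch: computing an h-index from a list of per-publication
# citation counts, as Google Scholar or Publish or Perish would report it.
def h_index(citations):
    """Largest h such that at least h works have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4, and 3 times yield an h-index of 4,
# because four papers have at least 4 citations but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```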

Hess (2020b) described the second Edu-Scholar Public Influence Scoring Rubric category as:

Book Points: A search on Amazon tallied the number of books a scholar has authored, co-authored, or edited. Scholars received 2 points for a single-authored book, 1 point for a co-authored book in which they were the lead author, a half-point for co-authored books in which they were not the lead author, and a half-point for any edited volume. The search was conducted using an “Advanced Books Search” for the scholar’s first and last name. (On a few occasions, a middle initial or name was used to avoid duplication with authors who had the same name, e.g., “David Cohen” became “David K. Cohen.”) The search only encompassed “Printed Books” (one of several searchable formats) so as to avoid double-counting books available in other formats. This means that books released only as e-books are omitted. To date, however, few scholars on this list pen books that are published solely as e-books. “Out of print” volumes were excluded, as were reports, commissioned studies, and special editions of magazines or journals. This measure reflects the conviction that the visibility, packaging, and permanence of books allows them to play an outsized role in influencing policy and practice. (¶ 7)

This category was applied as described by Hess.
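Hess’ Book Points scoring can be summarized in a short sketch; the role labels (“single,” “lead,” and so on) are our own shorthand for the authorship categories he describes:

```python
# Minimal sketch of the Book Points rubric: 2 points for a single-authored
# book, 1 point for a co-authored book as lead author, and a half-point
# for a co-authored book (not lead) or an edited volume.
BOOK_POINTS = {
    "single": 2.0,    # sole author
    "lead": 1.0,      # lead author of a co-authored book
    "coauthor": 0.5,  # co-author, not lead
    "edited": 0.5,    # editor of an edited volume
}

def book_points(roles):
    """Sum rubric points over a scholar's list of authorship roles."""
    return sum(BOOK_POINTS[role] for role in roles)

# A scholar with one sole-authored book, one lead-authored book,
# and two edited volumes:
print(book_points(["single", "lead", "edited", "edited"]))  # 4.0
```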

The next category was Highest Amazon Ranking, which Hess described as reflecting “the scholar’s highest-ranked book on Amazon. The highest-ranked book was subtracted from 400,000 and the result was divided by 20,000 to yield a maximum score of 20” (¶ 8).  This category was excluded because of the lack of books identified in the previous category; the only scholar with any substantial number of books was Rick Ferdig, and only two of those books were actually focused on K-12 online learning.
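Although we excluded this category, the formula Hess quotes is simple enough to sketch; the function name is ours, and `best_rank` stands for the scholar’s best (lowest) Amazon sales rank:

```python
# Minimal sketch of the Highest Amazon Ranking formula: subtract the best
# (lowest) sales rank from 400,000 and divide by 20,000, for a maximum of
# 20 points; ranks beyond 400,000 are floored at zero.
def amazon_rank_score(best_rank):
    return max(0.0, min((400_000 - best_rank) / 20_000, 20.0))

print(amazon_rank_score(400_000))  # 0.0 (a rank of 400,000 earns nothing)
print(amazon_rank_score(1))        # just under the 20-point maximum
```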

Hess (2020b) described the fourth Edu-Scholar Public Influence Scoring Rubric category as:

Syllabus Points: This seeks to measure a scholar’s long-term academic impact on what is being read by the rising generation of university students. This metric was scored using OpenSyllabusProject.org, the most comprehensive database of syllabi in existence. It houses over 6 million syllabi from across American, British, Canadian, and Australian universities. A search of the database was used to identify each scholar’s top-ranked text. (¶ 9)

This category was applied as described.  Essentially, the score was the total number identified in the database when I searched both the scholar’s first name and surname AND the scholar’s name with their initials.

Hess’ fifth category was Education Press Mentions, which measured the total number of times the scholar was quoted or mentioned in Education Week, the Chronicle of Higher Education, or Inside Higher Ed during 2019. Unfortunately, searches for the first third of the Arnesen et al. (2019) scholars yielded zero results.  As such, it was decided that any education press mentions would be included in the Newspaper Mentions category.

Hess (2020b) described the sixth Edu-Scholar Public Influence Scoring Rubric category as:

Web Mentions: This reflects the number of times a scholar was referenced, quoted, or otherwise mentioned online in 2019. The intent is to use a “wisdom of crowds” metric to gauge a scholar’s influence on the public discourse last year. The search was conducted using Google. The search terms were each scholar’s name and university affiliation (e.g., “Bill Smith” and “Rutgers University”). Using affiliation served a dual purpose: It avoids confusion due to common names and increases the likelihood that mentions are related to university-affiliated activity. Variations of a scholar’s name (such as common diminutives and middle initials) were included in the results, if applicable. (¶ 11)

This category was applied as described by Hess.  The search was also run with the scholar’s name (with no initial) and their most recent organization.

Hess described the seventh category of the Edu-Scholar Public Influence Scoring Rubric as:

Newspaper Mentions: A Lexis Nexis search was used to determine the number of times a scholar was quoted or mentioned in U.S. newspapers. Again, searches used a scholar’s name and affiliation; diminutives and middle initials, if applicable, were included in the results. To avoid double counting, the scores do not include any mentions from Education Week, the Chronicle of Higher Education, or Inside Higher Ed. (¶ 12)

A search of Newspapers by ProQuest was used to determine the number of times a scholar was quoted or mentioned in newspapers.  Newspapers by ProQuest indexes over 700 news sources from the United States, Canada, Europe, Africa, Asia, Latin America, and Australia.  ProQuest was used instead of Lexis Nexis because of limited access through the researchers’ institutions.  ProQuest also offered a purer representation of newspapers, so using this source did not require removing the mentions from the education news sources that Hess was required to eliminate.  The search was conducted during a two-week period in February 2020.

Searches used the scholar’s first and last name in both direct and reverse format (i.e., “first name last name” or “last name, first name”). Since newspaper citations almost always use a full name reference at least once, it was deemed unnecessary to use diminutives and middle initials.  Results were then narrowed by date to return only mentions published January 1, 2019 to December 31, 2019.  For those unique names that returned fewer than 100 results, each result was viewed and applicable ones were noted; obituaries, editorials, or other items that were clearly not academic citations, or names connected to other disciplines (for example, botany, sports, or film), were disregarded.  For those names that returned over 100 results within the narrowed date frame, further narrowing was required.  To accomplish this, key terms indicating the researchers’ discipline were added (i.e., (education or teach* or learn* or design or distance or online or blended or instruction or student*)).  In every instance this addition returned fewer than 100 results, and each result was viewed and noted if it was applicable.
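The query-building procedure above can be sketched as follows; the query syntax is a generic illustration rather than actual ProQuest syntax, and the function names are ours:

```python
# Minimal sketch of the newspaper-search procedure: search the name in
# direct and reverse order, and append the discipline keywords only when
# an initial search returns 100 or more hits.
DISCIPLINE_TERMS = ("education OR teach* OR learn* OR design OR distance "
                    "OR online OR blended OR instruction OR student*")

def name_queries(first, last):
    """Direct and reverse forms of a scholar's name."""
    return [f'"{first} {last}"', f'"{last}, {first}"']

def newspaper_query(first, last, too_many_results=False):
    """Build the search string, narrowing by discipline terms if needed."""
    base = " OR ".join(name_queries(first, last))
    if too_many_results:  # 100+ hits on the initial search
        return f"({base}) AND ({DISCIPLINE_TERMS})"
    return base

print(newspaper_query("Michael", "Barbour"))
print(newspaper_query("Michael", "Barbour", too_many_results=True))
```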

Hess’ eighth category was Congressional Record Mentions, which was described as “a simple name search in the Congressional Record for 2019 determined whether a scholar was referenced by a member of Congress” (¶ 13).  An initial search of the Congressional Record for the first third of the Arnesen et al. (2019) scholars yielded zero results; as such, this category was excluded from the analysis.

Finally, Hess (2020b) described the ninth Edu-Scholar Public Influence Scoring Rubric category as:

Twitter Score: Since Kred and Klout no longer score individual twitter accounts, this year Followerwonk’s “Social Authority” score was used. Followerwonk scores each Twitter account on a scale of 0-100 based on the retweet rate of the user’s few hundred most recent tweets, with an emphasis placed on more recent tweets while accounting for other user-specific variables (such as follower count). While I’m highly ambivalent about the role played by social media, it’s indisputable that many public scholars exert significant influence via their social-media activity—and the lion’s share of this activity plays out on Twitter. (¶ 14)

This category was applied as described above.
