Virtual School Meanderings

January 13, 2021

LPI Scholars Recognized for Having High Impact on Policy and Practice in Annual Rankings

This is an interesting item.  You may recall that last year when this was released I did a revised K-12 Online Learning Version (see the methodology we used here).  Be sure to keep reading below.


2021 RHSU Rankings Show LPI Scholars Continue to Have Big Impact on Education Practice and Policy

LPI scholars continue to make significant impact as they work to further access and equity in education. This year’s RHSU Edu-Scholar rankings, released on January 6 by Education Week, featured 28 LPI senior fellows, board members, and authors. This honor is given to 200 out of 20,000 eligible university-affiliated scholars making valuable contributions to educational policy and practice through their work.

Rick Hess, Resident Scholar and Director of Education Policy Studies at the American Enterprise Institute and a blogger at Education Week, compiles the list with the help of a 28-member selection committee. Hess calculates impact based on a wide range of metrics: Google Scholar score, book points, highest Amazon ranking, syllabus points, education press mentions, web mentions, newspaper mentions, Congressional Record mentions, and Followerwonk’s “Social Authority” score (which measures Twitter impact).

The 2021 list features 28 LPI affiliates—including President Linda Darling-Hammond at #3—who have been engaged in LPI research, outreach, and policy development in a variety of ways.

Bruce D. Baker, Rutgers University (#65)
W. Steven Barnett, Rutgers University (#55)
David Berliner, Arizona State University (#50)
Prudence L. Carter, University of California, Berkeley (#170)
Linda Darling-Hammond, Stanford University, Learning Policy Institute (#3)
Patricia Gandara, University of California, Los Angeles (#55)
Howard Gardner, Harvard University (#7)
Gene Glass, Arizona State University (#21)
Kris D. Gutiérrez, University of California, Berkeley (#119)
Richard M. Ingersoll, University of Pennsylvania (#99)
Kirabo (Bo) Jackson, Northwestern University (#149)
Susan Moore Johnson, Harvard University (#22)
Rucker Johnson, University of California, Berkeley (#165)
David Kirp, University of California, Berkeley (#87)
Michael W. Kirst, Stanford University (#106)
Helen (Sunny) F. Ladd, Duke University (#48)
Gloria Ladson-Billings, University of Wisconsin-Madison (#2)
Carol D. Lee, Northwestern University (#114)
Henry M. Levin, Teachers College, Columbia University (#42)
Julie Marsh, University of Southern California (#164)
Jal Mehta, Harvard University (#139)
Pedro A. Noguera, University of California, Los Angeles (#11)
Jeannie Oakes, University of California, Los Angeles (#73)
Gary Orfield, University of California, Los Angeles (#29)
Aaron M. Pallas, Teachers College, Columbia University (#135)
Sean Reardon, Stanford University (#107)
Michael A. Rebell, Teachers College, Columbia University (#169)
Jesse Rothstein, University of California, Berkeley (#27)


Copyright © 2021 Learning Policy Institute, All rights reserved.

The Learning Policy Institute conducts and communicates independent, high-quality research to improve education policy and practice. Working with policymakers, researchers, educators, community groups, and others, the Institute seeks to advance evidence-based policies that support empowering and equitable learning for each and every child. Nonprofit and nonpartisan, the Institute connects policymakers and stakeholders at the local, state, and federal levels with the evidence, ideas, and actions needed to strengthen the education system from preschool through college and career readiness.


Even more interesting: if we applied the exact same methodology used by the RHSU folks to the 2020 data that we collected, these would be the results.

April 13, 2020

The Problem With Sensationalized Headlines – #media #fail

So this “news” item came across my electronic desk from several different directions late last week.

4 In 10 U.S. Teens Say They Haven’t Done Online Learning Since Schools Closed



With most schools closed nationwide because of the coronavirus pandemic, a national poll of young people ages 13 to 17 suggests distance learning has been far from a universal substitute.

The poll of 849 teenagers, by Common Sense Media, conducted with SurveyMonkey, found that as schools across the country transition to some form of online learning, 41% of teenagers overall, including 47% of public school students, say they haven’t attended a single online or virtual class.

This broad lack of engagement with online learning could be due to many factors. The survey was conducted between March 24 and April 1; some districts may have been on spring break or not have begun regular online classes.

To continue reading, click here.

I’m not even sure where to start here, but maybe I’d recommend that the reader review the previous blog entry, where I highlighted a recent article by Chuck Hodges and several of his colleagues entitled “The Difference Between Emergency Remote Teaching and Online Learning.”  Four in 10 students haven’t done emergency remote teaching!  Unless they were engaged in an online learning course prior to four weeks ago, none of them have engaged in online learning or virtual learning or even distance learning at this stage – and the author of the news item uses all three terms interchangeably!

As I have said before, the reality is that teachers are employing what I saw one educator refer to as pandemic pedagogy.  They are searching for any tools – online or otherwise – to try and provide some form of remote instruction for their students.  Some of these tools are consistent with tools that we would use for K-12 online learning.  However, teachers who use them for online – or blended – learning purposes have made pedagogical and instructional design decisions to use these tools well in advance of their deployment, after having carefully considered the affordances and the limitations of each of the possible tools available to them to accomplish their curriculum goals.

The teachers who find themselves using online and other tools right now are simply grasping at whatever is available that will allow them to continue to help guide the minds of the young people that they have committed to assisting on their journey for knowledge.  They are using tools that are available, that they know how to use, that students were already using, that they can learn relatively quickly, that the school or the district have selected, or any number of reasons – but in most instances sound pedagogy and strong instructional design are not among those reasons.  And that is perfectly fine when engaged in pandemic pedagogy!

In these trying times, when student, parent, and teacher alike are concerned about the future and worried about all of the unknowns, instead of focusing on collecting data about how many students aren’t being served by this mythical online learning that some in the media think will magically appear and that teachers will automatically know how to do and do well, why not focus on collecting data on the variety of ways that teachers are providing remote instruction?  Collect data on the teachers who are providing remote instruction using online tools for those who do have access, while at the same time creating equivalent remote instruction that doesn’t rely upon any online tools – essentially doubling their workload, but doing so to ensure that all of their students are able to engage in at least some semblance of normalcy in these uncertain times!

February 26, 2020

The 2020 RHSU Edu-Scholar Public Influence Rankings – K-12 Online Learning Version (Ranking)

Drum roll please…

Rachel and I unveil the 2020 RHSU Edu-Scholar Public Influence Rankings – K-12 Online Learning Version, ranking the scholars listed in Tables 1 and 2 of the article “K-12 Online Learning Journal Articles: Trends from Two Decades of Scholarship” that was published in the journal Distance Education (see here for an open access version of the article) who did the most last year to shape educational practice and policy.

See this earlier post for an explanation of the selection process and all the methodological details.

Without further ado, here are the 2020 rankings (scroll through the chart to see all names and scores or click the link below the chart to view the table in a new tab).

The 2020 RHSU Edu-Scholar Public Influence Rankings – K-12 Online Learning Version (Rubric)

Last week I was reading through The 2020 RHSU Edu-Scholar Public Influence Rankings, and it got me to thinking about what this might look like for K-12 online learning scholars.  Rick Hess describes his methodology in an article entitled “The 2020 RHSU Edu-Scholar Public Influence Scoring Rubric.”  So I figured I’d try to give it a go (with the able assistance of Rachel L. Wadham – Education and Juvenile Literature Librarian at the Harold B. Lee Library, Brigham Young University)…

The results of the analysis will be posted in the next entry – which is available at:

The 2020 RHSU Edu-Scholar Public Influence Rankings – K-12 Online Learning Version (Ranking)

I began by using all of the scholars listed in Tables 1 and 2 of the article “K-12 Online Learning Journal Articles: Trends from Two Decades of Scholarship” that was published in the journal Distance Education (see here for an open access version of the article).

Table 1.
Google Scholar and Scopus Ranking of K-12 Online Learning Scholars by Arnesen et al. (2019)

Google Scholar               Scopus
Articles  Points   Scholar                       Articles  Points
57        149      Michael Barbour               11        23
19        47       Cathy Cavanaugh               6         13
18        49       Ken Stevens                   4         12
16        44       Elizabeth Murphy              8         24
15        26       Charles Graham                7         14
14        38       Margaret Roblyer              3         8
14        35       Jered Borup                   6         18
12        33       Leanna Archambault            6         16
11        26       Diana Greer                   3         6
10        24       Dennis Beck                   4         9
10        24       Niki Davis                    5         11
9         22       Kathryn Kennedy               4         10
8         22       Kevin Oliver                  2         6
8         19       Dennis Mulcahy                0         0
8         16       Maria Rodriguez-Manzanares    6         12
8         14       Richard Ferdig                5         10
7         21       Glenn Russell                 1         3
7         14       Sean Smith                    3         7
7         12       Erik Black                    4         7
5         10       Meredith DiPietro             3         7
5         5        Randall Davies                3         3

The analysis began with the overall rankings in the two tables in Arnesen et al.  Then I went through and attempted to apply the categories listed in Hess’ rubric in the manner described below.

In describing the 2020 RHSU Edu-Scholar Public Influence Scoring Rubric, Hess (2020b) indicated that each scholar was scored in nine categories with a potential maximum score of 200.  We attempted to apply each category, as appropriate, to the list of scholars developed from Arnesen et al. (2019) – with the exception that we did not cap points for any of the categories.

Google Scholar Score: This figure gauges the number of articles, books, or papers a scholar has authored that are widely cited. A useful, popular way to measure the breadth and impact of a scholar’s work is to tally works in descending order of how often each is cited and then identify the point at which the number of oft-cited works exceeds the cite count for the least-frequently cited. (This is known in the field as a scholar’s “h-index.”) …the measure recognizes that bodies of scholarship matter greatly for influencing how important questions are understood and discussed. The search was conducted using the advanced search “author” filter in Google Scholar. For those scholars who have created a Google Scholar account, their h-index was available at a glance. For those scholars without a Google Scholar account, a hand search was used to calculate their score and cull out works by other, similarly named, individuals. While Google Scholar is less precise than more specialized citation databases, it has the virtue of being multidisciplinary and publicly accessible. (¶ 6)

We applied this category to the K-12 distance/online learning scholars in two ways – both of which were consistent with Hess’ description.  First, 12 of the 21 scholars had established profiles on Google Scholar, so their h-index was available at a glance.  Second, for the remaining nine scholars, the researchers used the software Publish or Perish to determine the h-index between 17 and 19 January 2020.
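For readers unfamiliar with the measure, the h-index calculation that Hess describes can be sketched in a few lines of Python.  The citation counts below are hypothetical, purely for illustration:

```python
def h_index(citation_counts):
    """Largest h such that the scholar has h works cited at least h times each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this work still clears the bar
        else:
            break  # counts are sorted, so no later work can
    return h

# Hypothetical citation counts for one scholar's works:
print(h_index([57, 42, 30, 12, 9, 6, 4, 1]))  # 6 works cited at least 6 times -> 6
```

This is the same quantity Google Scholar reports on a profile page and that Publish or Perish computes from a hand search.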

Hess (2020b) described the second Edu-Scholar Public Influence Scoring Rubric category as:

Book Points: A search on Amazon tallied the number of books a scholar has authored, co-authored, or edited. Scholars received 2 points for a single-authored book, 1 point for a co-authored book in which they were the lead author, a half-point for co-authored books in which they were not the lead author, and a half-point for any edited volume. The search was conducted using an “Advanced Books Search” for the scholar’s first and last name. (On a few occasions, a middle initial or name was used to avoid duplication with authors who had the same name, e.g., “David Cohen” became “David K. Cohen.”) The search only encompassed “Printed Books” (one of several searchable formats) so as to avoid double-counting books available in other formats. This means that books released only as e-books are omitted. To date, however, few scholars on this list pen books that are published solely as e-books. “Out of print” volumes were excluded, as were reports, commissioned studies, and special editions of magazines or journals. This measure reflects the conviction that the visibility, packaging, and permanence of books allows them to play an outsized role in influencing policy and practice. (¶ 7)

This category was applied as described by Hess.
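Hess’ book-point weights are simple enough to express directly.  This sketch assumes a hypothetical record of a scholar’s role on each book; the role labels are my own, not Hess’ terminology:

```python
# Points per Hess' rubric: 2 for a single-authored book, 1 for a co-authored
# book as lead author, 0.5 as a non-lead co-author, and 0.5 for any edited volume.
BOOK_POINTS = {
    "single_author": 2.0,
    "lead_coauthor": 1.0,
    "coauthor": 0.5,
    "edited": 0.5,
}

def book_points(roles):
    """Total book points for a scholar, given their role on each counted book."""
    return sum(BOOK_POINTS[role] for role in roles)

# Hypothetical scholar: one solo book, one lead-authored book, two edited volumes.
print(book_points(["single_author", "lead_coauthor", "edited", "edited"]))  # 4.0
```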

The next category was Highest Amazon Ranking, which Hess described as reflecting “the scholar’s highest-ranked book on Amazon. The highest-ranked book was subtracted from 400,000 and the result was divided by 20,000 to yield a maximum score of 20” (¶ 8).  This category was excluded because of the lack of books identified in the previous category; the only scholar with any substantial number of books was Rick Ferdig, and only two of those books were actually focused on K-12 online learning.
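Although we excluded this category, Hess’ formula is easy to state: subtract the book’s best sales rank from 400,000 and divide by 20,000, so a rank of 1 scores just under 20 points.  A minimal sketch (the rank of 38,500 is made up, and the floor at zero for ranks beyond 400,000 is my assumption about how Hess handles them):

```python
def amazon_points(best_rank):
    """Convert a scholar's best (lowest-numbered) Amazon sales rank into points."""
    # Floored at 0 so ranks beyond 400,000 do not go negative (an assumption).
    return max(0.0, (400_000 - best_rank) / 20_000)

print(amazon_points(38_500))   # (400000 - 38500) / 20000 = 18.075
print(amazon_points(400_000))  # a rank at or beyond 400,000 scores 0.0
```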

Hess (2020b) described the fourth Edu-Scholar Public Influence Scoring Rubric category as:

Syllabus Points: This seeks to measure a scholar’s long-term academic impact on what is being read by the rising generation of university students. This metric was scored using, the most comprehensive database of syllabi in existence. It houses over 6 million syllabi from across American, British, Canadian, and Australian universities. A search of the database was used to identify each scholar’s top-ranked text. (¶ 9)

This category was applied as described.  Essentially, the score was the total number of results identified in the database when I searched both the scholar’s first name and surname AND the scholar’s surname with their initials.

Hess’ fifth category was Education Press Mentions, which measured the total number of times the scholar was quoted or mentioned in Education Week, the Chronicle of Higher Education, or Inside Higher Ed during 2019.  Unfortunately, searches of the first third of the Arnesen et al. (2019) scholars yielded zero results.  As such, it was decided that any education press mentions would be included in the Newspaper Mentions category.

Hess (2020b) described the sixth Edu-Scholar Public Influence Scoring Rubric category as:

Web Mentions: This reflects the number of times a scholar was referenced, quoted, or otherwise mentioned online in 2019. The intent is to use a “wisdom of crowds” metric to gauge a scholar’s influence on the public discourse last year. The search was conducted using Google. The search terms were each scholar’s name and university affiliation (e.g., “Bill Smith” and “Rutgers University”). Using affiliation served a dual purpose: It avoids confusion due to common names and increases the likelihood that mentions are related to university-affiliated activity. Variations of a scholar’s name (such as common diminutives and middle initials) were included in the results, if applicable. (¶ 11)

This category was applied as described by Hess.  The search was also conducted using the scholar’s name, with no initial, and their most recent organizational affiliation.

Hess described the seventh category of the Edu-Scholar Public Influence Scoring Rubric as:

Newspaper Mentions: A Lexis Nexis search was used to determine the number of times a scholar was quoted or mentioned in U.S. newspapers. Again, searches used a scholar’s name and affiliation; diminutives and middle initials, if applicable, were included in the results. To avoid double counting, the scores do not include any mentions from Education Week, the Chronicle of Higher Education, or Inside Higher Ed. (¶ 12)

A search of Newspapers by ProQuest was used to determine the number of times a scholar was quoted or mentioned in newspapers.  Newspapers by ProQuest indexes over 700 news sources from the United States, Canada, Europe, Africa, Asia, Latin America, and Australia.  ProQuest was used instead of Lexis Nexis because of limited access through the researchers’ institutions.  ProQuest also represented a purer collection of newspapers, so using this source did not require the removal of any mentions from the education news sources that Hess was required to eliminate.  The search was conducted during a two-week period in February 2020.  Searches used the scholar’s first and last name in both direct and reverse format (i.e., “first name last name” or “last name, first name”).  Since newspaper citations almost always use full-name references at least once, it was deemed unnecessary to use diminutives and middle initials.  Results were then narrowed by date to return only mentions published 1 January 2019 to 31 December 2019.  For those unique names that returned fewer than 100 results, each result was viewed and applicable ones were noted; obituaries, editorials, or other items that were clearly not academic citations, as well as names connected to other disciplines (for example, botany, sports, or film), were disregarded.  For those names that returned over 100 results within the narrowed date frame, further narrowing was required.  To accomplish this, key terms indicating the researchers’ discipline were added (i.e., (education or teach* or learn* or design or distance or online or blended or instruction or student*)).  In every instance this addition returned fewer than 100 results, and each result was viewed and noted if it was applicable.
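The name-and-discipline narrowing used for the newspaper searches can be sketched as a small query builder.  The Boolean syntax here is illustrative rather than exact ProQuest syntax, and the function name is my own:

```python
# Discipline terms added when a name returned over 100 results (from the post above).
DISCIPLINE_TERMS = ("education", "teach*", "learn*", "design", "distance",
                    "online", "blended", "instruction", "student*")

def newspaper_query(first, last, narrow=False):
    """Build the direct/reverse name search, optionally adding discipline terms."""
    name = f'("{first} {last}" OR "{last}, {first}")'
    if narrow:
        return f"{name} AND ({' OR '.join(DISCIPLINE_TERMS)})"
    return name

print(newspaper_query("Michael", "Barbour"))
print(newspaper_query("Michael", "Barbour", narrow=True))
```

The date restriction (1 January 2019 to 31 December 2019) would be applied through the database’s own date filter rather than the query string.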

Hess’ eighth category was Congressional Record Mentions, which was described as “a simple name search in the Congressional Record for 2019 determined whether a scholar was referenced by a member of Congress” (¶ 13).  An initial search of the Congressional Record for the first third of the Arnesen et al. (2019) scholars yielded zero results; as such, this category was excluded from the analysis.

Finally, Hess (2020b) described the ninth Edu-Scholar Public Influence Scoring Rubric category as:

Twitter Score: Since Kred and Klout no longer score individual twitter accounts, this year Followerwonk’s “Social Authority” score was used. Followerwonk scores each Twitter account on a scale of 0-100 based on the retweet rate of the user’s few hundred most recent tweets, with an emphasis placed on more recent tweets while accounting for other user-specific variables (such as follower count). While I’m highly ambivalent about the role played by social media, it’s indisputable that many public scholars exert significant influence via their social-media activity—and the lion’s share of this activity plays out on Twitter. (¶ 14)

This category was applied as described above.

January 19, 2012

Uncritical Media Acceptance Without Question

Over the past few months, I’ve used this blog to be more critical about the neo-liberal and neo-conservative proponents of K-12 online learning.  However, one of the groups that I haven’t taken to task – and that quite deserves it – is the media.  In the United States, it seems that the media have taken to reporting what they are told without questioning the information – and this has been particularly true when it comes to the discussions of educational reform.  Take this Education Week article as an example:

Digital Learning Addresses Challenges in K-12 Ed.

By Katie Ash on January 4, 2012 5:50 PM

The Alliance for Excellent Education recently released a new brief called “The Digital Learning Imperative: How Technology and Teaching Meet Today’s Educational Challenges,” as a follow-up to a similar report issued nearly two years ago.

The group and its founder, former Gov. Bob Wise, are also partners with former Gov. Jeb Bush and his Foundation for Excellence in Education, with the two teaming up to launch the Digital Learning Now initiative in the fall of 2010, an initiative aimed at getting states to pass policies more friendly to digital learning, but that some critics have alleged has ulterior commercial motives.

The update notes the latest developments in the ed-tech arena while acknowledging that challenges raised in the original report still affect today’s teachers and learners. The report breaks those challenges into three categories:

  1. Students are not leaving high school prepared for a global economy and fast-paced workplace. Only 72 percent of high schoolers graduate and many go into college needing remediation, the report says.
  2. Schools are facing budget shortfalls and do not anticipate having new major funding sources in the near future. In fiscal year 2012, 42 states will be facing $103 billion in budget gaps, the report says.
  3. Not all students have access to high-quality teachers, teaching strategies, and learning experiences. Today’s teachers, on average, have only 1-2 years of experience, down from an average of 15 years of experience in 1987. And many schools do not have access to teachers in specific subject areas, such as physics or chemistry, especially in rural areas, says the report.

Technology can help address these gaps and challenges, the report says, through real-time data and assessment feedback, a variety of online and digital content, and increased communication among teachers, parents, and students.

“As schools and districts explore the many opportunities that digital learning affords teachers and students, especially with today’s global economy and demands for innovation, they see the potential for meeting the needs of increasingly diverse students more effectively,” it reads.

And with technology enabling a greater variety of course offerings, students can take courses they find more compelling—and avoid the boredom many dropouts say is a reason they fail to graduate.

Categories: Disruptive Technology, Research/Reports 

Let’s look at some of the claims, shall we…

Students are not leaving high school prepared for a global economy and fast-paced workplace.

Really?  I guess this means that industries that rely heavily upon younger employees – like technology, for example – are simply failing now that we have all of these unprepared workers entering the workplace.  Or maybe it means that only industries in the United States, as that seems to be where this education problem exists, are failing due to the influx of these unprepared younger workers?  Moving on…

Technology can help address these gaps and challenges.

Really?  Guess the Digital Learning Now folks (along with the article’s author) missed that article published in the Review of Educational Research last year entitled “What Forty Years of Research Says About the Impact of Technology on Learning: A Second-Order Meta-Analysis and Validation Study,” which basically says that:

“it is arguable that it is aspects of the goals of instruction, pedagogy, teacher effectiveness, subject matter, age level, fidelity of technology implementation, and possibly other factors that may represent more powerful influences on effect sizes than the nature of the technology intervention.”

But why let the facts get in the way of a good ideologically-motivated argument (and the media repeating without question the same lie).  Again, moving on…

And with technology enabling a greater variety of course offerings, students can take courses they find more compelling—and avoid the boredom many dropouts say is a reason they fail to graduate.

Really?  So the reason students aren’t graduating is because they are bored?  And offering courses using technology will automatically cure that boredom?  Or is it providing a wider range of courses that will fix the boredom and, thus solve the graduation problems in the United States?  I don’t even know where to begin with this simplistic, naive statement.

These are just three examples; I could have pulled out more unqualified, unquestioned statements that the author of this article simply accepted as fact!

And it is this kind of simplicity and naivety – and a simple unquestioning acceptance – that is the problem!!!  The basic premise of this article – and of the Digital Learning Now report – is that by adopting online learning – really any kind of online learning that has “real-time data and assessment feedback” – we will automatically get “increased communication” among teachers, parents, and students and also “address these gaps and challenges!”  Basically, digital learning – as long as it has “real-time data and assessment feedback” – is the silver bullet we’ve all been looking for.  Not only does some non-partisan, unbiased, independent organization say so in a fancy, glossy report; the news says so too.

It is kind of funny that it was only this past week that I learned that it is illegal in Canada for a broadcaster to lie in a newscast.  I wonder if laziness and a lack of due diligence would be considered lying under this Canadian law?  It’d also be interesting to speculate about what might happen to the nature of the media (and the political discourse) if the same law were applied in the United States…
