Digest No. 04 - June 2018

Rankings Reconsidered: Placing Student Engagement at Risk

Zilvinskis, J., and L. Rocconi. 2018. “Revisiting the Relationship between Institutional Rank and Student Engagement.” The Review of Higher Education 41 (2): 253–280.


In a study that examined the relationship between institutional rankings and the National Survey of Student Engagement's (NSSE) measures of student engagement, John Zilvinskis and Louis Rocconi found either no relationship or a modest negative association. The study examined ranking indicators published by U.S. News & World Report, Forbes, and Washington Monthly, all of which are widely used by the public. Results indicated that the higher an institution's ranking, the less frequent the interactions between its faculty members and students.

The central challenge in this line of inquiry is whether institutional rankings, especially popular and often-revered ones, are a valid means of understanding the student engagement experience. The authors explicitly cite this validity problem as one reason for the study. To address the issue, they drew on "research in behavioral industrial organization" (p. 257) and Hossler and Gallagher's (1987) three-phase model of college choice. From these frameworks, the authors suggest that third-party entities create and use ranking systems to lower the costs of choosing a college by providing efficient information to consumers (e.g., families) and by influencing institutional practice, ranging from mission articulation to admissions and faculty compensation. (The authors cite the work of Gonzales [2013], Melguizo and Strober [2007], and Meredith [2004] in making these claims.) They also review literature on families' use of rankings in institutional choice: In short, families with access to more social and navigational capital are more likely than families without such capital to use rankings in evaluating and selecting institutions. Taken together, it is clear that third parties––those who design and promote rankings and those who use them to evaluate institutions––often drive educational practice, including practices related to student engagement.

In terms of study design, the authors drew on two data sources: (1) responses from more than 80,000 first-year and senior students enrolled at one of 64 institutions that participated in NSSE's 2013 administration and (2) each institution's 2013 scores on three ranking platforms: Forbes's Top Colleges in the U.S., the U.S. News & World Report National University Rankings, and Washington Monthly's National University Rankings. Important to note––and carefully acknowledged by the study's authors––are these sources' limitations, including but not limited to issues of self-reporting, social desirability, and institutional self-selection into NSSE. Leaders of CIC institutions should interpret the results cautiously.


Of the ten engagement items tested for their associations with rankings, only one shared a significant relationship with all three ranking platforms: student-faculty interaction. Two features of the measure and the study design should be kept in mind. First, student-faculty interaction was measured on a frequency scale comprising four items that asked students how often they had (1) talked about career plans with a faculty member; (2) worked with a faculty member on activities other than coursework (committees, student groups, etc.); (3) discussed course topics, ideas, or concepts with a faculty member outside of class; and (4) discussed academic performance with a faculty member. In essence, this was a measure of the frequency of contact with faculty members, not of the quality of those relationships. Although other minor findings were reported, this result was the only consistent pattern that held across the three ranking platforms.

The second element—one of particular importance to the CIC membership—is that the study controlled for institutional size and control (private vs. public). Because of these controls, the negative relationship between ranking and the frequency of student-faculty engagement held consistently across these differences. For example, faculty members at smaller, private, less highly ranked institutions were more likely to engage with students than those at smaller, private, highly ranked institutions.

The authors offer several possible explanations for these findings. The first is that highly ranked institutions may attract students who need less interaction with faculty members. The second is that highly ranked institutions recruit faculty members who place less emphasis on spending time with students. The third involves the institutional ranking process itself––is it designed to measure what really matters to students as they pursue their college degrees?


While the authors, in the spirit of scholarly inquiry, offered one possible explanation for their findings––that students with less need to engage with faculty members are attracted to more highly ranked institutions––this may not hold at CIC member institutions. Students who are attracted to CIC institutions may simply have different needs than those attracted to other types of institutions (e.g., research universities). The key questions may instead be: What is a faculty member's obligation to students who ask questions about career choice, non-course-related topics, course-related topics outside of class, and their academic performance? How do faculty members at highly ranked institutions regard student engagement as part of their job?

For presidents and other senior administrators of highly ranked CIC institutions, the take-home message involves faculty work as it relates to institutional ranking: How do administrators frame the essence and importance of faculty work in light of desires to improve the institutional rankings that families use to evaluate and select institutions? How might a focus on institutional rankings compromise the frequency of student-faculty engagement? Should productivity metrics for faculty members include the frequency––and even the quality––of engagement with students?

For leaders of less highly ranked CIC institutions, the question is one of branding: How can leaders use these findings to promote the benefits of attending a smaller, less highly ranked private institution that may feature closer and more frequent student-faculty engagement?

About the Authors

John Zilvinskis is assistant professor of student affairs administration at Binghamton University.

Louis Rocconi is assistant professor of evaluation, statistics, and measurement at the University of Tennessee.

Literature Readers May Wish to Consult

Gonzales, L. D. 2013. “Faculty Sensemaking and Mission Creep: Interrogating Institutionalized Ways of Knowing and Doing Legitimacy.” The Review of Higher Education 36 (2): 179–209.

Hossler, D., and K. S. Gallagher. 1987. “Studying Student College Choice: A Three-Phase Model and the Implications for Policymakers.” College and University 62 (3): 207–221.

Melguizo, T., and M. H. Strober. 2007. “Faculty Salaries and the Maximization of Prestige.” Research in Higher Education 48 (6): 633–668.

Meredith, M. 2004. “Why Do Universities Compete in the Ratings Game? An Empirical Analysis of the Effects of the U.S. News and World Report College Rankings.” Research in Higher Education 45 (5): 443–461.