It is often said that faculty in higher education are loath to embrace change. Critics frequently point to our medieval regalia as evidence of faculty members’ recalcitrant ways, but evidence for this tendency is also found in our longstanding attachment to quaint antiquities like “chalk,” “learning for learning’s sake,” and “books.” As a once and future faculty member, I agree that faculty are, by and large, change-averse, but I would point out that this is often a more reasonable response than the perpetual swiveling required to keep up with every administrator’s pet project or crisis du jour. Moreover, faculty members in many disciplines are trained and rewarded to take the long view: we traffic in longitudinal data, historical arcs, geological and cosmological time. Regardless of the motivation, it is no secret that “change management” in higher education is a formidable task indeed, particularly when one is trying to make change in that most fundamental of units: the academic department.
Attempting to encourage change in academic departments is difficult in part because, as administrators, we have so few carrots with which to motivate the process, and those that exist are becoming increasingly scarce. The plum prize, a tenure-track line for the department, is such a rarity at many institutions that if one waited for this opportunity to encourage change, departments might need to pay attention only once per decade. Lesser prizes – enhanced travel money or departmental discretionary funds – are difficult to muster in a time of tight budgets, and are only marginally successful. One can influence department culture through revisions in tenure and promotion guidelines, but these typically affect only one individual at a time—and are virtually meaningless if the department has no one on the tenure track. Finally, sticks are as hard to come by as carrots. The kind of penalty associated with a department becoming stale – students voting with their feet, resulting in declining enrollments – can take a long time to manifest. Even when it does, the formerly popular department will be more likely to lament “kids these days” and “poor marketing” than to rethink a decades-old curriculum.
In this virtually incentive-less context, what is a dean to do? The answer is to use an existing process to help drive change: Academic Program Review, or APR. There are two types of change for which APR can be particularly effective: program modification (i.e., overhauling a department’s curricular offerings and/or pedagogical approach) and program restructuring, including program elimination. Not coincidentally, these are often the most difficult areas for administrators to influence, for myriad reasons. However, both types are increasingly common and necessary as universities face enrollment and budget fluctuations and increased scrutiny from accrediting bodies.
How great a change one can make depends on whether one is conducting a good or a transformative APR. When done well, program review can encourage departments and programs to be their own change agents, and to make better and more lasting changes than those driven from above.
Conducting a Good APR
Academic Program Review is the periodic, comprehensive evaluation of an academic department or program in a way that demonstrates continual quality assessment and improvement. APR typically evaluates program quality, student learning outcomes, faculty expertise and productivity, and the department’s contribution to the university and the community. As such, APR is perhaps the least attractive change mechanism one could possibly endorse. It is, after all, grounded in the decidedly laborious process of accreditation, a creature of educational bureaucracy and something that is loathed by faculty members perhaps even more than change itself. As a political scientist, though, I understand the power of and in bureaucracy. Bureaucracy, for those in the know, is where the action happens. While high-profile politicians are busy bloviating and cutting ribbons, it is the bureaucrats who get the job done. It is the same with APR. While the next new conference or webcast may promise change beyond our wildest administrative dreams (which, admittedly, are not very wild), it is academic program review in which we can actually accomplish the tasks at hand. It is a solid and dependable workhorse when so much else targeted at administrators is a shiny show pony. And, like any good bureaucratic implementation, APR encourages meaningful change over time, so that what once was far-fetched (accessible buildings, smoke-free bars and restaurants) becomes the norm, part of our everyday educational landscape.
My understanding of a “good” APR is one that is helpful, insightful, and effective in prompting the department to consider change. In some cases, conducting a “good” APR can be sufficient for moving a department forward, particularly if it is intrinsically motivated to do better work and/or eager to respond to students’ needs. I would argue that, based on its importance within accreditation processes, most institutions these days routinely conduct “good” APRs. In the event that your institution is not conducting good APRs, I offer the following as necessary elements for it to be helpful and insightful:
- A clear, consistent, transparent process: the APR process is standard across the college or university and is independent of other types of program accreditation (e.g., NCATE, ABET, ACS, or AACSB).
- A regular cycle: All departments and programs are placed on a staggered cycle for review (typically 5- to 7-year cycles, per the accrediting agency), so that some percentage of programs is up for APR each year.
- A budget: APR requires institutional investment. At a minimum, this includes travel expenses and stipends for reviewers, but may also include reassigned time or stipends for faculty writing the self-study; a commitment to institutional research and data collection; and staff time in managing the logistics of a visit.
- Good data: Self-studies are only as good as the data they use. Institutions that conduct good APRs provide their faculty with data packages to use in their reports. The data should speak directly to the program’s effectiveness and efficiency, and may include enrollment, retention, student surveys or focus groups, alumni outcomes, and/or contribution margin data.
- Genuine administrator interest: If you see APR as just another box to check, your faculty will, too.
- An internal reviewer: A colleague from the home campus to provide institutional context for the external reviewers.
- Clear communication with reviewers: The reviewers should know what your expectations are for the review and for their report.
- Mechanisms in the cycle for effectively closing the loop: What happens when the reviewers’ visit is concluded? Good APRs produce results that are affirmed by administrators, where both parties are held accountable for next steps.
- Evidence of impact and campus champions: the single biggest motivator for departments is seeing that the review had an impact, and that their work was not simply absorbed by the great administrative void. This might mean some sort of commitment to resources based on the review or it might mean validating a department’s seemingly peculiar way of doing things. Regardless, departments respond when they feel heard, and become campus champions for APR when they experience positive results.
Conducting a Transformative APR
In many cases, though, a “good” APR is not good enough. If the necessary changes are substantial enough – rewriting a department’s curriculum, increasing its research output, fumigating a historically toxic department culture – a good APR will provide a nudge but not the push that is needed to make real change. This is where conducting a transformative APR becomes necessary.
Based on my experience, most institutions do not typically conduct transformative APRs, and this can undermine the process’s usefulness for both a dean and a faculty. A transformative APR includes all of the previous elements with a few crucial additions:
- Program-specific questions from the administration to the reviewers: Most APRs include a set of generic questions of often marginal utility. For example, “How well does the department fulfill the mission of the University?” is an important question, but is of limited practical value for departments and administrators. Generic questions will give you generic results. Either instead of, or in addition to, these foundational questions, deans should ask precisely what they and their departments are interested in knowing. For example, we routinely share questions like these with reviewers prior to their site visit:
- Retirements have left this department looking substantially different than it did at the point of last review. Given existing faculty strengths and student demand, in which area(s) would you recommend the next hires? —or— Retirements have left this department looking substantially different than it did at the point of last review. What should we stop doing that we’re currently doing, given the department’s new composition?
- We’re thinking about starting a graduate program in this area. Is that reasonable, based on your perception of the program? What things are we not thinking about right now that we should, in order for the program to be a success?
- Assessment results from X program tell us that students are not getting the writing development they need, but faculty members are strapped for time as it is. Are there meaningful ways we can include writing in the program’s curriculum without overburdening faculty?
- The X department hasn’t had a curriculum revision since 1978. Help.
Thorny questions about personnel problems in a department can be framed in terms of leadership succession, empowerment of untenured faculty, and/or governance and decision-making processes.
- Guided reviewer selection: Once one begins asking specific questions, it becomes more important to select reviewers who can actually answer them. We ask departments to look for professionals who offer (a) a balance between comparable and aspirant institutions; (b) experience with APR reviewing (often disciplinary lists can be helpful here); and/or (c) administrative experience—not surprisingly, the best reviewers are often those who have experience managing people and budgets. In addition, departments might select a reviewer with particular expertise in a discipline (e.g., reforming introductory composition courses, building experiential learning into their pedagogy, creating a new program).
- Managing reviewers’ expectations: Because reviewers are often other faculty members, the most common suggestion in any APR is often “hire more faculty in X department.” While we may agree with that recommendation in theory, in practice limited budgets rarely allow for this kind of expansion. Set expectations up front with your reviewers so as to avoid wasting their time; if all that is possible is a steady state in terms of hiring, then be clear about that right away: “How can we accomplish this with existing faculty and staff expertise?”
- Time and space to reflect: Transformative reviewers’ reports take time to process and to percolate. Very often, at least some faculty will react defensively to the suggestion of big changes. After the dust has settled, provide time and space for departments to get together to discuss the report and agree on an action plan. We typically sponsor a department retreat—either a full day or a half day away from campus for the department to come together and talk about the work ahead. In some cases, we support bringing back reviewers or other experts to facilitate a conversation about particularly tough topics (e.g., wholesale curricular revision) with a department.
- Administrator follow-up: Nothing will kill transformative change more quickly than administrative disinterest. Put in place concrete steps that require both departmental and administrative commitment and encourage departments to create goals that are both measurable and meaningful.
All of these, you will note, require a much more activist role for administrators throughout the process. As with many things, more work up front yields better and more lasting results in the end.
APR and Program Prioritization
“Program prioritization” strikes fear into the hearts of most faculty, for good reason. Aggressive program cuts are well-publicized in the profession, and are often an institution’s last resort for addressing recurring budget deficits. Nonetheless, faculty also recognize that we often continue to offer programs long past their prime, and APR provides one way of addressing this. While no department is going to review itself out of existence, my experience indicates that departments routinely decide to discontinue less viable programs or tracks in the wake of a transformative program review. The suggestion to do so often comes from the external reviewers and as such is a much easier pill to swallow than if it comes from the dean—or as the result of being thrown in a room with a copy of Bob Dickeson’s Prioritizing Academic Programs and Services (2010). In our context, APR has led directly to the streamlining or elimination of several programs. In each case, the recommendation from reviewers was that the department should focus on other, stronger elements of its program, and direct resources (most commonly faculty time and energy) to those. In each case, these changes have been a collaborative effort between faculty and administration.
APR FTW (For the Win)!
In closing, the fact of the matter is that APR is an opportunity for a department to talk about its collective work with colleagues who are genuinely interested in it. Far from “evaluators,” most external and internal reviewers are conversation partners and peers who can suggest different ways of looking at situations or bring new knowledge to bear on seemingly intractable problems. APR also provides an occasion for departments to articulate and think about big ideas. It is very easy for the most significant issues facing departments to get buried under attending to day-to-day concerns. As a result, APR stimulates and encourages strategic planning at the department or program level, in a way that aligns with larger university goals. APR helps to prioritize resource needs, including faculty and staff hires, equipment needs, and space requirements. In a situation where “do more with less” is the order of the day, and most requests are valid at some level, APR helps to distill the urgent resource needs from the less urgent. When done well, APR demystifies communication between administrators and faculty members. A transformative review allows each group to view the department from the other group’s perspective and rewards shared achievements post-review.
This essay is based on a roundtable discussion I led at the ACAD Deans’ Institute at the annual meeting of the Association of American Colleges and Universities, January 2019, in Atlanta, Georgia. I have incorporated some information and feedback from those sessions. I am thankful to my lively and engaged roundtable colleagues, and to the staff of ACAD, for the opportunity to share my thoughts here.