Improving Strategic Indicators

Daniel Nugent, Robert Crane, Anna Griswold

The Quality Advocates meeting on January 20, 2006, focused on the development and use of strategic indicators. Topics addressed by the panelists included: 1) What makes a good indicator? 2) How do indicators support decision-making? and 3) How do you determine indicators for new initiatives, such as student-centeredness? The panelists were: Anna Griswold, Assistant Vice President for Undergraduate Education and Executive Director for Student Aid; Robert Crane, Associate Dean, College of Earth and Mineral Sciences; and Daniel Nugent, Management Information Associate, Office of Planning and Institutional Assessment.

Daniel Nugent began the session by providing background on the University’s strategic plan and its use of strategic indicators. The current strategic indicators, developed through a collaborative effort in 1998, are used by the University to track performance toward the goals outlined in the strategic plan, and they have not changed substantially since then. As part of the annual review of the strategic indicators report, the Office of Planning and Institutional Assessment wanted to find out whether the report could be improved. Office staff met with the people who provided the data and discussed whether the data accurately portrayed the state of the University and whether the report was understandable to the University community and other readers. As a result of these interviews, several new indicators may be added to the report, and the layout may be enhanced with text explaining the trends apparent in the indicators.

Anna Griswold next explained how the Office of Student Aid uses strategic indicators in its work. The Office places great emphasis on tracking strategic indicators because the stakes are high: the Office is accountable for the management of substantial funds, and its goals and performance are integral to helping the University meet its enrollment goals. Performance indicators have become integrated into the work of the Office through both formal and informal methods. Many of the strategic indicators are dynamic, changing as different needs arise, and they provide the basis for both operational and policy decisions. Operationally, the indicators help the Office determine whether it is on track in meeting student needs, while at a larger level they are used to identify significant policy issues and challenges. The indicators tell the Office whether it needs to improve in an area or is already achieving its goals there.

The Office has recently begun to look at sets of indicators, rather than rely on single indicators. For example, one objective of the Office of Student Aid is not only to get students into the University but also to facilitate their completion, so the Office wants to examine students applying for aid by household income and whether household income is related to graduation rates. At the policy level, the Office has used strategic indicators to identify strategic goals and then developed indicators to track progress toward them. For example, Enrollment Management set the goal of making a Penn State education affordable based on its analysis that tuition was increasing, competition was increasing, and grant monies were declining.

Sometimes there is no direct line between the goal and the indicator. Griswold gave the example of developing indicators for the goal of student-centeredness. In conceptualizing student-centeredness, the Office realized that the process of applying for financial aid can itself be a barrier for students. It therefore developed indicators to assess the speed and efficiency of its processes, and it now monitors backlog and processing times. In developing indicators, the Office of Student Aid looks at how it can improve a process, what students think, and what types of complaints the Office is receiving. Nugent pointed out that the Office is integrating its day-to-day operational processes into the support of its strategic goals.

Robert Crane spoke about the experience of the College of Earth and Mineral Sciences in developing indicators, beginning with what makes a good indicator. In his view, the characteristics of a good indicator depend on its purpose: one purpose may be to support a statement or image, while another may be to support decision-making. Another distinction is whether the indicator is used for internal or external purposes. Crane also cautioned against choosing indicators simply because they are easy to develop: units sometimes select indicators for which data are available, and these indicators do not always measure what we really want to measure.

The College uses its performance indicators in decision-making, and Crane gave two specific examples. The first related to the College’s broad goal of being the most student-centered college in Penn State history. One indicator used to measure this is the number of senior faculty in the classroom. Several years ago, the College made a deliberate policy decision that more senior faculty should be teaching, and it tracks its performance in this area. The strategic indicators allow the College to monitor progress toward the goal, and they also help garner faculty support for that decision. Another example can be found in the “state of the college” report. This report shows that the College has done very well in fundraising, but most of the money has come from foundations and grants, with little growth in the endowment. The College is therefore changing tactics in this area to find other sources that will lead to a larger endowment.

Crane acknowledged that determining new indicators can be difficult because sometimes the data are not available, or not easily available. To illustrate, he referred to the College’s goal of being student-centered. The College uses indicators such as student satisfaction to measure its performance in this area, but an even better indicator would be the level of student involvement. Measures of this, such as the number of students coauthoring papers or involved in research activities, are hard to come by. According to Crane, “the more useful the data are, the harder they are to collect.” To address this, he suggested that units prioritize their primary goals and invest the resources necessary to assess performance toward those goals.

Several members of the audience asked questions after the panelists’ presentations. Referring to the difficulties of obtaining useful data, one audience member asked how to overcome the barriers and develop good indicators that meet both the University’s needs and unit needs. Crane responded that units need to work from the bottom up: establish priorities, collect meaningful information, and then let that information filter up. Griswold reinforced this, noting that she has several times used her indicators, including benchmarks against other schools, in presentations to the Board of Trustees and to departments across the University to show how student aid affects the student experience and to build support for student aid initiatives. Nugent added that one step units can take is to address these questions during their strategic planning retreats, identifying what they should be measuring and what they should track to assess progress toward their goals. As with continuous quality improvement activities, leadership sets priorities and goals from the top down, but on a day-to-day basis it is the people on the front lines who decide whether the goals are being met.

A second participant echoed the problems that campuses in particular face in creating strategic indicators that meet both University and campus needs; sometimes the University’s strategic indicators are actually operational and not relevant to the campuses. Nugent responded that, in contrast to the business world, which has specific outputs, it can be difficult to determine indicators that measure real outcomes for students. Instead, the University may simply track outputs, which are much more operational, on the assumption that the outputs are related to the outcomes, such as student achievement after graduation. Griswold also noted that sometimes you can be strategic about operational matters and that operational indicators can have effects on a larger scale.

Another audience member asked about the use of indicators for external versus internal purposes. Crane responded that sometimes the same data are used in different contexts. For external audiences especially, it is important to tell the reader what the data are saying and why they are important enough to be included. Another difference is that internal audiences can review much more information, whereas the same volume would overwhelm outside audiences.


The Quality Advocates Network meets several times each semester to share ideas and examples of improvement and change. To join the Quality Advocates Network mailing list or to learn more about the meetings scheduled, contact the staff at psupia@psu.edu.

The Quality Advocates Network is open to all Penn State faculty, staff, administrators, and students.
