Institutional research as a bridge: Aligning institutional internal data needs and external information requirements – a strategic view

Chester D. Haskell (International Consultant to Higher Education, Fallbrook, California, USA)

Higher Education Evaluation and Development

ISSN: 2514-5789

Article publication date: 7 August 2017

Abstract

Purpose

This paper explores the roles of institutional research (IR) units in higher education, examining both internal and external responsibilities and demands. The purpose of this paper is to encourage a broader strategic discussion of the missions and capacities of such academic institutional entities.

Design/methodology/approach

The methodology employed begins with a review of relevant literature, followed by critical observations of an experienced reflective practitioner. Beginning with the premise that academic institutions are central, the paper discusses the external environment of institutions and the requirements placed on their internal IR operations. A core question is presented: research for whom? Both traditional and alternative organizational models are discussed in this light. The paper then explores ways in which data needs might be aligned in order to provide accountable, useful and transparent information to all stakeholders, internal and external.

Findings

Findings show that the linking of internal information needs with those of external actors is key to effective operations; that IR units should seek to be a bridge between their institution and its environment so that effective information can be provided to all who need it. The paper is not designed as a detailed operational roadmap, but rather to highlight issues for examination within the context of specific institutional and agency situations.

Originality/value

Its originality stems from the focus on such linkages and the call for organizational leaders to recognize the full value of IR both within and across organizational boundaries.

Citation

Haskell, C.D. (2017), "Institutional research as a bridge: Aligning institutional internal data needs and external information requirements – a strategic view", Higher Education Evaluation and Development, Vol. 11 No. 1, pp. 2-11. https://doi.org/10.1108/HEED-08-2017-001

Publisher

Emerald Publishing Limited

Copyright © 2017, Chester D. Haskell

License

Published in Higher Education Evaluation and Development. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


As is the case in most modern arenas, the capacity for collecting and disseminating data about higher education is growing rapidly. New technologies, new software, new analytic methodologies and new approaches combine to create the potential for exponentially larger amounts and types of data that might be used to inform all interested parties.

Simultaneously, external demands for information about higher education also are expanding tremendously. Governments require more and better information about institutional performance and outcomes. Accrediting bodies seek data to improve their capacity for oversight, verification and quality assurance. Prospective students and their families look for information about programs, costs and what their investments will purchase. Employers seek greater alignment of student capabilities and employer needs.

This confluence of expanding capabilities and increasing demands puts great pressure on those tasked with gathering and analyzing data and reporting results to stakeholders. In most academic institutions, these responsibilities are placed primarily on an internal organizational unit, typically referred to as “institutional research (IR).”

This essay will explore the changing roles, responsibilities and opportunities for such functional units, paying particular attention to both internal and external information needs and the degree to which these units are integral to the larger institution. It will argue that IR must serve both functions by linking institutional requirements to external assessments and public information. This will be done from the perspective of a reflective practitioner with extensive institutional and accreditation experience internationally who wishes today’s information tools had been available in the past.

Discussions about the nature and function of IR are hardly new. Important contributions are decades old. For example, Fincher (1985) explored the “art and science of IR” in a chapter of Peterson and Corcoran (1985). Knight et al. (1997) discussed the knowledge and skills needed to be effective, while Terenzini (1993, 2013) published not one, but two, seminal articles 20 years apart on what constitutes IR and the capacities of effective practitioners. However, much of this work focused on defining IR in an operational sense: the nature of its parameters, how it is best conducted, and the skills required of researchers seeking to define a new profession within academe.

Contemporary IR takes a variety of forms in different academic settings. In some cases, such research is used mainly for internal assessment and to inform decisions, both strategic and operational. In others, it largely reacts to external demands for information from governments, accreditors and additional stakeholders outside the institution such as students, the public and other institutions. The presumed objective in both instances is institutional quality assurance and enhancement, as well as to provide relevant information to all manner of internal institutional actors – senior leaders, student affairs officers, enrollment managers, financial aid officials and more. The fact is that all parts of an academic institution need data to fulfill their functions.

The centrality of academic institutions

In the first instance, it must be recognized that institutions are the true locus of higher education, the place where higher education occurs. Academic institutions are the context for specialized programs. Particular academic programs or degrees rarely occur in an organizational vacuum. This reality is reflected in the vast literature on the nature of higher education. See, for example, Pelikan’s (1992) sweeping chapter “The idea of the university in scholarly literature.”

Even the smallest of academic institutions are complex places. Multiple degrees, disciplines, departments, as well as a range of support and administrative functions, are collected under one institutional roof, typically with singular institutional leadership. Further, Rojas and Bernasconi (2011) make the point that these structural intricacies are further complicated by the “relative weakness of hierarchical power,” the “dense and diverse values system,” and, importantly, the fact that the institution always has a dynamic interaction with its various external environments. Somehow, all these separate subunits, levels, programs and functions must be organized together if the institution is to be effective and successful.

At the same time, the roles and functions of academic institutions are widening in most countries. Massification – the phenomenon of opening educational opportunities and offerings to much larger segments of a population – means traditional institutions have had to expand, while new institutions have arisen. As Calderon (2012) observes, today there simply are many more students than in the past, as higher education has ceased to be available only to the children of national elites. In addition, these larger student populations are also more diverse in terms of student characteristics and demographics, as they are not limited to the attributes of the elites.

Academic institutions are amazingly diverse worldwide. Different missions, structures, scales and approaches to education provide a range of opportunities for students, while also promoting competition and innovation. Institutions themselves thus need more and richer data that reflect their disparate and evolving realities and situations. IR is essential for all manner of internal purposes and these purposes have become more complicated and disparate.

IR and external environments

Academic institutions also do not exist in a vacuum, instead being part of the larger society (Rojas and Bernasconi, 2011). In most countries, academic institutions are expected to be connected to the community, both the physical community in which they are located and the wider community of society. No longer can universities be isolated ivory towers. Instead, institutions have responsibilities to society, serving constituencies far beyond their gates, including the public and governmental stakeholders that provide support.

A central issue for external stakeholders like governments or accreditors is how to assure at least minimum quality in institutions – how to handle institutions that are subpar. As Bok (2013) notes, there are two ways to approach this: accreditation or transparency (he also notes that these are not mutually exclusive). This combination of accountability and transparency puts tremendous pressure on any IR unit (Volkwein, 2008). Finally, in addition to the matter of what is to be measured or made transparent, there is the prior question raised by Chen and Haynes (2016): “transparency for whom?”

Governments – national and, in some countries, state – recognize the vital role higher education plays in serving national and local developmental and economic needs and thus support some institutions directly (public institutions) and others indirectly through such means as student financial aid, research support, or tax advantages. Governments also have legitimate roles in overseeing and regulating academic institutions. Not only do governments have a duty to make sure public funds are put to the best possible use, but they also have a consumer protection function.

These important functions are sometimes fulfilled by some form of academic assessment and accreditation body. These entities may be directly part of government through a central ministry of education or similar institution. In other cases, they are quasi-autonomous external organizations. And in others these functions are fulfilled by entities largely separate from government, operating quite independently. In each case, they are designed to fulfill government’s consumer protection function, while also implementing government accountability requirements, in addition to objectives of quality assurance and improvement. Finally, in some nations (like the USA) there are multiple structures for fulfilling these functions, such as the Federal Department of Education, and institutional and programmatic accrediting entities. Whatever the structure, all require information about academic institutions.

These external bodies, whether autonomous, independent accreditors or government agencies, typically define many of the roles for IR operations within academic institutions. They require various forms of data such as student enrollments, degrees awarded and student demographic information. An excellent example is the US Department of Education’s Integrated Postsecondary Education Data System requirements placed on academic institutions in the USA (LoGrasso, 2016). In addition to detailed, nationally mandated information, there often are multiple requirements from a variety of other entities, both required and voluntary, sometimes with different formats or methodologies for collection and presentation. Accrediting bodies in the US and elsewhere impose further extensive data requirements as a condition of their own processes for institutional accreditation or reaccreditation.

In addition, there often is another type of external body with information needs that are quite separate from those of either government or accreditor bodies: the institution’s governing board. In the American model, at least, boards of trustees and like bodies are largely composed of external actors who have a fiduciary responsibility for the academic institution (Henderson, 2016). Servicing such an entity creates yet another (and often highly pressured) responsibility for institutional researchers.

The point is that much of the role of an IR office is not related to internal institutional needs, but rather the servicing of a range of external actors. Many such external demands may not be tied to internal goals such as student learning outcomes or institutional improvement. Rather, they may be constructed solely to meet the external entity’s perceived needs. To further complicate things, the data requirements not only may be different in content, but also may need to be different in form and presentation to be effective.

These external demands have a variety of impacts on an IR office. The number, type and complexity of data reports may increase. Divergent or inconsistent definitions of data items, content and formatting requirements lead to duplicative demands for data that are structured differently, thus precluding efficiencies. Meeting external data demands means institutions have to dedicate professional staff to this function, often at considerable cost. Further, such imposed costs weigh most heavily on the smallest institutions, where the necessary staffing consumes a larger proportion of the institutional budget.

An additional consideration is the numerous and often conflicting demands of multiple accrediting bodies. Many institutions have both institutional accreditation (accreditation of the institution as a whole, or “registration” as it is called in some countries) and programmatic or specialized accreditation (accreditation of particular academic degrees or programs). These different accreditors have differing purposes and perspectives, thus their data requirements are different. This situation adds to the responsibilities of an institution’s internal research office.

For example, in Mexico there are multiple forms of accrediting organizations. The Ministry of Education (Secretaría de Educación Pública, SEP) imposes highly detailed requirements on all institutions. Separate programmatic accreditors impose different requirements on different degree programs. Graduate programs are assessed and accredited by a separate agency (Consejo Nacional de Ciencia y Tecnología, CONACYT) that has its own set of requirements. Some non-public institutions are accredited by the Federación de Instituciones Mexicanas Particulares de Educación Superior (FIMPES), a voluntary, non-governmental body that is the only true institutional accreditor in Mexico, and thus face yet another data set. In other nations, like Australia, institutions that have both vocational and higher degree programs must meet the requirements of two completely separate regulators, the Australian Skills Quality Authority and the Tertiary Education Quality and Standards Agency. Such complications are not uncommon. While there is considerable overlap among these various demands, the effect on any institution is the need to collect and report numerous specialized data sets and analyses. This pattern is common globally.

Ewell (1998) makes a series of recommendations that attempt to address the realities faced in most institutions. For example, reporting burdens are excessive and should be reduced. Duplication is rampant and should be eliminated wherever possible. Multiple databases should be replaced where possible by centralized databases (including those of third parties). At the same time, however, Ewell (1998) also recognizes the need to tailor data reporting to appropriate audiences and the importance of reflecting “systemic as well as institutional perspectives.”

The pressures on external accrediting bodies also are increasing in most nations. Government ministries want more and different data, a situation complicated by constant changes in government directives and in governments themselves. At the same time, accreditors must determine how to address diversity of institutions within a nation or region. How can information requirements be organized to take into account the different scales, missions, ownership or structures of institutions? How can information requirements be structured to reflect effectively the tremendous diversity within academic institutions – how can institutional complexity be captured by information?

The challenge to IR

These considerations raise a core question: IR for whom? What is the real purpose of IR capacity? What are the proper roles of such a function both internally and externally? Is the principal purpose to serve the needs of the institutional decision makers? How are those decision makers defined? How do the internal and external functions align? Are there other considerations? For example, Colombia has instituted a new information model – the Modelo de Indicadores del Desempeño de la Educación Pública (MIDE) – that explicitly states its purpose is to provide accessible information on higher education institutions for students and families (Modelo de Indicadores del Desempeño de la Educación Pública, 2016).

At the same time, the volume and reach of research about higher education issues such as assessing teaching, measuring student learning or improving classroom efficiencies have exploded. Even a casual search on a database site like JSTOR unearths literally tens of thousands of studies, books and articles on higher education research and its application within academic settings.

However, much of the emphasis in this literature is about formulating and answering research questions in an institutional setting. It focuses on gathering information to inform institutional decisions, but says little about institutional structures or purposes. Swing and Ross (2016) cite Gagliardi and Wellman’s (2014) study of US public universities to note that IR offices are “deluged by demands for data collection and report writing that blot out time and attention for deeper research, analysis and communication.” Swing and Ross further note that the “dominant structure of IR is based on service relationships with a small set of key decision makers” (Swing and Ross, 2016, p. 7).

In the same vein, the Association for Institutional Research (AIR) conducted a survey on a common IR output: information dashboards designed to give a snapshot of selected measures of institutional health or effectiveness (AIR, 2014).

Responses to the question “What are the primary audiences for your dashboard?” were:

  • campus/institution administrators (91.8 percent);

  • faculty/staff (49.3 percent);

  • general public (19.2 percent); and

  • parents (4.1 percent).

Posed differently, the question “Which best describes who can view the dashboard?” led to the following results:

  • limited to select campus administrators only (31 percent);

  • internal to the campus – staff/faculty/administrators only (23.9 percent); and

  • open access (15.5 percent) (AIR, 2014).

Clearly, there is no standard definition of information users or stakeholders. In fact, most of the dissemination of such dashboard information seems limited to internal institutional users.

A variety of structures for IR units are in evidence in the USA and elsewhere. In many cases, the IR office reports directly to the institutional leader (president). In other cases, the reporting structure is through a provost or vice president for academic affairs. In either case, the office often is not seen as central to institutional operations or priorities. While there are examples, such as Australia, where these offices operate as a core and integrated institutional function, this is not commonly the case elsewhere. Rather, as in the American context, IR is more likely viewed as a staff function serving only the leadership. Any impact on the broader institution comes only at the direction of the leadership.

Typically, the IR office collects as much data as possible, much of it through various survey instruments. The other principal method is garnering data from other offices that collect specialized data, such as student enrollments from a registrar’s office or administrative staffing patterns from human resources. Less common are more in-depth data collection methods such as focus groups or follow-up interviews. Large amounts of data are collected from these and related sources, but their impact may be negligible unless leadership permits or specifies dissemination.

In this instance, IR is, in effect, a fully integrated component of a top-down, hierarchical organizational model. The problem is that this model can be a trap. If information is designed primarily to serve the top leadership and only reaches the rest of the organization through the leadership, then it cannot be fully utilized.

This model also means that external accreditors or government agencies have only a narrow keyhole through which to understand the institutions. The external agency requests (or requires) certain kinds of data. Some of these are regularized; for example, standard information on enrollments, retention or finances. Others are more in-depth, such as the volumes of information normally required in institutional review self-studies. In either case, the information gathered and reported is selective; it is structured by what the external agency defines as necessary. It is never comprehensive, in part because the external agency cannot be comprehensive in its assessment of institutions. There is no agreement on how to define quality in diverse institutions.

Another aspect of this model is that the external accreditors have a relationship keyhole, as well. The accreditors typically deal only with the institutional leadership or the designated institutional representative. The agency cannot deal openly with broader sources of information. Further, access to institutional data is usually jealously guarded and limited, with the institution taking the position that data should be restricted and not easily proffered.

An alternative model is one where the entire administrative organization – president to janitors – exists mainly to provide the best possible environment for the faculty and students to work and learn together. In this model, information is needed by and provided for all institutional stakeholders. Such an open data model is also designed to better serve accreditors. In this model, information of all sorts is continuous and ongoing, not just focused on specific targets or outcomes. There are several other alternatives under discussion among IR professionals including matrix models and federated organizational structures (Swing and Ross, 2016).

Data should be designed to be useful for practical real-time engagements, while still protecting individual privacy. Properly designed, they should lead to direct and timely advising or interventions to support students. In this form, data are not an end in themselves, but tools to assist others, recognizing the importance of the personal touch by faculty or staff in support of students. IR units typically are tasked with preparing central reports providing evidence of performance in key areas such as teaching, learning and research. In addition, these units should be the locus of some form of data warehouse that is available to staff and faculty. These internal data and the reports produced from them can then be utilized to meet the various external reporting requirements put upon the institution by governments, accrediting bodies or others.
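By way of illustration, the following minimal sketch (in Python) shows how a single set of warehouse records might serve both purposes at once: an early-alert list for advisers working directly with students, and a de-identified aggregate of the same data for an external report. The record structure, the completion threshold and the report fields are hypothetical, invented for this example rather than drawn from any actual regulatory specification.

```python
from dataclasses import dataclass

# A hypothetical canonical record held in the institutional data warehouse.
# Field names, the completion threshold and the report layout are invented
# for illustration; they are not drawn from any regulatory specification.
@dataclass
class StudentRecord:
    student_id: str
    cohort_year: int
    enrolled: bool
    returned_next_year: bool
    credits_attempted: int
    credits_earned: int

def early_alert_list(records, completion_threshold=0.67):
    """Internal, real-time use: flag students for advising follow-up,
    exposing only the identifier an adviser needs to intervene."""
    flagged = []
    for r in records:
        if r.enrolled and r.credits_attempted > 0:
            if r.credits_earned / r.credits_attempted < completion_threshold:
                flagged.append(r.student_id)
    return flagged

def external_retention_summary(records, cohort_year):
    """External use: the same warehouse data, aggregated and de-identified
    into the kind of summary a regulator or accreditor might require."""
    cohort = [r for r in records if r.cohort_year == cohort_year]
    retained = sum(1 for r in cohort if r.returned_next_year)
    return {
        "cohort_year": cohort_year,
        "cohort_size": len(cohort),
        "retention_rate": round(retained / len(cohort), 3) if cohort else None,
    }

if __name__ == "__main__":
    records = [
        StudentRecord("s001", 2016, True, True, 30, 30),
        StudentRecord("s002", 2016, True, False, 30, 15),
        StudentRecord("s003", 2016, True, True, 24, 12),
    ]
    print(early_alert_list(records))                   # ['s002', 's003']
    print(external_retention_summary(records, 2016))   # aggregate only
```

The design point is that the external summary is a by-product of data the institution already uses in support of its own students, not a separate collection exercise.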

The real challenge is how to gather and organize data so information can be made accessible and useful to support the full range of an institution’s functions and mission. In other words, how can data be organized to best meet the complete scope of internal information required for actions and decision making? And, at the same time, how can the same data be organized to provide the information required by accreditors and other external stakeholders?

Aligning internal and external data needs

Stated simply, external data requirements should be aligned with institutional data needs. External agencies should seek few, if any, data not also useful for internal institutional requirements. After all, institutions and accreditors have shared interests in information.

One approach to such problems might be to impose consistent data requirements for every institution. This, however, leads to one of two problems. Institutional diversity usually is seen as a good thing, reflecting divergent academic traditions, approaches or missions. Institutional diversity is also seen as valuable for providing choices for students, encouraging competition and facilitating innovation. Yet, limiting diversity in favor of consistency or standardization threatens isomorphism, the tendency for institutions to become more alike as they try to conform to external incentives or pressures (DiMaggio and Powell, 1991). Alternatively, the recognition of the value of diversity means that data requirements (and, indeed, almost all accreditation standards) must be set at a threshold or minimal level, thus permitting a diversity of approaches while simultaneously complicating the understanding of institutions in the aggregate.

It also must be remembered that the vast bulk of the literature, especially with regard to the emerging profession of institutional researchers, is based on the US higher education experience. While there are effective IR operations in most universities in places like Western Europe, Australia and Japan, the field and functions are less developed elsewhere. Nevertheless, interest in IR, both on the part of individual academic institutions and on the part of external bodies like accreditors and governments, is growing rapidly, as higher education globally tends in many ways toward an isomorphic convergence largely aligned with perceived models of US accreditation and higher education in general. Put differently, everyone recognizes that good data and good data analysis are linked to higher education quality improvement. However, it is not clear that the so-called “American model” is appropriate for all situations.

The demands on external government and accrediting bodies are many. Not only do such organizations need data sufficient to fulfill their institutional assessment mandates, but they also need to be able to aggregate institutional data for broader purposes, including the formulation of national policies. Such data also are essential for societal purposes, as they are the basis for aggregate indicators, as well as for purposes of benchmarking. Also, numerous data points are meaningless unless brought together; consider, for example, students who attend multiple institutions or who have non-continuous academic records. Non-traditional students often pose another challenge, as do new, innovative programs. And the number of international students, now estimated at more than five million and growing rapidly, adds yet another degree of complexity (Project Atlas, 2016).

There indeed are powerful new technologies and techniques with potential for important and positive impacts. Big data, data analysis, predictive analytics, educational data mining and enterprise resource planning are manifestations of these new technologies. In theory, such tools should enable institutions or accrediting bodies to collect tremendous amounts of real-time data and from them derive useful information for planning and decision making. Some institutions see such capacity as a way to better manage data already collected. Others want to explore these resources as ways to expand and improve their institutional effectiveness through direct, timely and actionable data. In any case, the advantages of these new tools are often offset by considerable institutional expenses (in technology, professional staffing, vendors), by the often steep learning curve of senior administrators and by potential threats to student privacy. The problem is not the means of gathering data, but rather the strategy and capacity for utilizing them.

External accreditors may be able to assist institutions by providing information about best practices or by finding ways to reduce or share costs, especially for smaller institutions. At the same time, external accreditors should require transparency from all institutions. Transparency and access to data serve both the consumer protection function and the accountability due to funding sources and governments. Finally, external accreditors should work with the institutions they accredit to assure a balancing of accreditor and institutional data needs.

Accreditors and institutions also share a basic problem. The reality is that there are no commonly accepted definitions or measurements of institutional quality. “Whose quality?” is a common question. Another reason accreditation is largely a minimalist, threshold exercise is that there are no commonly defined standards of quality (Reisberg, 2011). Accrediting bodies worldwide place great emphasis on “quality processes,” but there is little agreement on what constitutes a quality outcome.

Furthermore, in many circumstances there can only be proxy measures of quality. One of the dangers confronting IR is measuring those things that can be measured while underemphasizing those things that cannot. This is the methodological problem with commercial rankings. In such cases, an indicator such as research productivity is calculated by the number of citations in selected journals. As there is no true measure of research quality, the number of citations becomes a proxy, a number assumed to measure something that can be plugged into an algorithm. Crafting a ranking of institutions requires numerous such proxy assumptions, thus multiplying the likelihood of inaccuracy at every level (Hazelkorn, 2015).
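A small worked example makes this compounding concrete. The sketch below, a minimal illustration in Python, builds a composite score from invented, pre-normalized proxy indicators; the indicators and weights are hypothetical and do not reproduce any actual commercial ranking methodology. A modest shift in the arbitrary weights is enough to reverse the order of two fictional institutions.

```python
# Hypothetical composite "ranking" built entirely from proxy indicators.
# Indicators are assumed pre-normalized to a 0-100 scale; every weight and
# every normalization is a separate proxy assumption.

def composite_score(indicators, weights):
    return sum(weights[k] * indicators[k] for k in weights)

university_a = {"citations_per_faculty": 72.0,   # proxy for research quality
                "reputation_survey": 65.0,       # proxy for academic standing
                "faculty_student_ratio": 80.0}   # proxy for teaching quality
university_b = {"citations_per_faculty": 80.0,
                "reputation_survey": 60.0,
                "faculty_student_ratio": 70.0}

weights_1 = {"citations_per_faculty": 0.40,
             "reputation_survey": 0.35,
             "faculty_student_ratio": 0.25}
weights_2 = {"citations_per_faculty": 0.50,
             "reputation_survey": 0.30,
             "faculty_student_ratio": 0.20}

# Under weights_1, A scores 71.55 and B scores 70.50, so A "outranks" B.
# Under weights_2, A scores 71.50 and B scores 72.00, so the order flips.
for w in (weights_1, weights_2):
    print(composite_score(university_a, w), composite_score(university_b, w))
```

Here the reversal is produced by nothing more than a change in arbitrary weights; no underlying quality has changed.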

At the same time, a look at characteristics of the leading academic research institutions shows a remarkable consistency. As noted by Bloom and Rosovsky, the best institutions all have:

  • an ongoing internal culture of quality useful and appropriate to that institution;

  • sufficient data for all decisions (at all levels) within norms of internal and external accountability;

  • regular internal testing of institutional definitions or standards of quality; and

  • rigorous internal processes for meritocratic decisions (Bloom and Rosovsky, 2011).

Such elite institutions do not engage in these sorts of activities because they are required to do so by governments or by external accrediting bodies. Rather, they do so because they understand that defining quality or excellence for themselves, and then having the information and processes for constant assessment, is the key to maintaining their own standards and staying competitive with other like-minded institutions. It seems clear that the pursuit of quality must incorporate clear and useful data for both decision making and accountability. Indeed, as Eaton (2015) notes, an institution’s responsibility for its own quality is a cornerstone of effective quality assurance.

The US-based AIR is the most prominent professional association for individuals in this complex field (there are also a number of AIR spinoffs in other regions, such as Europe and South-East Asia). In 2016, the association made a series of recommendations for more integrated IR. Its “Statement of Aspirational Practice for Institutional Research” posits that students, faculty and staff members all need to be viewed as IR stakeholders with data needs, in addition to the more traditional institutional leadership. Such a statement suggests movement toward a model of IR that provides useful information for all stakeholders and does so in a timely fashion (AIR, 2016).

Conclusions

It also can be argued that the AIR approach does not go far enough. Indeed, this paper argues that the aspirations for IR should also be to engage and include external stakeholders. There should also be greater emphasis on serving the data needs of accreditors, other external bodies and, indeed, the public at large.

IR must not be seen as having solely internal institutional functions, even when those functions involve responding to external demands for information. Rather, the internal and external roles of IR are not separable. Internal and external stakeholders have a broad shared set of purposes, including quality assurance and improvement. At the same time, there are opportunities for two-way learning in the form of information sharing, dissemination of best practices, and shared approaches to problem solving.

The fundamental challenge for those engaged in IR is becoming proactive leaders in data provision and utilization. All societies need meaningful and effective ways of measuring and assessing quality in academic institutions. All stakeholders, internal and external, have a common interest in having access to useful information. The linking of internal interests and information requirements with external interests and needs is an opportunity to improve transparency, accountability and the making of better decisions by all. IR should be seen as a bridge, not a keyhole, and should be fully supported and integrated into the internal institution, while also being recognized by external actors for playing a vital role. Only in these ways can the full power of information be placed in the hands of all who need it.

References

Association for Institutional Research (AIR) (2014), “AIR survey”, December 14, 2012-January 4, 2013, available at: www.airweb.org/eAIR/Surveys/Pages/NationalDataQuality.aspx (accessed February 6, 2017).

Association for Institutional Research (2016), “Statement of aspirational practice for institutional research”, available at: www.airweb.org/Resources/ImprovingAndTransformingPostsecondaryEducation/Pages/Statements-of-Aspirational-Practice-for-Institutional-Research.aspx (accessed December 15, 2016).

Bloom, D.E. and Rosovsky, H. (2011), “Unlocking the benefits of higher education through appropriate governance”, in Altbach, P. (Ed.), Leadership for World Class Universities: Challenges for Developing Countries, Routledge, New York, NY, pp. 70-89.

Bok, D. (2013), Higher Education in America, Princeton University Press, Princeton, NJ, pp. 402-403.

Calderon, A. (2012), “Massification continues to transform higher education”, University World News, No. 237, September 2, 2012, available at: www.universityworldnews.com/article.php?story=20120831155341147 (accessed April 30, 2017).

Chen, P.D. and Haynes, R.M. (2016), “Transparency for whom? Impacts of accountability movements for institutional researchers and beyond”, in Powers, K. and Henderson, A. (Eds), Burden or Benefit: External Data Reporting, New Directions in Institutional Research, Vol. 166, John Wiley & Sons, San Francisco, CA, pp. 15-19.

DiMaggio, P.J. and Powell, W.W. (1991), “The iron cage revisited: institutional isomorphism and collective rationality in organizational fields”, in Powell, W.W. and DiMaggio, P.J. (Eds), The New Institutionalism in Organizational Analysis, University of Chicago Press, Chicago, IL, pp. 63-82.

Eaton, J. (2015), An Overview of US Accreditation, Council on Higher Education Accreditation, Washington, DC, p. 3.

Ewell, P. (1998), “Achieving high performance: the policy dimension”, in Tierney, W. (Ed.), The Responsive University: Restructuring for High Performance, The Johns Hopkins University Press, Baltimore, MD, pp. 156-157.

Fincher, C. (1985), “The art and science of institutional research”, in Peterson, M.W. and Corcoran, M. (Eds), Institutional Research in Transition, New Directions for Institutional Research, Vol. 46, John Wiley & Sons, San Francisco, CA, pp. 17-37.

Gagliardi, J.S. and Wellman, J. (2014), “Meeting demand for improvements in public system institutional research: progress report on the NASH project”, National Association of System Heads, Washington, DC.

Hazelkorn, E. (2015), Rankings and the Reshaping of Higher Education, the Battle for World-Class Excellence, 2nd ed., Palgrave Macmillan, New York, NY.

Henderson, A. (2016), “The growth of burden in federal and state reporting”, in Powers, K. and Henderson, A. (Eds), Burden or Benefit: External Data Reporting, New Directions in Institutional Research, Vol. 166, John Wiley & Sons, San Francisco, CA, pp. 22-28.

Knight, W.E., Moore, M.E. and Coperthwaite, C.A. (1997), “Institutional research: knowledge, skills and perceptions of effectiveness”, Research in Higher Education, Vol. 38 No. 4, pp. 419-433.

LoGrasso, M.F. (2016), “Easing the burden of external reporting”, in Powers, K. and Henderson, A. (Eds), Burden or Benefit: External Data Reporting, New Directions in Institutional Research, Vol. 166, John Wiley & Sons, San Francisco, CA, pp. 52-58.

Modelo de Indicadores del Desempeño de la Educación Pública (2016), available at: www.colombiaaprende.edu.co/html/micrositios/1752/w3-propertyname-3214.html (accessed January 6, 2017).

Pelikan, J. (1992), The Idea of the University: A Reexamination, Yale University Press, New Haven, CT, pp. 190-197.

Peterson, M.W. and Corcoran, M. (Eds) (1985), Institutional Research in Transition, New Directions for Institutional Research, Vol. 46, John Wiley & Sons, San Francisco, CA.

Project Atlas (2016), “Global mobility trends”, available at: https://p.widencdn.net/hjyfpw/Project-Atlas-2016-Global-Mobility-Trends-Infographics (accessed March 4, 2017).

Reisberg, L. (2011), “Where the quality discussion stands: strategies and ambiguities”, in Altbach, P. (Ed.), Leadership for World Class Universities: Challenges for Developing Countries, Routledge, New York, NY, pp. 128-144.

Rojas, A. and Bernasconi, A. (2011), “Governing universities in times of uncertainty and change”, in Altbach, P. (Ed.), Leadership for World Class Universities: Challenges for Developing Countries, Routledge, New York, NY, pp. 33-51.

Swing, R.L. and Ross, L.E. (2016), “A new vision for institutional research”, Change: The Magazine of Higher Learning, Vol. 48 No. 2, pp. 6-13.

Terenzini, P.T. (1993), “On the nature of institutional research and the knowledge and skills it requires”, Research in Higher Education, Vol. 34 No. 1, pp. 1-10.

Terenzini, P.T. (2013), “‘On the nature of institutional research’ revisited: plus ca change…?”, Research in Higher Education, Vol. 54 No. 2, pp. 137-148.

Volkwein, J.F. (2008), “The foundations and evolution of institutional research”, New Directions for Higher Education, Vol. 141, John Wiley & Sons, San Francisco, CA, pp. 5-20.


Corresponding author

Chester D. Haskell can be contacted at: Chet.haskell@gmail.com
