When it’s time to start preparing for accreditation, California community college administrators may be on edge: their institutions face a higher risk of sanction than community colleges anywhere else in the nation (California State Auditor, 2014, p. 3). For institutions both large and small, getting ready for an accreditation review has become an increasingly burdensome process of collecting data, analyzing trends, and reporting climates and outcomes within self-study documents designed, in part, to market the institution as worthy of reaffirmation (Shibley & Volkwein, 2002). State regulations stipulate that the Accrediting Commission for Community and Junior Colleges, Western Association of Schools and Colleges (ACCJC-WASC), a non-profit organization comprised of representatives from accredited community colleges, is the sole accreditor of community colleges in California. This places enormous pressure on a single entity to assure the public that each of the State’s 112 community colleges is qualified to receive Federal financial aid, award degrees and certificates, and participate in transfer articulation agreements.

In addition, one result of the ACCJC’s monopoly on accreditation is that community colleges in California are guided by a single set of criteria when planning their institutional processes. The ACCJC publishes a list of Accreditation Standards intended to serve as a springboard for institutional dialogue, effectuated through a college’s self-evaluation report. Ironically, despite the overwhelming necessity for institutions to maintain their accreditation, there are no characteristically objective (i.e., quantitative) metrics directly pertaining to accreditation preparedness with which to measure one’s institution. Though the ACCJC enumerates guidelines for everything from financial planning integration to elements of a college’s mission statement, the reliance on “qualitative opinions derived from quantitative data” (Roose & Anderson, 1972, p. 115) may render the process of self-evaluation, and indeed the accreditation process as a whole, ultimately subjective.

Therefore, an opportunity to develop a metric that quantifies a college’s accreditation preparedness presents itself. This metric would essentially be a pre-accreditation strength score that offers a quantitative component to the self-evaluation process, providing institutions with a critical, data-driven framework with which to address how well the institution 1) meets certain guidelines as set forth by a governing body, 2) compares to other institutions in the region, and 3) compares to other institutions in the country. With this opportunity comes the bifurcation of the accreditation process into two components: assessment of the process, which occurs in the traditional, self-evaluative sense of accreditation, and assessment of the product, which is the evaluation of how well a college performs the functions of a college.
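To make the idea concrete, the sketch below illustrates one way such a composite pre-accreditation strength score might be computed. The weights, peer values, and field names here are purely hypothetical illustrations of my own; they are not quantities defined by the ACCJC or any other governing body.

```python
# A minimal sketch of the proposed pre-accreditation strength score.
# All field names, peer values, and weights are hypothetical illustrations,
# not ACCJC-defined quantities.
from statistics import mean, pstdev

def z_score(value, peer_values):
    """Standardize a college's metric against a peer group."""
    mu, sigma = mean(peer_values), pstdev(peer_values)
    return 0.0 if sigma == 0 else (value - mu) / sigma

def strength_score(standards_met, standards_total,
                   completion_rate, regional_rates, national_rates,
                   weights=(0.5, 0.25, 0.25)):
    """Blend (1) guideline compliance, (2) regional standing, and
    (3) national standing into a single pre-accreditation score."""
    compliance = standards_met / standards_total           # component 1
    regional = z_score(completion_rate, regional_rates)    # component 2
    national = z_score(completion_rate, national_rates)    # component 3
    w1, w2, w3 = weights
    return w1 * compliance + w2 * regional + w3 * national

# Example: a college meeting 58 of 64 substandards, benchmarked on completion rate
print(strength_score(58, 64,
                     completion_rate=0.47,
                     regional_rates=[0.41, 0.45, 0.52, 0.39],
                     national_rates=[0.44, 0.50, 0.38, 0.55, 0.46]))
```

In practice, how compliance should be weighted against regional and national standing would itself be a research question, and the peer values would come from the data repositories discussed below.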

The following questions are presented for further research:

What is a college’s product and what is the best way to measure it?

The current process of accreditation serves not to measure but rather to explain, and without measurement there is no objective quantification of an institution’s ability to meet its mission. How can we 1) define what an institution ought to do (i.e., what is its product), and 2) determine the best way to measure this product? Conversely, is this even an appropriate way to measure an institution of higher education?

My assumption here is that there is a desire for a metric, or even a set of descriptive statistics, on which a college can base its self-evaluation. This is alluded to in my field notes generated through discussions with institutional leaders. These grounded theory approaches to collecting data on accreditation preparedness have yielded three overarching themes: frustration with the inability of academic leaders to accurately predict accreditation strength; the recognition of accreditation in its current form as subjective and thus favoring those institutions that have the capacity and wherewithal to perform rigorous qualitative assessments; and the desire for a wholly objective, peer-reviewed accreditation process that is separate from any one governing commission.

What quantitative metrics are used to evaluate institutions of higher education?

There are many different management information systems and data repositories used by higher education institutions, including, but not limited to: statewide repositories used by the offices of chancellors or other education leaders; district information systems hosted on site or through a third party; systems at the National Center for Education Statistics (NCES); the Integrated Postsecondary Education Data System (IPEDS); and accreditation commission reporting and data aggregation systems. Identifying these databases and determining their efficacy in the accreditation process may produce insight into whether current data collection and reporting processes are sufficient to build an objective picture of institutional strength.
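As a rough illustration of what assessing that efficacy might involve, the following sketch aggregates a handful of IPEDS-style indicators into regional and national benchmarks. The column names and figures are invented stand-ins, not actual IPEDS variable names or real institutional data.

```python
# A sketch of how IPEDS-style institutional data might be aggregated to
# compare a college with its peers. Column names and values are illustrative
# stand-ins, not actual IPEDS variables.
import pandas as pd

peers = pd.DataFrame({
    "institution":   ["College A", "College B", "College C", "College D"],
    "state":         ["CA", "CA", "CA", "AZ"],
    "grad_rate":     [0.31, 0.27, 0.41, 0.35],
    "transfer_rate": [0.22, 0.18, 0.25, 0.20],
    "pell_share":    [0.48, 0.61, 0.39, 0.44],
})

metrics = ["grad_rate", "transfer_rate", "pell_share"]

# Regional (in-state) versus national averages for each metric
regional = peers.loc[peers["state"] == "CA", metrics].mean()
national = peers[metrics].mean()

# How far a target college sits above or below each benchmark
target = pd.Series({"grad_rate": 0.36, "transfer_rate": 0.24, "pell_share": 0.52})
comparison = pd.DataFrame({"target": target, "regional": regional, "national": national})
comparison["vs_regional"] = comparison["target"] - comparison["regional"]
comparison["vs_national"] = comparison["target"] - comparison["national"]
print(comparison)
```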

As someone who works in research and technology in higher education, it is my assumption that all leaders are at least somewhat familiar with the data repositories available to colleges and universities. It is also my assumption, however, that these repositories largely go under-utilized, as both experience and interview data have shown a disparity between the data collected and reported by institutions and the nature of college planning and program review. This points to an aspect of this topic that is open to multiple interpretations, as which datasets should be used for which purposes is highly debatable.

What quantitative element(s) could provide an objective link between federal funding and accreditation?

Accreditation is how the federal government green-lights federal funding for colleges and universities. When the Higher Education Act of 1965 passed, requiring institutions seeking federal student aid for their students to have and maintain accreditation, the federal government implicitly endorsed self-regulatory accreditation bodies as capable and trustworthy gatekeepers to federal funding (Areen, 2011). By working together, institutions and accreditors would assure the public and the government of academic quality accountability, and under these auspices, colleges and universities would be extended the opportunity to receive federal student aid.

Since a qualitative methodology places more emphasis on persuasive rhetoric than on objective data (Smaling, 2002), the current accreditation process may rely more heavily on subjective analysis than on objective outcomes. At present, there is no metric available to give education administrators a clear picture of their institution’s preparedness for an upcoming accreditation process. The result is a one- to one-and-a-half-year focus on preparing for an upcoming accreditation visit; many leaders interviewed have expressed the desire for accreditation preparedness to be an ongoing process, but have indicated that there are fundamental, institutional barriers to doing so. Future research in the form of additional interviews or a survey could lead to insight about whether education leaders had anticipated negative accreditation outcomes and what an ongoing accreditation process might have done to address them. Additionally, data mining and knowledge discovery processes could help address the research challenges presented herein.
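To illustrate the kind of data mining step I have in mind, the sketch below trains a small decision-tree classifier, in the spirit of Quinlan (1986) and Delen (2010), on hypothetical institutional indicators to flag elevated sanction risk. Every feature, label, and value here is invented for illustration; no real college data or ACCJC criteria are represented.

```python
# A hedged sketch of a data-mining step for accreditation preparedness:
# a small decision tree trained on hypothetical institutional indicators.
# Features, labels, and values are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: [reserve_ratio, program_review_completion, slo_assessment_rate]
X = [
    [0.05, 0.60, 0.40],
    [0.18, 0.95, 0.90],
    [0.02, 0.40, 0.30],
    [0.15, 0.85, 0.75],
    [0.08, 0.70, 0.55],
    [0.20, 0.90, 0.95],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = sanctioned in a prior cycle, 0 = reaffirmed

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Screen a college's current indicators for elevated risk
print(model.predict([[0.06, 0.65, 0.50]]))        # predicted class
print(model.predict_proba([[0.06, 0.65, 0.50]]))  # class probabilities
```

A production-quality model would of course require far more cases and careful validation; the point is only that prior accreditation outcomes could, in principle, inform an ongoing preparedness signal rather than a once-per-cycle scramble.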

A central concern is whether there exists a set of aggregate quantitative data that could facilitate a new accreditation preparedness metric. There is, of course, a personal interest here for all those involved in institutional effectiveness, because identifying a new metric for measuring an institution’s ability to meet the needs of its community as an institution of higher education would greatly enhance the accreditation planning process. In my own work, I end up relying on professional judgment in accreditation planning rather than objective measures, turning the accreditation planning process into an exercise in forensics rather than a reporting process. This is a problem in colleges across the country, with leaders from different schools oftentimes at odds when it comes to interpreting accreditation standards. It is in this research challenge that the connection among how to plan for accreditation, what data we currently use for accreditation, and what data we should use for accreditation becomes clear.

What constitutes an objective picture of an institution is fuzzily defined at best. While field notes have yielded conflicting opinions on what constitutes success in a higher education environment, most academic leaders have agreed that student outcomes, defined here as the relative grades and performance of college students, and student success, defined here as the attainment of institutional student goals (e.g., the conferral of a degree or certificate or the successful transfer to a four-year university), are paramount to judging whether a college should be accredited. I share this opinion, and I think this perspective will aid in discovering how a leader’s perspectives on the accreditation process shape her or his college’s strategic planning efforts and operations.
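Because these definitions of student outcomes and student success lend themselves to simple descriptive statistics, the minimal sketch below shows how they might be operationalized. The cohort figures are hypothetical, and the success-rate calculation deliberately ignores students who both complete and transfer.

```python
# A minimal sketch operationalizing the student-success definitions above
# (degree/certificate conferral and transfer) from a hypothetical cohort.
# Field names and figures are illustrative, not real institutional data.
cohort = {
    "students": 4200,
    "degrees_or_certificates": 1150,
    "transfers_to_university": 760,
    "gpa_values": [2.4, 3.1, 2.9, 3.6, 2.2, 3.3],  # sample of course outcomes
}

# Simplification: treats completion and transfer as disjoint outcomes
success_rate = (cohort["degrees_or_certificates"]
                + cohort["transfers_to_university"]) / cohort["students"]
mean_gpa = sum(cohort["gpa_values"]) / len(cohort["gpa_values"])

print(f"Student success rate (completion or transfer): {success_rate:.1%}")
print(f"Mean GPA in sampled outcomes: {mean_gpa:.2f}")
```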

My own assumptions on this topic reflect those of other institutional leaders: there is an alarming lack of research on accreditation in the United States, necessitating greater attention to how colleges are preparing for accreditation and how accreditation planning drives college strategic planning as a whole.

Additionally, since the ACCJC’s high sanction rate may be due in part to its short accreditation cycle – six years versus anywhere between seven and 10 years for other accreditors – college strategic planning may require greater emphasis on accreditation preparation. The ACCJC does not provide feedback on a pre-accreditation institutional self-study, so institutions have no opportunity to address commission concerns before an external evaluation team arrives to perform a comprehensive accreditation review. A system that enabled colleges to assess their pre-accreditation strength would aid the strategic planning process and may even provide guidance for macroscopic planning efforts.

A Review of Relevant Literature

This literature review is organized into two sections around two main underlying assumptions presented in the sources. First, institutional sanction is a specific outcome of subjectively poor strategy and misappropriated college planning resources, and it serves as a way to measure whether a college is “up to par” with regard to institutional effectiveness practices and planning. Second, institutional evolution is the process through which a college either incurs or avoids institutional sanction. This literature review is brief; however, it serves to introduce a larger scope of future literature review and scholarship.

Institutional Sanction

In a report to the governor of California and state legislative leaders, California State Auditor Elaine Howle (2014) expressed several concerns about these harsh policies and practices. Four in particular stand out. First, while the ACCJC describes its obligation to facilitate a transparent accreditation process, much of the commission’s deliberation occurs behind closed doors. Second, the process of appealing an ACCJC decision does not allow an institution to introduce new evidence; that is, if the commission decided to terminate accreditation because an institution had not met a set of recommendations, the institution has no recourse to show that processes were in place and that movement had been made toward meeting those recommendations. Third, a longitudinal study of all accrediting bodies across the nation revealed that the ACCJC sanctions colleges at a “significantly higher rate” than any other regional accreditor in America. Fourth, there is much room for improvement at the California Community Colleges Chancellor’s Office (CCCCO) in terms of monitoring colleges across the state and identifying those institutions that may be at risk of sanction or even revocation of accreditation.

For many involved in the accreditation process, these implications may be alarming. For example, while over 80% of the institutions accredited by the ACCJC are public California community colleges, which are subject to stricter standards of process and information transparency than private educational institutions, the ACCJC does not make public its deliberations regarding possible sanctions against a college. Additionally, only 62% of respondents to a statewide survey believed that the commission’s decision-making processes were suitably transparent, which implies that a sizable minority of respondents feel the ACCJC is not as transparent in its decision-making as it should be (California State Auditor, 2014, p. 2).

Interestingly, the State Auditor’s office also discovered a relationship between the commission’s membership and the colleges it sanctions. Between January 2009 and January 2014, only 2 of the 14 institutions that received a sanction had staff members on the commission. While this does not indicate any wrongdoing per se, without an openly transparent forum in which to view the process of accreditation review, there could be grounds for public and institutional skepticism about the efficacy and ethicality of the commission’s decisions.

It is this lack of transparency, combined with concerns about the inability to introduce new evidence when accreditation is revoked, that prompted legal action against the ACCJC in 2014.

Institutional Evolution

What lies beneath the ACCJC’s evolving approach to institutional effectiveness is a growing concern for data-driven, evidence-based planning and operations. For example, strategic planning efforts have begun to focus on the disaggregation of student achievement metrics to catalyze equity-driven program review. When the climate is fertile for cultural change and there is a force to propel it, as has been the case in California with the ACCJC, true institutional evolution can begin to take place.

The epistemological roots of attitudinal shifts in community colleges toward evidence-based planning and effectiveness have been examined from many perspectives. Eaton (2007) explained that state and national government entities, along with accreditation commissions, have been increasing pressure on community colleges to develop and implement systematic approaches to academic and operational planning, institutional decision-making, and program evaluation. Baker and Sax (2012) emphasized that, by adopting such frameworks, higher education leaders can improve institutional effectiveness while latently advancing student achievement goals, even though compliance with the increasing demands of the Accrediting Commission for Community and Junior Colleges (ACCJC) makes the process of institutional evolution challenging.

For any framework to be useful, it must be sensitive to institutional milieu. Organizations have a “cultural DNA” (Schein, 2004, p. 21) that requires sustained, productive momentum for any operational or administrative evolutionary process to occur. Tierney (1988) notes that this is especially true for colleges, and that all institutions of higher learning use a similar framework with which to construct that cultural DNA. In addressing the how of changing institutional culture, Bergquist (1992) goes a step further by developing a conformable framework of four cultural archetypes: collegial, managerial, developmental, and negotiating. Similarly, Kezar and Eckel (2002) posit that institutional evolution requires five components: support from senior administrators, collaborative leadership throughout the college, a robust evolution plan, proper staff development, and visibility of actions and their outcomes.

Bailey & Alfonso (2005) describe evolutionary processes as a shift from a culture of anecdotal analysis toward a culture of evidence. Institutions are encouraged to improve student achievement through the development, deployment, and assessment of empirically-based research instruments that then drive the institutional decision-making process. Just like how biological evolution is a whole-organism process, the successful integration of a metrics milieu requires full spectrum administrative and academic support, adequate enterprise resource planning, and enough time for the change process to take place.

Counter-attitudinal scholarship points to macro-level sampling in effectiveness studies when outlining how the rhetoric of institutional effectiveness evolution is not absolute. Skolits and Graybeal (2007) point out that local cultural influences on institutional effectiveness are rarely presented or accounted for, alluding to the necessity for more holistic, single-sample studies that approach institutional effectiveness change and administration with a phenomenological lens. However, a phenomenological approach, even one sensitive to local culture, violates the statistical necessity for variation and adequate sample size when making confident generalizations (Plummer, 1983).

Some Foundational Elements

A foundational element that I have discovered in the literature on California community college accreditation by the ACCJC is an area of inquiry related to accreditation metrics. As it turns out, though the ACCJC enumerates guidelines for everything from financial planning integration to elements of a college’s mission statement, the reliance on “qualitative opinions derived from quantitative data” (Roose & Anderson, 1972, p. 115) may render the process of self-evaluation biased toward those institutions that have the resources and training necessary to produce or purchase rhetorically strong self-evaluation reports.

But accreditation metrics are only part of the issue. What lies beneath the ACCJC’s evolving approach to institutional effectiveness is a growing concern for data-driven, evidence-based planning and operations. When the climate is fertile for cultural change and there is a force to propel it, as has been the case in California, the resulting change is what I describe as institutional evolution.

As discussed in the literature review above, Eaton (2007) traces the epistemological roots of this attitudinal shift to mounting pressure from government entities and accreditation commissions, while Baker and Sax (2012) emphasize that adopting systematic planning frameworks can improve institutional effectiveness and, latently, student achievement, despite the burden of compliance with the ACCJC’s increasing demands.

Likewise, Schein’s (2004) notion of organizational “cultural DNA,” Tierney’s (1988) and Bergquist’s (1992) cultural frameworks, and Kezar and Eckel’s (2002) five components of change together describe the conditions under which such an evolution can occur. Bailey and Alfonso (2005) characterize the result as a shift from a culture of anecdotal analysis toward a culture of evidence, one that requires full-spectrum administrative and academic support, adequate enterprise resource planning, and enough time for the change process to take place.

Closing Thoughts

The current process of accreditation serves not to measure but rather to explain, and without measurement there is no objective quantification of an institution’s ability to meet its mission. How can we 1) define what an institution ought to do (i.e., what is its product), and 2) determine the best way to measure this? Conversely, is this even an appropriate way to measure an institution of higher education?

My assumption here was that there is a desire for a metric, or even a set of descriptive statistics, on which a college can base its self-evaluation. This is alluded to in my field notes generated through discussions with institutional leaders. These grounded theory approaches to collecting data on accreditation preparedness have yielded three overarching themes: frustration with the inability of academic leaders to accurately predict accreditation strength; the recognition of accreditation in its current form as subjective and thus favoring those institutions that have the capacity and wherewithal to perform rigorous qualitative assessments; and the desire for a wholly objective, peer-reviewed accreditation process that is separate from any one governing commission. All in all, this assumption has remained the same.

Additionally, as someone who works in research and technology in higher education, it is my assumption that all leaders are at least somewhat familiar with the data repositories available to colleges and universities. It is also my assumption, however, that these repositories largely go under-utilized, as both experience and interview data have shown a disparity between the data collected and reported by institutions and the nature of college planning and program review. This points to an aspect of this topic that is open to multiple interpretations, as which datasets should be used for which purposes is highly debatable.

Data mining and knowledge discovery processes could help address the research challenges to accreditation planning. A central concern is whether there exists a set of aggregate quantitative data that could facilitate a new accreditation preparedness metric. As I wrote before, there is a personal interest here for all those involved in institutional effectiveness, because identifying a new metric for measuring an institution’s ability to meet the needs of its community as an institution of higher education would greatly enhance the accreditation planning process. In my own work, I end up relying on professional judgment in accreditation planning rather than objective measures, turning the accreditation planning process into an exercise in forensics rather than a reporting process. This is a problem in colleges across the country, with leaders from different schools oftentimes at odds when it comes to interpreting accreditation standards. It is in this research challenge that the connection among how to plan for accreditation, what data we currently use for accreditation, and what data we should use for accreditation becomes clear. My assumptions here have not changed, either.

Finally, what constitutes an objective picture of an institution is fuzzily defined at best. While field notes have yielded conflicting opinions on what constitutes success in a higher education environment, most academic leaders have agreed that student outcomes, defined here as the relative grades and performance of college students, and student success, defined here as the attainment of institutional student goals (e.g., the conferral of a degree or certificate or the successful transfer to a four-year university), are paramount to judging whether a college should be accredited. I share this opinion, and I think this perspective will aid in discovering how a leader’s perspectives on the accreditation process shape her or his college’s strategic planning efforts and operations. The research I have delved into during this class has only strengthened this resolve.

References

Age, L. J. (2011). Grounded Theory methodology: Positivism, hermeneutics, and pragmatism. The Qualitative Report, 16(6), 1599-1615.

Areen, J. (2011). Accreditation reconsidered. Iowa Law Review, 96(5), 1471–1494.

Bailey, T. R., & Alfonso, M. (2005). Paths to persistence: An analysis of research on program effectiveness at community colleges (New Agenda series Volume 6, Number 1). Indianapolis, IN: Lumina Foundation for Education.

Baker, J. H., & Sax, C. L. (2012). Building a culture of evidence: A case study of a California community college. Journal of Applied Research in the Community College, 19(2), 47-55.

Bergquist, W. H. (1992). The four cultures of the academy: Insights and strategies for improving leadership in collegiate organizations. San Francisco: Jossey-Bass.

Bloland, H. G. (1999). Creating CHEA: Building a New National Organization on Accrediting. The Journal of Higher Education, 70(4), 357–388.

Brittingham, B. (2008). An uneasy partnership: accreditation and the Federal Government. Change: The Magazine of Higher Learning, 40(5), 32–38.

California Community College Chancellor’s Office (CCCCO). (2011). California Community College Chancellor’s Office Website. Retrieved from http://www.cccco.edu

California State Auditor. (2014). California Community College Accreditation (Report 2013-123). Sacramento, CA.

Currie, K. (2009). Using survey data to assist theoretical sampling in grounded theory research. Nurse Researcher, 17(1), 24-33.

Dechter, R., & Michie, D. (1985). Structured induction of plans and programs (Technical report). IBM Scientific Center, Los Angeles, CA.

Delen, D. (2010). A comparative analysis of machine learning techniques for student retention management. Decision Support Systems, 49, 498-506.

Denzin, N., & Lincoln, Y. S. (1994). Handbook of Qualitative Research. Thousand Oaks: Sage Publications.

Douglas, D. (2003). Grounded theories of management: A methodological review. Management Research News, 26, 44-60.

Dunne, C. (2011). The place of the literature review in grounded theory research. International Journal of Social Research Methodology, 14(2), 111–124.

Eaton, J. (2007). Institutions, accreditors, and the federal government: redefining their “appropriate relationship.” Change, 39(5), 16–23.

Farrell, E. F. (2003). A common yardstick? The Bush administration wants to standardize accreditation; educators say it is too complex for that. The Chronicle of Higher Education, 49(49), 25-26.

Glaser, B.G. (1992). Basics of Grounded Theory Analysis. Mill Valley: Sociology Press.

Glaser, B. G. & Strauss, A. (1967). The discovery of grounded theory: strategies for qualitative research. Chicago: Aldine Publishing Company.

Golino, H. F., Gomes, C. M. A., & Andrade, D. (2014). Predicting academic achievement of high-school students using machine learning. Psychology, 5, 2046-2057.

Goulding, C. (2002). Grounded theory: A practical guide for management, business and market researchers. Thousand Oaks: Sage Publications.

Guan, J., Nunez, W., & Welsh, J. F. (2002). Institutional strategy and information support: the role of data warehousing in higher education. Campus-Wide Information Systems, 19(5), 168 – 174.

Jones, D. P. (2002). Different perspectives on information about educational quality: Implications for the role of accreditation. Washington, D. C.: Council for Higher Education Accreditation (CHEA).

Jones, R. & Noble, G. (2007). Grounded theory and management research: A lack of integrity? Qualitative Research in Organizations and Management, 2, 84-103.

Kezar, A., & Eckel, P. D. (2002). The effect of institutional culture on change strategies in higher education. Journal of Higher Education, 73(4), 435-460.

Leef, G. C., & Burris, R. D. (2002). Can college accreditation live up to its promise? Washington, D.C.: American Council of Trustees and Alumni.

Locke, K. (1997). Re-writing the discovery of grounded theory after 25 years? Journal of Management Inquiry, 5, 239-245.

Locke, K. (2001). Grounded theory in management research. Thousand Oaks: Sage Publications.

Luan, J. (2002). Data mining and its applications in higher education. New Directions for Institutional Research, 2002 (113), 17-36. doi:10.1002/ir.35

McClenney, K. M. (2004). Redefining quality in community colleges. Change, 36(6), 16–21.

Plummer, K. (1983). Documents of life: An introduction to the problems and literature of a humanistic method. London: Allen & Unwin.

Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1, 81-106.

Roose, K. D., & Anderson, C. J. (1972). A rating of graduate programs. Teachers College Record, 74(1), 115-116.

Schein, E. H. (2004). Organizational culture and leadership (3rd edition). San Francisco: Jossey-Bass.

Shibley, L. R., & Volkwein, J. F. (2002). Comparing the costs and benefits of re-accreditation processes. Paper presented at the Annual Meeting of the Association for Institutional Research, Toronto, Ontario, Canada.

Skolits, G. J., & Graybeal, S. (2007). Community college institutional effectiveness: Perspectives of campus stakeholders. Community College Review, 34(4), 302-323.

Smaling, A. (2002). The argumentative quality of the qualitative research report. International Journal of Qualitative Methods, 1(3), 2-15.

Stern, P. N. (1994). Eroding grounded theory. In J. M. Morse (ed.), Critical Issues in Qualitative Research Methods (pp. 212-223). Thousand Oaks: Sage.

Strauss, A. L., & Corbin, J. (1990). Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Thousand Oaks: Sage Publications.

Thomson, S. B. (2011). Sample size and Grounded Theory. Journal of Administration and Governance, 5(1), 45-52.

Tierney, W. G. (1988). Organizational culture in higher education: Defining the essentials. Journal of Higher Education, 59(1), 2-21.

Tinto, V. (1975). Dropout from higher education: A theoretical synthesis of recent research. Review of Educational Research, 45(1), 89–125.

Western Association of Schools and Colleges’ Accrediting Commission for Community and Junior Colleges (WASC-ACCJC). (2010). Western Association of Schools and Colleges’ Accrediting Commission for Community and Junior Colleges (WASC-ACCJC) Website. Retrieved from http://www.accjc.org.