What is indirect assessment?

By Dr. Vlad Krotov

The term indirect assessment refers to methods of evaluating student learning and outcomes that rely on perceptions, reflections, and other evidence rather than on direct measures of student knowledge or skills. Typical indirect assessment tools include alumni surveys, interviews with graduating students, employer focus groups, reviews of students’ reflective essays, etc. Indirect assessment is designed to gather data on student experiences, satisfaction, and perceived learning outside the traditional classroom environment, in order to better understand the effectiveness of educational programs and practices.

AACSB standards emphasize the importance of both direct and indirect assessment in demonstrating a school’s commitment to excellence. There are several reasons why indirect assessment is important for AACSB and other accreditation agencies: 

    • Comprehensive Evaluation: Indirect assessment complements direct assessment methods (such as exams, projects, and practical demonstrations) by providing a more holistic view of student learning and program effectiveness.

    • Student Feedback: Indirect assessment captures students’ perspectives on their learning experiences, which can highlight strengths and weaknesses in the curriculum, teaching methods, and overall educational environment.

    • Stakeholder Engagement: Indirect assessment involves engaging various stakeholders, including students, alumni, and employers, which can strengthen the connection between the school and its broader community. This engagement can provide valuable feedback for aligning educational programs with the needs and expectations of these stakeholders.

In summary, indirect assessment is a crucial component of accreditation because it helps ensure that business schools are not only achieving their educational goals but also continuously improving and meeting the needs of their students and other stakeholders.

Components of an Effective Assurance of Learning (AoL) System

By Dr. Pitzel Krotova

The Assurance of Learning (AoL) Requirement

A variety of terms have been used to describe Assurance of Learning (AoL): assessment, assessment of learning outcomes, outcomes assessment, and many others. AoL, or assessment, can be defined as a systematic, evidence-based approach for measuring the degree to which students meet stated learning outcomes and for devising changes that improve student learning. 

An effective and sustainable AoL system is an important requirement of many accreditation standards, such as those of the Association to Advance Collegiate Schools of Business (AACSB) and the Southern Association of Colleges and Schools Commission on Colleges (SACSCOC). For example, Standard 5 (p. 41) of the 2020 Guiding Principles and Standards for Business Accreditation published by AACSB requires that each school:

… uses well-documented assurance of learning (AoL) processes that include direct and indirect measures for ensuring the quality of all degree programs that are deemed in scope for accreditation purposes. The results of the school’s AoL work leads to curricular and process improvements.

Similarly, Section 8 of the Principles of Accreditation: Foundations for Quality Enhancement published by SACSCOC requires that each educational institution:

…identifies, evaluates, and publishes goals and outcomes for student achievement appropriate to the institution’s mission, the nature of the students it serves, and the kinds of programs offered. The institution uses multiple measures to document student success.

Different Ways to Fail in AoL

Despite its importance for continuous quality improvement in student learning and for attaining and reaffirming accreditation, AoL is also an area where, according to the comments of many accreditation reviewers, business schools often struggle or simply fail. While each successful AoL system has characteristics that are unique to the institutional mission and the context in which the system is implemented, it can be argued that an unsuccessful AoL system usually fails in one or more of the four areas listed below:

    1. The AoL system is not designed in accordance with best practices or requirements of major accreditation agencies
    2. The AoL system is not fully aligned with the institutional mission and goals
    3. The AoL system does not produce any traceable changes that result in improvement in student learning
    4. The AoL system is not sustainable; it is quickly abandoned after one or more cycles of assessment

Components of a Successful AoL System

In order to comply with the accreditation requirements and produce specific, traceable changes that improve student learning and help the institution achieve its mission and goals, an AoL system must rely on the following components:

    • Clear, collectively developed institutional mission and goals
    • A list of Institutional Learning Outcomes (ILOs)
    • A list of Program Learning Outcomes (PLOs) for each program included in assessment
    • A list of specific, measurable Course Learning Objectives (CLOs)
    • Alignment of ILOs, PLOs, and CLOs with institutional mission and goals
    • Alignment of mission, goals, and learning outcomes with appropriate accreditation standards or industry requirements
    • An AoL committee or task force composed of representatives from various academic and nonacademic units and levels of a higher education institution (HEI)
    • Templates outlining the general approach and technicalities of assessment and reporting
    • A clear, collectively developed and mutually agreed upon assessment plan
    • Alignment of assessment planning and reporting
    • Templates for assessment maps or matrices
    • A simple yet robust system for gathering, analyzing, and distributing AoL data
    • Availability of Assessment Coordinators to carry out assessment for individual programs, departments, and colleges
    • A variety of formative and summative measures appropriate for the stated learning outcomes
    • Clear assignment of responsibilities in relation to AoL
    • Clear deadlines for various AoL events and deliverables
    • A formal approach to assessment data analysis
    • A template for assessment reports
    • Dissemination of assessment data to all the relevant stakeholders
    • Decision making, recommendations, and changes driven by assessment data
    • Continuous improvement in student learning and success based on AoL
    • Contribution to broader societal goals through improved teaching and student learning

As one can see from the list, an effective AoL system requires many components and a holistic approach based on the collaboration of all the important stakeholders in education, such as faculty, administrators, staff, students, employers, HEIs, accreditation agencies, and society as a whole.

Student Learning Outcomes Assessment

By Dr. James E. Mackin

Assessment

The assessment of student learning outcomes is probably the most fundamentally important type of assessment that an institution of higher education carries out.  By performing student learning outcomes assessments, the institution is asking:

    1. What have students learned?
    2. How can we tell what students have learned?
    3. How can we put our knowledge about what students have learned to use in providing better instruction/assessment of students?

In order to begin to address these questions, it is imperative that all of your faculty members understand how to express measurable student learning outcomes, and you will likely need to conduct numerous workshops on the topic before you can even begin to organize your student learning outcomes assessment protocols and processes.  A good starting point for the discussion is Bloom’s Taxonomy, which classifies levels of learning and the action verbs used to express measurable student learning outcomes.

Student learning outcomes assessment will need to occur in four different contexts at the institution: institutional, general education, academic program, and individual courses.  The assessments can be direct (e.g., some percentage of students performed at a satisfactory level on a multiple-choice question on a test) or indirect (e.g., information gathered from surveys), and they can be course-embedded (e.g., a presentation in a class) or they can be external to classes (e.g., results from implementation of a standardized test).  The question is, how do we put it all together to make some sense out of what we think students are actually learning at our institution?

It is very important that all learning outcomes are embedded in the planning and assessment process.  This means that a learning outcomes assessment plan should be included in each department’s strategic plan for each program that is offered by the department.  In addition, the department’s strategic plan should have departmental learning goals and objectives that are broadly connected to the learning outcomes in their program-based learning outcomes assessment plans.

A general template for learning outcomes assessment plans is provided below.

Normally, sections 1-3 of the template will be completed only at the beginning of the life of the relevant strategic plan or upon initial establishment of a new program, while sections 4-6 will be updated on every assessment cycle.  Also, graduate programs will not use section 1, where the program learning outcomes are linked to the institutional learning outcomes.  Graduate programs are more specialized than undergraduate programs and, although we need to ensure that they are delivering on their promises, these programs generally fall outside the realm of institutional learning outcomes.

There are a variety of tools that can be used to assess student learning outcomes in a program, depending on the outcome, and these tools will be expressed in section 2 of the template.  Examples include:

    • Course-embedded rubrics;
    • Applied learning projects;
    • Standardized tests;
    • Capstone presentations;
    • Course projects;
    • Senior surveys;
    • Employer surveys;
    • Alumni surveys;
    • Jury panel surveys;
    • External examiner surveys;
    • Pre- and post-test comparisons;
    • Selected examination questions;
    • Comprehensive exams;
    • Senior theses.

In addition, student learning outcomes will be embedded in each course in an academic program, although the nature of the outcomes and the level of student achievement expected will vary from course to course.  Section 3 of the template provides space for you to compile the level of learning outcome achievement expected for all courses in the program, including introductory, intermediate, and mastery.  Although not all courses will address all learning outcomes, the combination of all courses in the program should give students the opportunity to achieve mastery in all of the program student learning outcomes.  Ideally, lower level courses provide the opportunity for introductory achievement of learning outcomes, while intermediate and upper level courses provide opportunities for intermediate and mastery achievement of learning outcomes, respectively.

A space for the raw data that is used in assessments is not included in the template; however, that data should be stored in easily accessible locations.  Again, as in the case of strategic planning, learning outcomes assessment typically involves massive amounts of data, and it would behoove the institution to invest in software that is specifically designed for outcomes assessments.
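
As an illustration, the kind of aggregation such assessment software performs can be sketched in a few lines of Python. This is a minimal sketch only; the record fields, outcome labels, and scores below are hypothetical and not drawn from any particular assessment platform:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class AssessmentRecord:
    """One course-embedded assessment result (hypothetical fields)."""
    program: str   # e.g., "BBA"
    course: str    # e.g., "MGT 490"
    outcome: str   # program learning outcome ID, e.g., "PLO-1"
    level: str     # "introductory", "intermediate", or "mastery"
    score: float   # rubric score, e.g., on a 1-4 scale

def mean_score_by_outcome(records):
    """Average rubric score per program learning outcome."""
    by_outcome = defaultdict(list)
    for r in records:
        by_outcome[r.outcome].append(r.score)
    return {plo: sum(s) / len(s) for plo, s in by_outcome.items()}

records = [
    AssessmentRecord("BBA", "MGT 490", "PLO-1", "mastery", 3.2),
    AssessmentRecord("BBA", "MGT 490", "PLO-2", "mastery", 2.8),
    AssessmentRecord("BBA", "MKT 360", "PLO-1", "intermediate", 3.0),
]
print(mean_score_by_outcome(records))  # {'PLO-1': 3.1, 'PLO-2': 2.8}
```

Dedicated assessment software adds storage, access control, and reporting on top of exactly this kind of aggregation, which is why keeping the raw data in accessible, structured form matters.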

For undergraduate programs, every context of learning outcomes assessment should be connected to every other context.  This is the reason that the template includes the section (section 1) where program learning outcomes are related to the institutional learning outcomes.  Not all institutional learning outcomes will be addressed at all levels (introductory, intermediate, mastery) by an individual major program, but the combination of the major program, the general education program and any additional graduation requirements must address the institutional learning outcomes at the mastery level for every student.  This requirement is one reason that many institutions have chosen to address their institutional learning outcomes completely through the general education program.

Because the program learning outcomes are embedded in the strategic planning process, all program learning outcomes will be carried through the review cycle shown in Figure 1.

Figure 1. An Idealized Planning and Assessment Review Sequence.
In this sequence, the academic department plans are reviewed by a group of department chairs/heads in a school/college and then the plans are reviewed by the relevant dean, who incorporates appropriate changes into the school/college plan. By the same token, the school/college plans are reviewed by Deans Council and then by the Provost for incorporation into the Academic Affairs Plan. The institutional oversight committee (in this case, the Budget and Planning Committee) reviews all plans on an annual basis.

A faculty subset of the institution-wide committee (the “Budget and Planning Committee” depicted in Figure 1), or perhaps another committee, should then be tasked with reviewing the status of the institution with respect to the institutional student learning outcomes.  The data for this review would come from the information submitted for the program student learning outcomes assessments. 

If there are additional graduation requirements that lie outside the major and general education programs, the learning outcomes assessment data for those requirements will need to be collected and analyzed. It is possible to address the student learning outcomes for the general education program using the template.  However, it is important to recognize that the general education program is a unique concept in the sense that, at most institutions, there is no single entity, like an academic department, that oversees general education.  It is for this reason that most institutions have the equivalent of a “General Education Committee” and many have a “General Education Assessment Committee” that deals with general education for the institution as a whole.  These committees consist of faculty members from across all the areas encompassed by the general education program, and they usually also include accountable administrators.  As a general rule, the General Education Committee will be the originator of the program (i.e., the committee that approves all new requirements and any changes needed in the program), while the General Education Assessment Committee will be responsible for program review.  Because the responsibilities of the two types of committees are intimately related – one creates and the other reviews – it is often desirable to combine the two into a single committee that handles both sets of responsibilities.

Reviewing the general education program and making changes (i.e., “closing the loop”) is an area where most institutions struggle, simply because of the wide variety of interests that are involved.  The situation is simplified when courses are specifically designed for the general education program, but that scenario is rare in higher education.  More often, courses serve the dual purposes of meeting academic program requirements and general education requirements.  This is one reason that you will want to tie all of your programs, including the general education program, to the institutional learning outcomes.  That strategy gives you the flexibility that you need to essentially serve all masters, while at the same time simplifying and constraining approaches to learning outcomes assessment.

Your General Education (Assessment) Committee can come in at the end of the chain in Figure 1 and specifically review the general education learning outcomes while the program and institutional learning outcomes are also being evaluated, probably by other committees.  In principle, if all of the general education course requirements are also program requirements, it should be possible to compile the assessment information for the general education program from the information that is submitted for the academic programs.  If there are courses in the general education program that are not part of any academic program, then the assessment information will need to be solicited from the relevant course instructors, and separate forms should be developed for that purpose. 

A Simple, Robust Assurance of Learning (AoL) System for a Doctor of Business Administration (DBA) Program

By Vlad Krotov, PhD

Client

A young, large private university in the Middle East undergoing initial accreditation with WASC and AACSB.  

Client Requirements

The engagement involved designing a simple and robust Assurance of Learning (AoL) system for a Doctor of Business Administration (DBA) executive doctoral program offered by a large private university in the Middle East. The system had to meet the following requirements: 

a) Compliant with AACSB Standards. The College of Business where the program was offered was going through initial accreditation with AACSB; therefore, the system had to meet all AACSB requirements in relation to AoL.

b) Simple. The system had to be simple enough that the DBA faculty could quickly understand and contribute to the continuous quality improvement program based on this AoL system. This was important to accommodate changes in the roster of faculty teaching in the DBA program, as well as general changes in the policies and curriculum of this newly established program.

c) Reliable. The system had to produce reliable, useful results. It was important for the system to include a “pre-test” and a “post-test” to produce meaningful results in relation to program learning goals. Also, the measurement tools had to yield both quantitative and qualitative results to inform further improvements. 

Solution

As the first step, the curriculum of the DBA program was aligned with the five Program Learning Outcomes (PLOs). The results of the curriculum alignment process are provided in the Course Alignment Matrix (CAM) below (see Figure 1): 

Figure 1. Course Alignment Matrix (CAM) for the DBA Program

The extent to which Doctor of Business Administration (DBA) students have mastered the learning outcomes of the program is assessed at three strategic points: the METH 1 “Introduction to Business Research” course (Assessment Point 0), RSCH 1 “Research Proposal” (Assessment Point 1), and RSCH 2 “Dissertation” (Assessment Point 2) (see Figure 2 below).

Figure 2. AoL System for the DBA Program

At each of the assessment points, the Research Evaluation Rubric is used to assess student performance. The rubric relies on two other rubrics developed by the College: the General Analytical Writing Assessment Rubric and the Oral Presentation Rubric. In the METH 1 course, the basis for assessment is the student’s “mock” dissertation proposal – an exercise in which students, drawing on their limited knowledge of their research domain and a high-level understanding of research methods, describe and present a rough plan for their possible future dissertation research. The assessment is done by the instructor teaching this course. This assessment point establishes a baseline for student knowledge and skills in relation to the program learning outcomes shortly after students join the DBA program. In RSCH 1, the basis for assessment is the actual research proposal document and presentation. Finally, in RSCH 2, the basis for assessment is the final dissertation document and the final dissertation defense. In RSCH 1 and RSCH 2, assessment is done by the dissertation committee chair.  In both cases, assessment results are submitted to the Program Director for analysis. Subsequent changes in the curriculum follow the standard curriculum revision process implemented by the College and presided over by the College Curriculum Committee.
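
The pre-test/post-test logic of this design can be sketched as a simple score comparison: mean rubric scores at the baseline (Assessment Point 0) are subtracted from those at the final point (Assessment Point 2) for each PLO. The sketch below is illustrative only; the PLO labels and scores are hypothetical, not actual program data:

```python
def plo_growth(baseline, final):
    """Change in mean rubric score per PLO (final minus baseline)."""
    return {plo: round(final[plo] - baseline[plo], 2) for plo in baseline}

# Hypothetical mean scores on a 1-4 rubric scale
point_0 = {"PLO1": 2.1, "PLO2": 1.8, "PLO3": 2.4}  # baseline: METH 1 "mock" proposal
point_2 = {"PLO1": 3.5, "PLO2": 3.2, "PLO3": 3.6}  # final: RSCH 2 dissertation defense

print(plo_growth(point_0, point_2))  # {'PLO1': 1.4, 'PLO2': 1.4, 'PLO3': 1.2}
```

A positive difference on every PLO is the kind of evidence the Program Director would look for when analyzing the submitted rubric scores.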

Results

With this simple, robust AoL system, the College was able to “close the loop” in relation to the DBA program in just one year (see Figure 3 below): 

Figure 3. Closing the Loop

The results of the newly designed AoL system indicate noticeable growth in the level at which doctoral students master the PLOs across semesters (see Figure 4 below). These improvements are largely a result of the recommendations for curriculum and policy changes submitted by the DBA faculty participating in the AoL process. 

Figure 4. AoL Results Across Semesters

The College believes that this assessment method allows for closer monitoring of individual students with respect to their achievement of the learning goals of the program. The AoL system was received positively by the AACSB Review Team.