Why Your Assurance of Learning System Looks Good on Paper But Fails in Practice

By Dr. Vlad Krotov

Many business schools operate under the illusion of an effective Assurance of Learning (AoL) system. On paper, everything looks perfect: clearly stated learning outcomes, well-designed rubrics, defined targets, and annual reports that appear thorough and complete. During accreditation reviews, these systems are often presented as evidence of a mature, well-functioning continuous improvement process.

However, experienced Peer Review Team (PRT) members can usually see through this very quickly. In fact, one of the most common questions they ask during visits is deceptively simple: “Can you give us specific examples of improvements that resulted from your AoL data?”

This question can separate AoL systems that work from systems that merely exist. When faculty struggle to provide clear, concrete examples, it becomes evident that the AoL process is not truly driving improvement. And yet, in many cases, the documentation still looks strong.

This is where the disconnect becomes clear. Many AoL systems are well-designed on paper, but they fail to function effectively in practice. Faculty experience AoL as an administrative burden rather than an academic tool. Committees meet irregularly. Reports are completed, filed, and rarely revisited. Most importantly, the system does not consistently lead to real, tangible improvements in student learning.

Typical Mistakes in Assurance of Learning

The gap between what exists on paper and what happens in reality is where most AoL systems begin to fail. This usually happens due to the mistakes discussed below. 

Mistake 1: Too Many Outcomes, Too Little Focus

One of the most common design issues is simple: schools try to assess too much.

Programs often define:

  • 8–12 Program Learning Outcomes (PLOs) 
  • Multiple measures for each outcome with sub-criteria 
  • Several layers of lengthy rubrics 

While this may appear comprehensive, it creates a system that is difficult to manage and even harder to interpret. Faculty are asked to assess too many things at once, often across multiple courses and time periods.

The result is predictable:

  • Assessment becomes superficial rather than meaningful
  • Data is collected, but not deeply analyzed
  • Faculty disengage due to growing workload

Also, more data does not lead to better decisions. In fact, it often obscures the insights that matter most.

Effective AoL systems are selective. They focus on fewer outcomes, measured well, rather than many outcomes measured poorly.

Mistake 2: Measuring the Wrong Things

Another common issue is misalignment between what is intended and what is actually measured.

In many cases, programs:

  • Assess broad goals instead of specific, measurable PLOs
  • Use rubrics that capture general performance rather than targeted competencies
  • Rely on assignments that do not directly reflect the stated outcome

This often happens in programs that must align with external standards (for example, professional or discipline-specific requirements). While alignment is important, it can unintentionally shift the focus away from the actual learning outcomes the program is supposed to assess.

The consequences are subtle but significant. The data looks structured and complete, and reports appear professional and well-organized. But the conclusions are not meaningful.

In short, the system produces information, but not specific, actionable insights. 

A well-designed AoL system ensures tight alignment:

  • PLO → Assessment Method → Rubric → Target 

When this alignment is missing, the entire system becomes performative rather than informative.

Mistake 3: “Closing the Loop” as a Ritual

Most schools understand what to do when targets are not met. If only 60% of students meet a benchmark of 80%, the response is straightforward: identify the issue and implement improvements.

But what happens when targets are exceeded?

In many cases, the response is minimal:

  • “Students performed well.”
  • “No changes are needed.”
  • “We will continue current practices.”

This is where the concept of “closing the loop” quietly breaks down.

If results consistently exceed targets, important questions should still be asked:

  • Are the targets too low?
  • Is the assessment too easy?
  • Are we capturing the full depth of the outcome?
  • What exactly is working—and can it be replicated elsewhere?

Without this reflection, success becomes a stopping point rather than a learning opportunity. Closing the loop is not about reacting to failure. It is about learning from both success and failure in a systematic way.

Mistake 4: No Clear Ownership or Accountability

Many AoL systems suffer from a governance problem. Responsibility is often distributed across committees, departments, and faculty groups without clearly defined roles. While this may seem collaborative, it frequently leads to ambiguity.

Typical symptoms include the following:

  • Irregular committee meetings
  • Delays in data collection and reporting
  • Inconsistent interpretation of results
  • Loss of continuity when faculty rotate off committees

When everyone is responsible, no one is accountable.

Effective systems define:

  • Who collects the data
  • Who analyzes the results
  • Who leads discussions
  • Who ensures that improvements are implemented

Clarity of roles is not bureaucratic; it is essential for consistency and sustainability.

Mistake 5: AoL Is Not Embedded in Normal Academic Work

Perhaps the most important issue is structural. In many schools, AoL operates as a separate process:

  • Data is collected through special forms or templates
  • Faculty complete additional reporting tasks outside their teaching
  • Assessment activities are disconnected from regular coursework

This creates a system that feels artificial and burdensome. The consequences are predictable:

  • Low faculty engagement
  • Inconsistent data quality
  • Perception of AoL as “extra work”

In contrast, effective AoL systems are embedded into normal academic activities. This often includes:

  • Signature assignments used for both grading and assessment
  • Rubrics that serve both instructional and assessment purposes
  • Data collection integrated into existing systems (e.g., LMS or centralized platforms)

When AoL becomes part of what faculty already do, it stops being a burden and starts becoming useful. Faculty start viewing assessment not as an additional task but rather as an essential part of their teaching responsibilities. 

What Effective AoL Systems Do Differently

High-functioning AoL systems are not necessarily more complex. In fact, they are often simpler and more intentional. Effective AoL systems tend to share several characteristics:

  • A limited number of well-defined, meaningful PLOs
  • Strong alignment between outcomes, assessment method, rubrics, and targets
  • Embedded assessment methods that fit naturally into courses
  • Clear roles, timelines, and accountability structures
  • Regular, substantive discussions of results
  • A genuine commitment to learning from both strong and weak performance

Most importantly, these systems are designed with usability and sustainability in mind. They recognize that faculty time and attention are limited resources, and they aim to maximize impact while minimizing unnecessary complexity.

From Compliance to Continuous Improvement

Assurance of Learning was never intended to be a compliance exercise. At its core, it is about understanding whether students are achieving meaningful outcomes and using that knowledge to improve programs.

Most accreditation frameworks, such as AACSB International, emphasize continuous improvement via AoL, but the effectiveness of this process depends entirely on how the system is designed and implemented by a business school. AoL systems that are overly complex, poorly aligned, or disconnected from everyday academic work are unlikely to lead to tangible improvements in student learning. Masking this failure with extensive, professional-looking documentation will not help.

How Accreditation.Biz Can Help

The good news is that most AoL problems are not faculty problems. Most faculty members take teaching seriously and view assessment as an important vehicle for improving the quality of the education they deliver. Most AoL problems are design problems. A well-designed AoL system should make life easier, not harder. It should provide clear insights, support meaningful discussions, and integrate seamlessly into existing academic processes.

An experienced accreditation partner such as Accreditation.Biz can help your school:

  • Simplify and streamline AoL processes and structures
  • Align PLOs, assessment methods, and rubrics effectively
  • Embed assessment into normal teaching activities
  • Establish clear roles and sustainable AoL processes
  • Avoid common design mistakes that lead to faculty burnout

The purpose behind AACSB Standard 5 is not to force business schools to do more assessment. Rather, the standard requires business schools to design AoL systems that support tangible, continuous improvement in student learning.

When “Good” Isn’t Good Enough: Continuous Improvement After You Meet AoL Targets


By Dr. Vlad Krotov

Understanding the Intent of AACSB Standard 5

Under AACSB Standard 5, Assurance of Learning (AoL) is far more than reaching a certain numeric target for student competencies. The standard expects every accredited business program to articulate clear, measurable program learning outcomes (PLOs) and meaningful performance targets; assess those outcomes using valid and reliable instruments; analyze and discuss the results; and implement improvements based on those findings.

The final component, often referred to as “closing the loop,” is what transforms assessment from a measurement exercise into a quality enhancement system. Without documented improvement actions that follow from data analysis, the AoL cycle remains incomplete. The goal is not simply to demonstrate that learning occurred, but to show how assessment evidence systematically informs better teaching, better curriculum design, and stronger alignment with the school’s mission and strategic objectives.

When Targets Are Missed vs. When Targets Are Met

When assessment results fall below the established benchmark, the path forward is relatively clear. Faculty recognize that something may need adjustment. They may refine assignments, recalibrate rubrics, modify instructional strategies, or introduce additional student support mechanisms. In these situations, improvement feels necessary and natural.

However, a more subtle situation occurs when results meet or exceed the target. When 85%, 90%, or even 95% of students achieve the learning objective, the discussion often becomes brief. Faculty may conclude that everything is working well and that no changes are required. While this response appears reasonable, it risks undermining the spirit of continuous improvement underlying AACSB Standard 5. Assurance of Learning is not designed to confirm adequacy; it is designed to promote ongoing enhancement.

Reflecting on Success

When targets are met, the first responsibility of faculty is thoughtful reflection. Strong results should prompt careful inquiry into their underlying causes. Some of the questions that can be asked in light of strong results are discussed below.

Question 1: Why Were the Results Strong?

Faculty analyzing the results should first reflect on why they were so strong. The following questions can help:

  • Were there specific pedagogical approaches that contributed to improved performance? 
  • Did better alignment between course objectives and assessment instruments make expectations clearer to students? 
  • Was there greater consistency in rubric application across sections? 
  • Did curricular sequencing better prepare students for this particular competency?

Answering these questions generates valuable institutional knowledge. Instead of assuming that success will automatically continue, faculty identify the practices that produced strong outcomes and ensure they are sustained, documented, and shared. This deliberate reflection strengthens the program's teaching practices and reduces reliance on individual teaching styles alone. In doing so, the program builds a more resilient and transferable model of effective instruction.

Question 2: Was the Target Challenging Enough?

A second, more challenging question must also be addressed: were the set targets sufficiently demanding? If results consistently and comfortably exceed the benchmark, it may indicate that the performance standard was set too low. Targets should not be symbolic thresholds designed for easy attainment; they should represent meaningful expectations that reflect the school’s aspirations and strategic positioning.

Raising performance thresholds, refining the criteria for “exceeds expectations,” or introducing more sophisticated assessment tasks may be appropriate responses. Increasing the target does not signal dissatisfaction with students or faculty. Rather, it reflects confidence in student capability and a commitment to academic rigor and growth. Continuous improvement often requires redefining what excellence looks like.

Question 3: Are There Additional Opportunities for Improvement?

Even when targets are appropriately calibrated and performance is genuinely strong, opportunities for further enhancement often remain. Continuous improvement does not always mean correcting deficiencies; it can involve innovations that deepen and expand learning. Faculty might consider whether students can demonstrate more advanced integration of knowledge, stronger analytical depth, greater ethical reasoning, or more polished communication skills.

In this sense, improvement becomes developmental rather than remedial. The absence of problems does not imply the absence of growth potential. A mature AoL system encourages faculty to explore incremental refinements that gradually elevate program quality over time.

AoL Is About Improvement, Not Percentages

Ultimately, Assurance of Learning is not about achieving a particular numerical threshold. It is about cultivating a disciplined, evidence-based culture in which faculty use assessment data to continuously enhance student learning and advance the school’s mission and vision. Whether targets are missed, met, or exceeded is secondary to the central question: how does this information help us improve?

When programs treat strong results as the end of the conversation, AoL becomes a compliance exercise. When they treat strong results as an opportunity for reflection, recalibration, refinement, and innovation, they embody the true intent of AACSB Standard 5: continuous improvement. Continuous improvement is not triggered only by shortcomings. It is a permanent expectation—one that remains in force regardless of how impressive the numbers may appear.

Why Faculty Resist AOL and How to Fix It


By Dr. Vlad Krotov

AACSB Standard 5 is one of the most important accreditation standards because it makes business schools explicitly accountable for student learning. Unfortunately, in practice, Assurance of Learning (AOL) often presents the greatest challenge for business schools undergoing accreditation.

Across business schools worldwide, AOL is the area most associated with faculty resistance, missed deadlines, incomplete data, and the infamous failure to “close the loop.”

Importantly, this resistance rarely comes from a lack of commitment to students or teaching quality. Instead, it usually reflects how AOL is designed, supported, and managed within the institution.

The sections below examine the most frequent causes of faculty resistance to AOL and, perhaps more importantly, what actually works to overcome these obstacles.

1. Faculty Don’t Understand AOL

The Problem

Many faculty members experience AOL as a confusing, jargon-heavy, compliance-driven exercise. Terms like learning goals, rubrics, direct measures, and closing the loop can feel abstract, disconnected from day-to-day teaching, and inconsistently applied across programs.

When faculty don’t clearly understand what AOL is, why it exists, or how it improves student learning, resistance is almost inevitable.

What Works

The following can be done to equip faculty with practical AOL knowledge:

  • Invest in practical, faculty-centered AOL training
  • Focus on how AOL supports better teaching, not just accreditation
  • Provide hands-on workshops using the school’s actual courses and assignments

For example, Accreditation.Biz works directly with faculty to both train them on AOL concepts and co-develop assessment plans, rubrics, and reports, reducing confusion and anxiety.

2. Faculty Lack Time

The Problem

Faculty are already stretched thin by teaching, research, advising, and administrative work. AOL often feels like “one more unfunded mandate” added on top of an already overloaded workload. When AOL is perceived as extra work with no support, deadlines slip and enthusiasm disappears.

What Works

The following steps can free up faculty time so that they can carry out their normal duties while still engaging in AOL in a meaningful way:

  • Hire dedicated AOL staff or assessment coordinators
  • Use consultants to handle technical and administrative tasks
  • Streamline data collection and reporting processes

Faculty should focus on academic judgment and improvement, not on spreadsheets, templates, and accreditation compliance paperwork.

3. There Are No Meaningful Rewards for AOL Work

The Problem

In many schools, faculty see AOL work as:

  • Invisible
  • Uncompensated
  • Not valued in tenure, promotion, or merit decisions

When faculty realize that AOL efforts are not rewarded, financially or professionally, they quickly deprioritize this important task.

What Works

The following mechanisms can be used to offer faculty tangible incentives for participating in AOL:

  • Create financial stipends or course releases for AOL leadership
  • Establish non-financial recognition, such as awards and public acknowledgment
  • Make AOL contributions a formal component of annual reviews, tenure, and promotion

Recognizing and rewarding AOL champions signals that assessment work truly matters.

4. AOL Doesn’t Lead to Real Change

The Problem

Perhaps the most demoralizing issue: faculty collect data year after year, but nothing changes. Reports are written, uploaded, and forgotten; there are no curricular revisions, no pedagogical innovation, and no feedback loops. This situation creates deep cynicism towards AOL. Indeed, why bother if nothing changes? 

What Works

The following can be done to make AOL a real vehicle for positive change and improvement in student learning:

  • Enforce full AOL cycles, including documented actions and follow-up assessments
  • Implement changes at course, program, department, and college levels
  • Actively communicate improvements that resulted from AOL to all the relevant stakeholders

When faculty see tangible improvements tied directly to assessment results, buy-in increases dramatically.

Additional Resistance Factors You May Be Overlooking

Beyond the “big four,” several other factors often fuel AOL resistance:

  • Fear of evaluation or blame (AOL perceived as faculty performance review)
  • Inconsistent leadership support
  • Overly complex assessment systems
  • Poor alignment between learning goals and curriculum

Each of these issues points back to system design—not faculty motivation.

Conclusion: It’s Not the Standard—and It’s Not the Faculty

AACSB Standard 5 is not the problem. Faculty commitment to teaching and learning is not the problem either.

In most cases, the real issue is an inefficient, overly complex, or poorly supported AOL system.

With the right design, tools, incentives, and leadership, AOL can:

  • Improve student learning in meaningful ways
  • Reduce faculty frustration and resistance
  • Strengthen accreditation outcomes
  • Build a sustainable culture of assessment

An experienced accreditation partner like Accreditation.Biz can design, implement, and manage an AOL system that meets AACSB Standard 5, supports faculty, and, most importantly, delivers real improvement in student learning.

When AOL works well, faculty stop resisting it and start owning it.