The Main Pedagogical Goals of Case-Based Teaching

By Dr. Vlad Krotov

Put simply, a teaching case is a real or fictional story about an organization, its employees, and the issues they are facing. The value of the case study method comes from two broad areas. First, using case studies in class allows students to learn more about real companies – their successes and challenges. This has significant practical value: students can apply what they learn about real companies and managers in their own workplaces. From a pedagogical standpoint, case studies provide illustrations of class concepts within a real-life context. Second, using case studies allows an instructor to facilitate the development of higher-order cognitive skills among students. These higher-order skills are developed by asking students to apply class material to real-world situations. The material can be applied by finding an instance of a class concept in the case that clarifies a situation the company is facing. Alternatively, management frameworks can be used to analyze or evaluate the current state of the company and to create recommendations based on the more prescriptive theories that students learn in class. Each of these valuable dimensions of the case study method is expanded upon in the paragraphs that follow.

Business is a professional field rather than a hard science like physics. In professional fields, idiosyncratic knowledge about individuals and organizations is arguably more valuable than knowledge of theoretical generalizations. If this is the case, then business professionals should benefit greatly from past stories of successful (or unsuccessful) companies and managers – just as doctors benefit from studying prior medical cases and pondering their patients’ medical histories. Case studies often provide fairly detailed, multi-point accounts of organizations and their employees. This knowledge can help students identify problems and create solutions for their own organizations, just as knowledge of prior medical cases helps doctors treat their current patients.

Moreover, these idiosyncratic accounts of people and organizations are often told with a level of detail, complexity, and ambiguity close to that of real-life business scenarios. Anyone with experience working in an organization understands that important organizational problems have many contributing factors and related issues. It is often hard to narrow a problem down to a single factor and suggest a simple solution that targets that factor alone. Moreover, the way various factors contribute to an important problem is often not deterministic and is highly intertwined with other issues and factors. Thus, it is hard to tell with certainty which factors contribute to the problem, in what way, and to what extent. All of this makes case studies a valuable vehicle for exposing students to the complexity that is often present when dealing with real-world organizational issues.

As mentioned earlier, addressing these highly complex and often ambiguous organizational problems requires higher-order cognitive skills, such as the ability to apply, analyze, evaluate, and create (see Figure 1). For example, answering the question of why a particular company is experiencing a decrease in market share may require analyzing the industry the company is in, using the Five Industry Forces Framework. Similarly, evaluating the company’s current situation may require organizing and weighing a number of internal and external factors. Creating and recommending a potential solution requires the ability to understand and contextualize various prescriptive management theories (e.g., pursuing strategies that utilize internal strengths to neutralize external threats or take advantage of external opportunities is likely to improve organizational performance). Analyzing, evaluating, and recommending are all higher-order cognitive skills as outlined by Bloom’s Taxonomy. Case studies provide a fruitful platform for students to practice and develop these important cognitive skills.

Figure 1. Learning Goals under Bloom’s Taxonomy

Finally, case studies give students an opportunity to learn how to apply what they learn in class to real-world situations. Business is largely an applied field. Basic, fundamental theories from psychology are applied to manage people within an organization. Fundamental theories of micro- and macroeconomics are applied to an organization or industry to formulate long-term strategies. Basic mathematical models are used in accounting and finance. Thus, being able to apply concepts or theories within real organizations to address organizational problems or pursue opportunities is an essential skill for any business professional. Case studies allow students to do precisely that: to practice applying abstract concepts, frameworks, and theories to complex, ambiguous organizational contexts for the purpose of attaining positive results for the organization.

It should be noted that these important learning goals can hardly be achieved with what seems to be a more common approach to higher education: in-class lectures delivered using PowerPoint slides followed by multiple-choice exams. Multiple-choice questions largely test one’s ability to remember discrete facts, not necessarily to understand a complex organizational problem in a holistic fashion. Moreover, while multiple-choice questions are capable of assessing one’s ability to apply concepts, they often do so in isolation from the complexity and ambiguity of the context within which these concepts are applied. Finally, there is an inherent determinism in multiple-choice questions: only one answer is the correct one. This stands in sharp contrast to the case study approach, where there is no such thing as a “hundred percent correct answer”.

This is not to say, however, that multiple-choice questions are inherently inferior to case-based assessment. Multiple-choice questions serve a different purpose: to assess lower-order cognitive skills, such as the ability to remember facts or understand basic concepts. That is why multiple-choice assessment is so common in lower-level, undergraduate, and introductory courses. Also, with some ingenuity, one can create multiple-choice questions that tap into higher-order cognitive skills. For example, calculating the current cash position of a company and selecting the right answer from a list of several choices may require performing a complex analysis of other financial statements. Similarly, a case study question may test a student’s ability to remember simple facts about the company described in the case. Thus, there is considerable overlap between the educational goals of case-based assessment and other, more basic forms of assessment.


Krotov, V. (2002). Case-Based Assessment: A Theoretical Framework and Practical Advice. Profeducation. Available from Amazon: https://www.amazon.com/Case-Based-Assessment-Theoretical-Framework-Practical-ebook/dp/B08BYW9X79

Krotov, V., & Silva, L. (2005). Case study research: Science or a literary genre? AMCIS 2005 Proceedings, 50.

Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York: David McKay Company.

Components of an Effective Assurance of Learning (AoL) System

By Dr. Pitzel Krotova

The Assurance of Learning (AoL) Requirement

A variety of terms have been used to describe Assurance of Learning (AoL): assessment, assessment of learning outcomes, outcomes assessment, and many others. AoL, or assessment, can be defined as a systematic, evidence-based approach for determining the degree to which students meet the stated learning outcomes and for devising changes that improve student learning.

An effective and sustainable AoL system is an important requirement of many accreditation standards, such as those of the Association to Advance Collegiate Schools of Business (AACSB) and the Southern Association of Colleges and Schools Commission on Colleges (SACSCOC). For example, Standard 5 (p. 41) of the 2020 Guiding Principles and Standards for Business Accreditation published by AACSB requires that each school:

… uses well-documented assurance of learning (AoL) processes that include direct and indirect measures for ensuring the quality of all degree programs that are deemed in scope for accreditation purposes. The results of the school’s AoL work leads to curricular and process improvements.

Similarly, Section 8 of the Principles of Accreditation: Foundations for Quality Enhancement published by SACSCOC requires that each educational institution:

…identifies, evaluates, and publishes goals and outcomes for student achievement appropriate to the institution’s mission, the nature of the students it serves, and the kinds of programs offered. The institution uses multiple measures to document student success.

Different Ways to Fail in AoL

Despite the importance of AoL for the continuous improvement of student learning and for attaining and reaffirming accreditation, this is also the area where, judging by the comments of many peer reviewers, business schools often struggle or simply fail. While each successful AoL system has characteristics that are unique to the institutional mission and the context in which the system is implemented, it can be argued that an unsuccessful AoL system usually fails in one or more of the four areas listed below:

    1. The AoL system is not designed in accordance with best practices or requirements of major accreditation agencies
    2. The AoL system is not fully aligned with the institutional mission and goals
    3. The AoL system does not produce any traceable changes that result in improvement in student learning
    4. The AoL system is not sustainable; it is quickly abandoned after one or more cycles of assessment

Components of a Successful AoL System

In order to comply with the accreditation requirements and produce specific, traceable changes that improve student learning and help the institution achieve its mission and goals, an AoL system must rely on the following components:

    • Clear, collectively developed institutional mission and goals
    • A list of Institutional Learning Outcomes (ILOs)
    • A list of Program Learning Outcomes (PLOs) for each program included in assessment
    • A list of specific, measurable Course Learning Objectives (CLOs)
    • Alignment of ILOs, PLOs, and CLOs with institutional mission and goals
    • Alignment of mission, goals, and learning outcomes with appropriate accreditation standards or industry requirements
    • An AoL committee or task force composed of representatives from various academic and nonacademic units and levels of a higher education institution (HEI)
    • Templates outlining the general approach and technicalities of assessment and reporting
    • A clear, collectively developed and mutually agreed upon assessment plan
    • Alignment of assessment planning and reporting
    • Templates for assessment maps or matrices
    • A simple yet robust system for gathering, analyzing, and distributing AoL data
    • Availability of Assessment Coordinators to carry out assessment for individual programs, departments, and colleges
    • A variety of formative and summative measures appropriate for the stated learning outcomes
    • Clear assignment of responsibilities in relation to AoL
    • Clear deadlines for various AoL events and deliverables
    • A formal approach to assessment data analysis
    • A template for assessment reports
    • Dissemination of assessment data to all the relevant stakeholders
    • Decision making, recommendations, and changes driven by assessment data
    • Continuous improvement in student learning and success based on AoL
    • Contribution to broader societal goals through improved teaching and student learning

As one can see from the list, an effective AoL system requires many components and a holistic approach based on the collaboration of all the important stakeholders in education, such as faculty, administrators, staff, students, employers, HEIs, accreditation agencies, and society as a whole.

Risk Management for Business Schools

By Dr. Vlad Krotov and Dr. Jacob Chacko

The 2020 Business Accreditation Standards published by AACSB require a business school to “maintain an ongoing risk analysis, identifying potential risks that could significantly impair its ability to fulfill the school’s mission, as well as a contingency plan for mitigating these risks.” With the recent events surrounding the COVID-19 pandemic and its impact on educational institutions around the globe, there is a growing realization among business schools and their leaders of the importance and usefulness of Risk Management in their organizations. In this article, we briefly discuss the Risk Management Process and offer simple, practical guidelines on how to identify, analyze, and mitigate risks with the help of a formal Risk Management Plan that is aligned with a broader Strategic Management Plan devised by a business school.

Simplifying Assumptions

In this article, we make a number of assumptions in relation to Risk Management (see Figure 1). We believe that these assumptions will simplify the Risk Management Process and make it more effective in mitigating the identified future risk events.

Figure 1. Risk Management Assumptions

First, we believe that Risk Management is not a “bulletproof shield” that protects a business school against all possible risks. It is rather a tool or a method that, if used effectively, can reduce the negative impact of risk on the organization. Risk Management can also be misused and turn into a vain exercise. This usually happens when the Risk Management Process is (a) based on flawed analysis that does not properly identify and analyze important risks, (b) too complex and, thus, impractical, or (c) not backed by adequate resources required for risk mitigation.

Second, we believe that Risk Management is subjective. Risk Management is much closer to art than to science; it is based on subjective reasoning and viewpoints, requires imagination for proper risk identification, and is heavily impacted by the “unknowns.” Because of that, we are strongly against a naïve, overly quantitative approach to Risk Management. We do support a formal, structured approach to risk analysis that makes use of appropriate quantitative and qualitative factors.

Third, we believe that simplicity is the most effective response to the inherent complexity and unpredictability of the environment that many business schools are operating in. Overly complex, highly structured plans are inherently “fragile” in the face of the uncertain, highly complex, and turbulent environment that many business schools increasingly find themselves in. Simple, agile plans and structures are more robust and effective in times of turbulence and uncertainty.

Risk Management Process

Risk Management can be defined as a continuous process comprising the following steps or phases (see Figure 2):

    • Analyzing strategic priorities and relevant internal and external factors
    • Identifying and defining risk-bearing events
    • Analyzing risks based on the likelihood and severity of their impact
    • Mitigating risks by devising response strategies and actions and assigning people responsible for these actions
    • Monitoring risks and periodically reporting on them to key stakeholders

Each of these steps is discussed in more detail in the sections below.

Figure 2. Risk Management Process

Analyzing Strategic Priorities and Relevant Internal and External Factors

Risk Management starts with the analysis of the current strategic priorities. As explained in Standard 1 of the 2020 Business Accreditation Standards published by AACSB, risk management is a part of a broader strategic management process and should be carried out in a way that supports a business school in attaining its strategic goals and objectives. Many of the internal and external risks can be identified by analyzing an organization’s internal strengths and weaknesses together with external opportunities and threats (the so-called SWOT analysis).

Identifying and Defining Risks

After this analysis, the organization should be able to identify and clearly describe the important risk-bearing events it is facing in relation to its internal and external environments. Examples of external risks include:

    • Growing competition for students among existing educational institutions
    • Drops in enrollment due to demographic changes
    • Deficit of resources due to worsening economic conditions

Examples of internal risks include:

    • Decreases in funding due to budgeting changes at the university level
    • Inadequate staffing
    • Turnover in leadership

A table with clear descriptions of identified risks should be the main deliverable of the risk identification and definition phase.

Analyzing Risks Based on Likelihood and Impact

While all kinds of risks can and should be identified as part of the Risk Management process, not all risks have the same estimated likelihood and potential impact. Thus, each risk should be carefully analyzed to determine (1) the likelihood of the event occurring and (2) the severity of its impact (see Figure 3).

Figure 3. Risk Categories

This categorization of risks allows one to prioritize attention and resources in relation to possible future events. Events that are very likely to occur and that could have a great impact on the organization should be treated as critical events. These events require special attention and resources to prevent their negative impact on the organization. Possible future events with moderate likelihood and moderate-to-high impact should be treated as important risk events. While still receiving adequate attention and resources, these events should, as a rule, require less of both than critical events with high likelihood and high impact. Low-likelihood events with moderate-to-high impact should require a moderate level of attention and resources. Events with moderate-to-high likelihood and low impact should be acknowledged and dealt with, but with a minimum level of resources. Finally, events with low likelihood and low impact should be discussed but probably excluded from a formal Risk Management Plan to keep it simple.
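To make the categorization above concrete, the sketch below (in Python) shows one way to turn the likelihood/impact rules into a simple, repeatable prioritization routine. The three-point scales, category labels, and example events are illustrative assumptions rather than part of the AACSB standards or a prescribed implementation, and the rules are extended slightly so that every likelihood/impact combination receives a category.

```python
# A minimal sketch of the likelihood/impact categorization described above.
# Scales, labels, and example risks are assumptions for illustration only.

LIKELIHOOD = {"low": 1, "moderate": 2, "high": 3}
IMPACT = {"low": 1, "moderate": 2, "high": 3}

def categorize(likelihood: str, impact: str) -> str:
    """Map a risk event to a priority category based on the rules above."""
    l, i = LIKELIHOOD[likelihood], IMPACT[impact]
    if l == 3 and i == 3:
        return "critical"   # high likelihood, high impact
    if l >= 2 and i >= 2:
        return "important"  # moderate-to-high likelihood, moderate-to-high impact
    if l == 1 and i >= 2:
        return "moderate"   # low likelihood, moderate-to-high impact
    if i == 1 and l >= 2:
        return "minor"      # moderate-to-high likelihood, low impact
    return "excluded"       # low likelihood, low impact: discuss, but keep out of the plan

# Hypothetical risk register entries: name -> (likelihood, impact)
risks = {
    "Drop in enrollment due to demographic changes": ("high", "high"),
    "Turnover in leadership": ("moderate", "moderate"),
    "Minor disruption to classroom scheduling": ("high", "low"),
}

for name, (likelihood, impact) in risks.items():
    print(f"{categorize(likelihood, impact):9} | {name}")
```

A routine like this is mainly useful for keeping the categorization consistent once the list of identified risks grows beyond a handful of events; the judgment about likelihood and impact, of course, remains subjective.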

Mitigating Risks by Devising Mitigation Actions and Assigning Responsibilities

After analyzing each possible risk event in terms of its likelihood and impact on the organization, possible actions for mitigating these risks should be devised. It is important to assign to each risk event a “risk owner”—a person responsible for taking the lead on these risk mitigation actions. More thought and extra planning should be put into critical and important events. Important organizational leaders should not be “overextended”; they should be assigned as “leads” only to critical and important risk events.

Monitoring Risks and Establishing Periodic Reporting to Key Stakeholders

People in charge of specific risks should be given the formal task of monitoring the internal and external environment of the business school and carrying out mitigation actions, either proactively, to protect the organization from the possible negative impact of an event, or as emergency actions designed to minimize the impact of an event that has already occurred. Without a person responsible for monitoring and mitigating a potential risk event, the organization may find itself in a situation where the event is not identified or dealt with in a timely fashion. Periodic updates by the people assigned to risk events should be sent to the dean. The dean can compile all of these reports into a formal Risk Management Plan update that is sent to all the key stakeholders quarterly, biannually, or annually, depending on the complexity and uncertainty of the environment that the school is operating in.

Risk Management Plan

The most important deliverable of the Risk Management process is a formal Risk Management Plan that is updated periodically, depending on the length of the business school’s Strategic Planning cycle. The main elements of an effective Risk Management Plan are summarized in Table 1 below.

Strategic Goal 1 – Emphasize Faculty & Staff Development

Risk Description | Importance | Risk Owner | Mitigation Actions | Reporting Timeline | Status Updates
Inferior instructional quality in online courses | Critical | Dept. Chairs, Faculty | Comprehensive faculty training, audit of online classes | On-going | All online courses have been reviewed using a standard quality rubric
Failure to attract and retain qualified faculty | Important | Dean, Dept. Chairs | Faculty development opportunities, faculty satisfaction survey | Annual | A formal business faculty development program was established in collaboration with the Faculty Development Center
Failure to maintain appropriate portfolio of qualified faculty | Important | Dept. Chairs, Assoc. Dean | Develop and maintain a faculty resource plan | Annual | A faculty resource plan has been designed in accordance with AACSB definitions
Failure to maintain AACSB accreditation | Moderate | Dean, Assoc. Dean | Ensure adherence to AACSB standards, focus on continuous improvement | On-going | Faculty sufficiency issue has been communicated to the university’s senior leaders
Table 1. Elements of a Risk Management Plan

Note that the plan contains all the outcomes or deliverables of the steps or phases of the Risk Management Process discussed above. Periodic status updates reported by the people in charge of the risk events are appended to each of the identified risks. Another important characteristic of this Risk Management Plan summary is that it is explicitly linked to Strategic Goal 1 found in the Strategic Plan of the business school.

Student Learning Outcomes Assessment

By Dr. James E. Mackin

Assessment

The assessment of student learning outcomes is probably the most fundamentally important type of assessment that an institution of higher education carries out.  By performing student learning outcomes assessments, the institution is asking:

    1. What have students learned?
    2. How can we tell what students have learned?
    3. How can we put our knowledge about what students have learned to use in providing better instruction/assessment of students?

In order to begin to address these questions, it is imperative that all of your faculty members understand how to express measurable student learning outcomes, and you will likely need to conduct numerous workshops on the topic before you can even begin to organize your student learning outcomes assessment protocols and processes.  A good starting point for the discussion is Bloom’s Taxonomy, which classifies levels of cognitive skill and, in doing so, supplies the action verbs used to write measurable student learning outcomes.

Student learning outcomes assessment will need to occur in four different contexts at the institution: institutional, general education, academic program, and individual courses.  The assessments can be direct (e.g., some percentage of students performed at a satisfactory level on a multiple-choice question on a test) or indirect (e.g., information gathered from surveys), and they can be course-embedded (e.g., a presentation in a class) or they can be external to classes (e.g., results from implementation of a standardized test).  The question is, how do we put it all together to make some sense out of what we think students are actually learning at our institution?

It is very important that all learning outcomes are embedded in the planning and assessment process.  This means that a learning outcomes assessment plan should be included in each department’s strategic plan for each program that is offered by the department.  In addition, the department’s strategic plan should have departmental learning goals and objectives that are broadly connected to the learning outcomes in their program-based learning outcomes assessment plans.

A general template for learning outcomes assessment plans is provided below.

Normally, sections 1-3 of the template will be completed only at the beginning of the life of the relevant strategic plan or upon initial establishment of a new program, while sections 4-6 will be updated on every assessment cycle.  Also, graduate programs will not use section 1, where the program learning outcomes are linked to the institutional learning outcomes.  Graduate programs are more specialized than undergraduate programs and, although we need to ensure that they are delivering on their promises, these programs generally fall outside the realm of institutional learning outcomes.

There are a variety of tools that can be used to assess student learning outcomes in a program, depending on the outcome, and these tools will be expressed in section 2 of the template.  Examples include:

    • Course-embedded rubrics;
    • Applied learning projects;
    • Standardized tests;
    • Capstone presentations;
    • Course projects;
    • Senior surveys;
    • Employer surveys;
    • Alumni surveys;
    • Jury panel surveys;
    • External examiner surveys;
    • Pre- and post-test comparisons;
    • Selected examination questions;
    • Comprehensive exams;
    • Senior theses.

In addition, student learning outcomes will be embedded in each course in an academic program, although the nature of the outcomes and the level of student achievement expected will vary from course to course.  Section 3 of the template provides space for you to compile the level of learning outcome achievement expected for each course in the program: introductory, intermediate, or mastery.  Although not all courses will address all learning outcomes, the combination of all courses in the program should give students the opportunity to achieve mastery of all of the program student learning outcomes.  Ideally, lower-level courses provide the opportunity for introductory achievement of learning outcomes, while intermediate and upper-level courses provide opportunities for intermediate and mastery achievement, respectively.
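As an illustration of the kind of coverage check that Section 3 of the template supports, the sketch below (in Python) records the achievement level expected in each course and flags any program outcome that never reaches the mastery level anywhere in the curriculum. The course codes, outcome names, and dictionary layout are hypothetical assumptions used only for illustration; they are not part of the template itself.

```python
# A minimal sketch of the curriculum-map coverage check implied by Section 3.
# Course codes and outcome names are hypothetical.

LEVELS = {"introductory": 1, "intermediate": 2, "mastery": 3}

# Curriculum map: course -> {program learning outcome: expected achievement level}
curriculum_map = {
    "BUS 101": {"Critical Thinking": "introductory", "Communication": "introductory"},
    "BUS 250": {"Critical Thinking": "intermediate", "Quantitative Reasoning": "intermediate"},
    "BUS 480": {"Critical Thinking": "mastery", "Communication": "mastery"},
}

def highest_level(outcome: str) -> int:
    """Highest achievement level the program offers for a given outcome."""
    return max(
        (LEVELS[level]
         for courses in curriculum_map.values()
         for o, level in courses.items()
         if o == outcome),
        default=0,
    )

outcomes = {o for courses in curriculum_map.values() for o in courses}
for outcome in sorted(outcomes):
    if highest_level(outcome) == LEVELS["mastery"]:
        print(f"{outcome}: OK (mastery-level coverage exists)")
    else:
        print(f"{outcome}: GAP - no course offers mastery-level achievement")
```

In this hypothetical map, for example, "Quantitative Reasoning" would be flagged because no course carries it to the mastery level, which is exactly the kind of gap the Section 3 compilation is meant to surface.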

A space for the raw data that is used in assessments is not included in the template; however, that data should be stored in easily accessible locations.  Again, as in the case of strategic planning, learning outcomes assessment typically involves massive amounts of data, and it would behoove the institution to invest in software that is specifically designed for outcomes assessments.

For undergraduate programs, every context of learning outcomes assessment should be connected to every other context.  This is the reason that the template includes the section (section 1) where program learning outcomes are related to the institutional learning outcomes.  Not all institutional learning outcomes will be addressed at all levels (introductory, intermediate, mastery) by an individual major program, but the combination of the major program, the general education program and any additional graduation requirements must address the institutional learning outcomes at the mastery level for every student.  This requirement is one reason that many institutions have chosen to address their institutional learning outcomes completely through the general education program.

Because the program learning outcomes are embedded in the strategic planning process, all program learning outcomes will be carried through the review cycle shown in Figure 1.

Figure 1. An Idealized Planning and Assessment Review Sequence.
In this sequence, the academic department plans are reviewed by a group of department chairs/heads in a school/college and then the plans are reviewed by the relevant dean, who incorporates appropriate changes into the school/college plan. By the same token, the school/college plans are reviewed by Deans Council and then by the Provost for incorporation into the Academic Affairs Plan. The institutional oversight committee (in this case, the Budget and Planning Committee) reviews all plans on an annual basis.

A faculty subset of the institution-wide committee (the “Budget and Planning Committee” depicted in Figure 1), or perhaps another committee, should then be tasked with reviewing the status of the institution with respect to the institutional student learning outcomes.  The data for this review would come from the information submitted for the program student learning outcomes assessments.

If there are additional graduation requirements that lie outside the major and general education programs, the learning outcomes assessment data for those requirements will need to be collected and analyzed. It is possible to address the student learning outcomes for the general education program using the template.  However, it is important to recognize that the general education program is a unique concept in the sense that, at most institutions, there is no single entity, like an academic department, that oversees general education.  It is for this reason that most institutions have the equivalent of a “General Education Committee” and many have a “General Education Assessment Committee” that deals with general education for the institution as a whole.  These committees consist of faculty members from across all the areas encompassed by the general education program, and they usually also include accountable administrators.  As a general rule, the General Education Committee will be the originator of the program (i.e., the committee that approves all new requirements and any changes needed in the program), while the General Education Assessment Committee will be responsible for program review.  Because the responsibilities of the two types of committees are intimately related – one creates and the other reviews – it is often desirable to combine the two into a single committee that handles both sets of responsibilities.

Reviewing the general education program and making changes (i.e., “closing the loop”) is an area where most institutions struggle, simply because of the wide variety of interests that are involved.  The situation is simplified when courses are specifically designed for the general education program, but that scenario is rare in higher education.  More often, courses serve the dual purposes of meeting academic program requirements and general education requirements.  This is one reason that you will want to tie all of your programs, including the general education program, to the institutional learning outcomes.  That strategy gives you the flexibility that you need to essentially serve all masters, while at the same time simplifying and constraining approaches to learning outcomes assessment.

Your General Education (Assessment) Committee can come in at the end of the chain in Figure 1 and specifically review the general education learning outcomes while the program and institutional learning outcomes are also being evaluated, probably by other committees.  In principle, if all of the general education course requirements are also program requirements, it should be possible to compile the assessment information for the general education program from the information that is submitted for the academic programs.  If there are courses in the general education program that are not part of any academic program, then the assessment information will need to be solicited from the relevant course instructors, and separate forms should be developed for that purpose. 

A Simple, Robust Assurance of Learning (AoL) System for a Doctor of Business Administration (DBA) Program

By Vlad Krotov, PhD

Client

A young, large private university in the Middle East undergoing initial accreditation with WASC and AACSB.  

Client Requirements

The client needed a simple and robust Assurance of Learning (AoL) system for a Doctor of Business Administration (DBA) executive doctoral program offered by a large private university in the Middle East. The system had to meet the following requirements:

a) Compliant with AACSB standards. The College of Business where the program was offered was going through initial accreditation with AACSB; therefore, the system had to meet all of the AACSB requirements in relation to AoL.

b) Simple. The system had to be simple enough that the DBA faculty could quickly understand and contribute to the continuous quality improvement program based on this AoL system. This was important to accommodate changes in the roster of faculty teaching in the DBA program as well as general changes in the policies and curriculum of this newly established DBA program.

c) Reliable. The system had to produce reliable, useful results. It was important for the system to have a “pre-test” and a “post-test” to produce meaningful results in relation to program learning goals. Also, the measurement tools had to capture both quantitative and qualitative results to inform further improvements.

Solution

As the first step, the curriculum of the DBA program was aligned with the five Program Learning Outcomes (PLOs). The results of the curriculum alignment process are provided in the Course Alignment Matrix (CAM) below (see Figure 1):

Figure 1. Course Alignment Matrix (CAM) for the DBA Program

The extent to which Doctor of Business Administration (DBA) students have mastered the learning outcomes of the program is assessed at three strategic points: the METH 1 “Introduction to Business Research” course (Assessment Point 0), RSCH 1 “Research Proposal” (Assessment Point 1), and RSCH 2 “Dissertation” (Assessment Point 2) (see Figure 2 below).

Figure 2. AoL System for the DBA Program

At each of the assessment points, the Research Evaluation Rubric is used to assess student performance. The rubric relies on two other rubrics developed by the College: the General Analytical Writing Assessment Rubric and the Oral Presentation Rubric. In the METH 1 course, the basis for assessment is the student’s “mock” dissertation proposal – an exercise in which students, based on their limited knowledge of their research domain and a high-level understanding of research methods, describe and present a rough plan for their possible future dissertation research. The assessment is done by the instructor teaching this course. This assessment point is used to establish a baseline for student knowledge and skills in relation to the program learning outcomes shortly after students join the DBA program. In RSCH 1, the basis for assessment is the actual research proposal document and presentation. Finally, in RSCH 2, the basis for assessment is the final dissertation document and the final dissertation defense. In RSCH 1 and RSCH 2, assessment is done by the dissertation committee chair. In both cases, assessment results are submitted to the Program Director for analysis. Subsequent changes in the curriculum are subject to the standard curriculum revision process implemented by the College and presided over by the College Curriculum Committee.
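For illustration, the sketch below (in Python) shows one way the Program Director could summarize rubric results per PLO across the assessment points, comparing the baseline at Assessment Point 0 with the results at Assessment Point 2. The 1–4 rubric scale, the PLO labels, and the scores are hypothetical assumptions; the actual Research Evaluation Rubric and its scale are not reproduced here.

```python
# A minimal sketch of summarizing rubric scores per PLO across assessment points.
# The rubric scale (1-4), PLO labels, and scores are hypothetical.

from statistics import mean

# scores[assessment_point][plo] = list of rubric scores for a cohort
scores = {
    "Point 0 (METH 1)": {"PLO1": [2, 2, 3], "PLO2": [1, 2, 2]},
    "Point 1 (RSCH 1)": {"PLO1": [3, 3, 3], "PLO2": [2, 3, 3]},
    "Point 2 (RSCH 2)": {"PLO1": [4, 3, 4], "PLO2": [3, 3, 4]},
}

baseline = scores["Point 0 (METH 1)"]
final = scores["Point 2 (RSCH 2)"]

for plo in baseline:
    start, end = mean(baseline[plo]), mean(final[plo])
    print(f"{plo}: baseline {start:.2f}, final {end:.2f}, gain {end - start:+.2f}")
```

A summary of this kind gives the Program Director a quick quantitative view of growth on each PLO, which can then be read alongside the qualitative comments collected with the rubrics.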

Results

With this simple, robust AoL system, the College was able to “close the loop” for the DBA program in just one year (see Figure 3 below):

Figure 3. Closing the Loop

The results of the newly designed AoL system indicate noticeable growth in the level at which doctoral students master the PLOs across semesters (see Figure 4 below). These improvements are largely a result of the recommendations for curriculum and policy changes submitted by the DBA faculty participating in the AoL process, informed by the assessment rubrics.

Figure 4. AoL Results Across Semesters

The College believes that this assessment method allows for closer monitoring of individual students with respect to their achievement of the learning goals of the program. The AoL system was received positively by the AACSB Review Team.