The primary purpose of preparing and submitting annual assessment reports is to demonstrate how departments and schools evaluate the success of their academic programs in achieving Program Student Learning Outcomes (PSLOs) for both degree and certificate offerings. These reports provide a structured, systematic approach to gathering data, analyzing findings, and making informed improvements to the curriculum, thereby supporting high-quality education and fostering student success.
Program-level Student Learning Outcomes (PSLOs) specify the knowledge, skills, abilities, and attitudes that students should demonstrate upon completing a degree or certificate program. These outcomes guide the development of curriculum, instructional approaches, assessment methods, and program evaluations, making PSLOs essential for faculty, staff, and both current and prospective students.
For new programs, clearly defined, measurable PSLOs ensure students are prepared for academic and professional success.
For established programs, regular review of PSLOs maintains their relevance and supports the program's ongoing role in preparing students for success in their fields.
Requirements of PSLOs:
PSLOs for Specialized Accrediting Agencies
If your program is specially accredited (e.g., Education, Journalism, Engineering), you do not need to articulate PSLOs, as those are already prescribed by your accrediting agencies. Specialized accrediting agencies, such as CAEP, ABET, AACSB, and others, often use slightly different terminology for what is essentially the same concept as Program Student Learning Outcomes (PSLOs). These terms typically reflect their specific disciplinary focus or assessment frameworks.
Here are some of the commonly used terms across different specialized accreditors that correspond to PSLOs:
Determine the primary areas of learning that your degree or certificate program will emphasize. These may include:
Use action verbs in Bloom’s Taxonomy to describe learning outcomes. Below are examples:
Performance indicators enhance the clarity and measurability of a PSLO by specifying what students will actually do to demonstrate learning (e.g., analyze, design, justify). Without them, PSLO statements can remain vague or abstract. Performance indicators turn a PSLO from a good intention into a clear, teachable, and assessable expectation.
Each outcome should be S.M.A.R.T.:
Knowledge-based PSLO: Upon program completion, students will be able to explain key theories and principles of [specific subject] and apply them in [specific context]. Specifically, students will:
Skill-based PSLO: Students will demonstrate the ability to [perform a specific task or process] using appropriate tools and techniques in the field. Specifically, students will:
Ability-based PSLO: Students will synthesize research findings to produce a well-organized, original analysis on [specific topic]. Specifically, students will:
Attitude-based PSLO: Students will uphold professional ethics by adhering to [ethical standards or practices] in real-world situations. Specifically, students will:
Graduates will demonstrate a thorough understanding of statistical analysis, machine learning algorithms, and data visualization techniques. Specifically, students will:
Graduates will apply advanced data analysis methods using tools (e.g., Python, SPSS) to analyze complex datasets and derive actionable insights. Specifically, students will:
Graduates will design, implement, and present data-driven solutions to real-world challenges, effectively communicating their findings to both technical and non-technical audiences. Specifically, students will:
Graduates will demonstrate ethical decision-making and data privacy awareness in handling and analyzing sensitive information. Specifically, students will:
PSLOs should encompass a wide range of learning domains, from foundational knowledge to advanced skills and professional growth. This may include:
Once the PSLOs are articulated, consult with faculty and other stakeholders (e.g., industry experts, discipline-specific accrediting bodies if applicable, and student representatives) to ensure they meet the needs of all parties and are clear, realistic, and measurable.
Once finalized, incorporate the PSLOs into the program structure, course syllabi, and assessment plans. Communicate them clearly to students, faculty, and staff to ensure alignment and transparency.
Regularly review and update the PSLOs as the program develops, taking into account student performance, stakeholder feedback, and changes in the field. Incorporate mechanisms for evaluating and revising PSLOs into the program’s assessment cycle.
The Rubric for Reviewing Program-Level Student Learning Outcomes (PSLOs) will be used to provide feedback for both new and established degree and certificate programs.
This step requires programs to identify where each PSLO is addressed within required courses and other curriculum components, such as internships, licensure exams, and culminating experiences (e.g., comprehensive exams, theses, dissertations). Programs must also specify the assessments used to measure students’ achievement of the PSLOs. Documentation of assessment methods is required for all programs, both new and established.
A curriculum map is a visual tool that shows how course-level learning outcomes align with program-level and, sometimes, institutional learning outcomes, indicating where students are introduced to key skills, where those skills are reinforced, and where mastery is expected. Ideally, assessment should be conducted at each stage. Its importance lies in ensuring alignment across the curriculum, identifying gaps or redundancies, supporting valid and sustainable assessment processes, strengthening coherence in student learning, fostering productive faculty communication, and providing clear evidence for continuous improvement.
Programs are encouraged to develop a basic curriculum map in which required courses are mapped to one or more Program-Level Student Learning Outcomes (PSLOs), indicating the degree to which each course addresses specific aspects of the PSLOs. For example, some courses are designated as introducing a PSLO, while others reinforce that PSLO after it has been introduced in an earlier course.
Alternatively, programs may create a more comprehensive curriculum map, illustrating how the content of required courses aligns with Program Student Learning Outcomes (PSLOs), the instructional activities employed, and, most importantly, the assessments used to measure student learning.
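A basic curriculum map of the kind described above can be represented as a simple table or data structure. The sketch below is purely illustrative (the course numbers, PSLO labels, and I/R/M levels are hypothetical, not drawn from any actual program) and shows how such a map can also be scanned for gaps, such as a PSLO that is never brought to mastery:

```python
# Hypothetical curriculum map: each required course is mapped to the PSLOs it
# addresses, marked as Introduced (I), Reinforced (R), or Mastered (M).
curriculum_map = {
    "COMM 101": {"PSLO 1": "I", "PSLO 2": "I"},
    "COMM 250": {"PSLO 1": "R", "PSLO 3": "I"},
    "COMM 480": {"PSLO 1": "M", "PSLO 2": "M", "PSLO 3": "R"},  # capstone
}

# Collect every PSLO that appears in the map, and those reaching mastery.
all_pslos = {p for levels in curriculum_map.values() for p in levels}
mastered = {p for levels in curriculum_map.values()
            for p, lvl in levels.items() if lvl == "M"}

# PSLOs never brought to mastery indicate a possible curricular gap.
gaps = sorted(all_pslos - mastered)
print(gaps)
```

In this hypothetical map, "PSLO 3" is introduced and reinforced but never mastered, which is exactly the kind of gap a curriculum map is meant to surface.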
Assessment methods describe how each student learning outcome is measured. There are two types of assessment measures: direct and indirect.
1) Direct Measures AND Number of Students Assessed: This section requires a description of the direct measure(s) used to assess students’ progress toward achievement of each learning outcome. Direct measures require students to demonstrate acquired knowledge and skills. Below are examples.
*These measures are suitable for assessing graduate-level learning outcomes. Assessment of graduate learning outcomes should focus on formative assessments that prepare students for culminating experiences (e.g., comprehensive or general examinations, thesis and thesis defense, dissertation and dissertation defense).
Each student learning outcome should be assessed using a direct measure, as that is the only way to determine the extent to which students can demonstrate the knowledge, skills, and abilities they have acquired in a course or program. Please remember to indicate the total number of students assessed using a given measure.
2) Indirect Measures AND Number of Students Assessed: Indirect measures are commonly used to seek student opinions regarding knowledge and skills acquired in the program. Findings from indirect measures should be used to augment those of direct measures. Examples include surveys, focus groups and interviews. Please include the total number of students participating in each activity.
3) Performance Target (a.k.a. Criteria for Success): This refers to the desired level of performance faculty want to see, based on a measure or method of assessment, that represents success at achieving a given student learning outcome.
Additional Guidelines/Recommendations regarding assessment measures:
Examples of well documented Assessment Methods or Measures:
Direct Measure(s) AND Number of Students Assessed:
Assessment of senior Capstone research papers (N=30) by a faculty panel using a locally constructed 5-point rating scale where 5=“excellent,” 4=“good,” and 3=“satisfactory.” 85% should score satisfactory or better, and the mean score should reflect a better than satisfactory performance for the graduating class.
(Undergraduate – Dept. of Communication)
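Once panel ratings are collected, a performance target like the one above can be checked mechanically. The sketch below is a minimal illustration, assuming ratings are recorded on the 5-point scale described (the sample ratings are hypothetical, not actual program data):

```python
# Hypothetical sketch: check the capstone criterion (85% of papers rated
# satisfactory or better, and a mean score above satisfactory).
def capstone_target_met(ratings, satisfactory=3, pct_target=0.85):
    """ratings: panel ratings on the 5-point scale (5=excellent ... 3=satisfactory)."""
    n = len(ratings)
    pct_satisfactory = sum(1 for r in ratings if r >= satisfactory) / n
    mean_score = sum(ratings) / n
    return pct_satisfactory >= pct_target and mean_score > satisfactory

sample = [5, 4, 4, 3, 5, 4, 3, 4, 5, 4]  # illustrative ratings for 10 papers
print(capstone_target_met(sample))
```

Reporting the computed percentage and mean alongside the pass/fail judgment makes it easier for faculty to see how close the cohort came to the target.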
Indirect Measure(s) AND Number of Students Completing Surveys:
Graduating seniors (N=30) will be surveyed to gather their perceptions on basic knowledge and background in the communication discipline using a 5-point Likert scale, where 1=very well, 2=fairly well, 3=somewhat well, 4=not very well, and 5=not at all. 80% of respondents will rate their basic knowledge and background in the communication discipline at “very well” or “fairly well.”
(Undergraduate – Dept. of Communication)*
______________________________
*Statement was revised for accuracy.
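The indirect-measure target above can be checked the same way. This is a minimal sketch assuming the reversed scale described (1=very well through 5=not at all), with hypothetical responses rather than actual survey data:

```python
# Hypothetical sketch: check whether 80% of survey respondents rated
# themselves "very well" (1) or "fairly well" (2) on the reversed scale.
def survey_target_met(responses, pct_target=0.80):
    favorable = sum(1 for r in responses if r in (1, 2))
    return favorable / len(responses) >= pct_target

responses = [1, 2, 2, 1, 3, 2, 1, 1, 2, 4]  # illustrative responses from 10 seniors
print(survey_target_met(responses))
```

Note that because the scale is reversed (lower numbers are more favorable), "favorable" responses here are 1s and 2s; a mean score would likewise need to be interpreted with the reversal in mind.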
This step focuses on documentation of results of the analysis of assessment data to indicate how students actually performed in each learning outcome based upon the assessment methods faculty selected for each outcome. The following key questions can be used to guide analysis of data:
It is very important to analyze assessment results in order to learn whether or not the criteria for the student learning outcomes were met. Analysis of data may provide important information regarding the relationship between assessment outcomes and relevant program indicators, such as course grades. Further, department faculty may be able to determine the extent to which students change over time and/or whether students meet specified program expectations.
Analyzing data should include organizing, synthesizing, interrelating, comparing, and presenting the assessment results. These processes should be based on the nature of the assessment questions asked, the types of data that are available, and the needs of the faculty, students, and the broader university community, including stakeholders. Since the outcome of data analysis lends itself to multiple interpretations, it is often critical to review the analyzed data in conjunction with others, as this leads to greater understanding through different perspectives.
Data can be compared to results from previous assessments, baseline data, existing criteria/standards, etc. For instance, department faculty may be interested in finding out if their majors learned or developed more as a result of participating in a course or program than students who did not participate.
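One simple form of the comparison described above is to contrast a current cohort's mean rubric score with a baseline from a previous assessment cycle. The sketch below is illustrative only; the cycle labels and ratings are hypothetical:

```python
# Hypothetical sketch: compare a cohort's mean rubric score against a
# baseline from a previous assessment cycle (positive = improvement).
def improvement_over_baseline(current, baseline):
    mean = lambda xs: sum(xs) / len(xs)
    return mean(current) - mean(baseline)

baseline_cycle = [3, 3, 4, 2, 3]  # illustrative prior-cycle ratings
current_cycle = [4, 3, 4, 4, 3]   # illustrative current-cycle ratings
print(round(improvement_over_baseline(current_cycle, baseline_cycle), 2))
```

A difference in means is only a starting point; with small cohorts, faculty should interpret changes cautiously and look at the distribution of scores, not just the average.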
If a program has no student enrollments during the assessment cycle (because it is new and/or inactive), it should explicitly state the following in the “Assessment Results” section: “No students were enrolled during this assessment cycle; therefore, no assessment data were collected.”
The express purpose of assessment is to continuously improve student learning. In light of this, it is very important that assessment results are analyzed, interpreted, reflected upon, and, most importantly, used by faculty to make programmatic changes in the context of continuous improvement. The assessment process cannot be complete without “closing the loop,” that is, using assessment results for program change and improvement; this is the most important part of the assessment process.
The key questions regarding this step: How do faculty intend to enhance student learning based on results of student performance? Where are students performing well (as expected)? Where are they falling short? Which areas of the curriculum should be emphasized more?
Assessment results can be used in a variety of ways including, but not limited to the following:
If a program has no enrollments (because it is new and/or inactive) and therefore no assessment data, it should state the following in the “Use of Assessment Results” section: “No students were enrolled during this assessment cycle; therefore, no assessment data were available to inform program improvements.”