Student Experience Survey

The Teaching Evaluation Working Group (TEWG) has been working since Fall 2019 on designing a new system for evaluating teaching at OU. The goal is to create a culture that encourages the development of teaching skills and rewards the use of evidence-based, effective teaching practices. As part of this effort, we have developed a new student survey, the Student Experience Survey (SES).

The Purposes of the Annual Evaluation of Teaching

  1. To provide actionable feedback to instructors to help them understand and improve the effectiveness of their teaching methods, materials, and activities.
  2. To assess instructors for purposes of evaluation, including but not limited to tenure, promotion, annual faculty evaluations, teaching awards, continuing a teaching contract, etc.

The Student Experience Survey

Feedback from students is crucial to the development and continual improvement of pedagogy, materials, and content in a course. TEWG has created a new kind of mechanism for gathering student feedback, called the Student Experience Survey, or SES. The SES gives students an avenue to articulate their experience in a class to faculty, including their impressions of what went well and what did not.

The SES was designed with both purposes of the annual evaluation of teaching in mind, as well as to give students a structured way to reflect on their experiences in the class and offer feedback to faculty.

Instructor FAQ

Can I see the SES questions in advance?

Yes. When you log in to https://ses.ou.edu and use the dropdown menu next to your name, you will find a link allowing you to download the SES instrument questions.

When will the SES be used?

The SES will be the end-of-course survey for all Norman Campus courses beginning in Spring 2022.

Problems identified with the previously used Student Teaching Evaluation (STE) in annual faculty evaluations of teaching

Based on evidence gathered in articles such as (link) and (link):

  1. STE data provide little in the way of helpful, actionable feedback to instructors.
  2. Extensive data show that STE ratings are subject to many kinds of bias.
  3. Numerical scores are subject to widespread misuse.
  4. Traditional STEs are questionable at best as a measure of student learning.

How the SES design supports these purposes

Many of the questions are specific and ask about aspects of the course that instructors have direct control over. Students are given the opportunity to respond in text boxes in even greater detail in each section of the survey, if they so choose. (Purpose 1)

Questions are designed primarily to ask students about their experiences in class, rather than to assess the quality of the instructor or instruction. This leaves it to those with expertise in the subject area and in pedagogy to draw inferences about teaching effectiveness and quality, where appropriate. (Purposes 1 & 2)

Open text boxes give students the opportunity to offer more detailed feedback. These text boxes are framed by the preceding questions and the text box prompts so as to maximize relevant and useful responses. (Purpose 2)

The TEWG believes that student feedback is important and should play an indirect role in administrative assessment of teaching. Student feedback is an element of such assessment, but we encourage departments and administrators to find ways of taking such feedback into account that do not treat it as a direct assessment of teaching in and of itself.

We will be working with pilot units to suggest ways to do this in the context of their systems for evaluating teaching. (Purpose 2)

We also think that whatever instruments are used by administration for evaluation purposes should be as free from the effects of implicit (and explicit) bias as possible. The questions in the SES were crafted with concerns about implicit bias at the forefront, based on up-to-date research on the prevalence and mitigation of such bias. (Purpose 2)

Models and motivations

We used insights from work and discussions at other institutions or organizations, including:

  1. University of Oregon  https://provost.uoregon.edu/revising-uos-teaching-evaluations
  2. Teaching Quality Framework at the University of Colorado and other affiliates   https://www.colorado.edu/teaching-quality-framework/about-tqf
  3. AAU report “Aligning Practice to Policies: Changing the Culture to Recognize and Reward Teaching at Research Universities”  https://doi.org/10.1187/cbe.17-02-0032
  4. National Academies: National Dialog on Transforming STEM Teaching Evaluation in Higher Education  https://www.nationalacademies.org/event/01-14-2021/national-dialogue-on-transforming-stem-teaching-evaluation-in-higher-education
  5. The TEWG collected survey data on how faculty responded to the old STE system. (We plan to collect comparable data on faculty responses to the new SES.)

Evidence that we are on the right track

Two articles that appeared in early 2021 clearly articulate the reasons to change how we evaluate teaching and make recommendations similar to the actions TEWG has undertaken (citations below).

I. A recent meta-analysis of over 100 studies of bias in student evaluations of teaching produced a list of suggestions for how to use student input fairly and constructively. The OU Student Experience Survey aligns strongly with each of these suggestions.

  1. Contextualize evaluations as perceptions of student learning, not as a measure of actual teaching. Although they are commonly called “student evaluations of teaching,” SETs do not actually evaluate teaching. Instead, student evaluations represent a student’s perception or experiences in a course (Linse, 2017; Abrami, 2001; Arreola, 2004). Students should not, and arguably cannot, evaluate teaching. A more accurate name for these surveys would be student experience questionnaires or student perceptions of learning. When properly contextualized as feedback on experience, rather than evaluating teaching, these assessments can provide useful feedback for faculty and administrators.
  2. Be proactive about increasing the validity of the assessment by improving response rates. Administrators should be cautious with assessments based on too few responses.
  3. Administrators should interpret the results of student ratings with caution. Student evaluations are not designed to be used as a comparative metric across faculty (Franklin, 2001); rather, their purpose is to gather information about how students perceived a faculty member teaching a certain course.
  4. Restrict or eliminate the use of qualitative comments. Across all the studies in our sample, the clearest evidence of gender bias is in qualitative comments… Instead of asking for general “comments,” assessments should direct students to provide feedback on certain experiences with the course, as this may reduce irrelevant and mean comments.
  5. Administrators must not rely on student evaluations as the sole method of assessing teaching.
  6. Produce more research on interventions to reduce bias. While there are dozens of articles establishing bias in student evaluations of teaching, there are very few that test interventions to mitigate it. What little research exists yields some promising leads. For instance, reducing the size of the scale can mitigate gender bias (Rivera & Tilcsik, 2019). Another study that uses a randomized control trial finds that making students aware of biases can mitigate the gender gap in SETs (Peterson et al., 2019), though the evidence here is somewhat contradictory (Key & Ardoin, 2019), and anecdotally may induce a backlash effect.

II.  A recent article summarizing the literature on STEs from 1990-2020 had the following abstract:

This paper analyses the current research regarding student evaluations of courses and teaching. The article argues that student evaluations are influenced by racist, sexist and homophobic prejudices, and are biased against discipline and subject area. This paper’s findings are relevant to policymakers and academics as student evaluations are undertaken in over 16,000 higher education institutions at the end of each teaching period. The article’s purpose is to demonstrate to the higher education sector that the data informing student surveys is flawed and prejudiced against those being assessed. Evaluations have been shown to be heavily influenced by student demographics, the teaching academic’s culture and identity, and other aspects not associated with course quality or teaching effectiveness. Evaluations also include increasingly abusive comments which are mostly directed towards women and those from marginalised groups, and subsequently make student surveys a growing cause of stress and anxiety for these academics. Yet, student evaluations are used as a measure of performance and play a role in hiring, firing and promotional decisions. Student evaluations are openly prejudiced against the sector’s most underrepresented academics and they contribute to further marginalising the same groups universities declare to protect, value and are aiming to increase in their workforces.


In Summer 2020, the TEWG ran a pilot project of the SES and used the feedback from both instructors and students to develop the next draft of the SES. The next pilot phase began in Spring 2021. During this pilot, 18 departments across campus opted to use the SES as their primary end-of-course survey. We gathered survey information from both students and instructors in the pilot courses. We also solicited input from other stakeholders, including the Office of Assessment, the Vice President for Diversity, Equity, and Inclusion, SGA member groups, and graduate students. The two pilot phases have led to the development of the current version of the SES.

 
Where can I get help understanding my SES feedback?

  1. The Center for Faculty Excellence has developed a document to help instructors understand and process their feedback.

    Power of Their Words - Faculty
    Power of Their Words - GTAs

  2. We recommend that you talk with your departmental and disciplinary mentors/advisors/colleagues about your feedback to gain discipline-specific insight into your SES responses. 

  3. In the future, the Graduate College will also be offering a series of workshops focused on teaching that may include reading, responding to, and integrating student feedback into your future teaching.

  4. Dr. Hong Lin, Senior Faculty Development Specialist at the CFE, is also available to meet with faculty to provide additional support.

How can I encourage students to complete the SES?

  1. Explaining how you use the SES feedback and why it is valuable can help motivate students.

  2. Giving them dedicated time in class to complete it encourages the highest participation rates. 

  3. You may consider giving a small amount of extra credit if a certain percentage of your students fill out the SES, e.g., “If 80% complete the SES, everyone gets 5 extra points.”  You can track the response rate during the open period at ses.ou.edu.

Which instructors are included in the survey?

All instructors who are officially listed as instructor-of-record in Banner and have a percentage of responsibility greater than 0% will be included in the survey.

What can I do to help my department use the SES well in teaching evaluations?

  1. Ask your chair/Committee A how they will be incorporating information from the SES into instructor teaching evaluations. If they do not currently have a rubric for doing so, ask if there is a plan to develop one. There will soon be examples of such rubrics posted on the SES website to give your department a place to start the discussion.

  2. Invite TEWG to present at a faculty meeting.

Why doesn't the SES include general questions such as an overall rating of the instructor?

Evidence shows that responses to general questions like those are not indicative of good teaching practices and are prone to bias. In other words, they don't reflect anything about your actual teaching effectiveness, and they reward and punish instructors for traits that have nothing to do with how good a teacher they are, which is unfair. It is true that having such questions on a traditional survey provides a convenient and concise numerical result to put on job applications, but if it does not actually measure what it claims to, then it is misleading at best and a vehicle for bias at worst. We encourage you to reflect upon your SES feedback and include information in your job applications about how you used the feedback to improve your teaching.

How do I present SES feedback when applying for positions at institutions that expect traditional survey scores?

  1. Schools and departments that are only familiar with traditional STEs may not immediately understand the different kind of student feedback obtained by the SES. Here is a statement that you can include with your teaching materials that explains what OU is doing and why.

    The University of Oklahoma Norman campus collects students' feedback and perspectives about their course experiences using the Student Experience Survey (SES). The SES is not a traditional Student Teaching Evaluation (STE) and does not include quantitative evaluative metrics. Instead, the SES provides distributions of student responses to a wide range of questions, along with qualitative feedback that instructors can use to reflect upon, evaluate, and adjust their teaching methods and course materials. This change reflects a growing consensus, based on a wide range of empirical results, that traditional, quantitative STEs are prone to bias and misuse.

  2. While OU is proud to be on the leading edge of the change to more meaningful student surveys, we are far from the only university making it. Departments will be receiving materials from more and more applicants who provide statements of experience and evidence of effectiveness rather than survey summaries alone. If you articulate your experience and how you decide whether your course is working well, you will have a strong teaching component to your application.

  3. You may wish to consult with an experienced faculty member or a recent successful job candidate in your field about how to use the SES feedback most effectively to indicate the quality of your teaching. 

  4. In the future, the Graduate College will also be offering a series of workshops focused on teaching that may involve information about creating a teaching portfolio for job applications.


Are there rubrics for incorporating the SES into annual teaching evaluations?

  1. There will soon be examples of such rubrics posted on the SES website to give your department a place to start the discussion.

  2. TEWG is planning to hold some workshops for units in the fall to discuss their annual teaching evaluation rubrics and how SES information and faculty self-reflection can be incorporated.

  3. TEWG recommends that departments not use the SES directly in annual evaluations, but instead evaluate faculty members based on their incorporation of SES feedback (and other input) toward improving their course.  Of course, other factors will also contribute to teaching evaluation.


Student FAQ

Why should I complete the SES?

Feedback from students is vital: it shows instructors how their course was experienced by its intended audience. The SES is designed to solicit information from students that will help instructors become better teachers, revealing what worked well in a particular class and what might need adjustment. It also helps the university identify and reward excellent teaching, and recognize when instructors are in need of guidance and training. In short, it's an opportunity for students to have a positive impact on the learning environment for themselves and their peers, and on the value of their OU degree.