In spite of the onslaught of pressure to infuse all course outlines of record with student learning outcomes (SLOs), here is one perspective on why this may not be the most effective use of SLOs for promoting better learning. Intrinsic to this perspective is the systemwide uncertainty about what we mean when we use the term "student learning outcome."
From college to college there seems to be considerable confusion about what SLOs are, particularly given the seemingly identical, long-standing interpretation of behavioral learning objectives. (While the Internet is not the be-all and end-all of knowledge and wisdom, try Googling both and see what you get.) In general, most of us who survived teacher education learned that behavioral learning objectives, or "objectives" as the term is used in Title 5, describe what the student must eventually be capable of demonstrating through some form of evaluation. Most of us seem to agree that the utility of SLOs is to assess and improve learning. But focusing on the objective and its evaluation as the sole means of determining classroom learning effectiveness is somewhat like studying the moon to determine how the sun works. Studying the moon shows us only how light reflects; it does not give us the entire picture of how the sun creates light.
To illustrate, in my discipline I recently witnessed an aircraft being taxied out of a grassy area with a damaged rudder. My students must be able to properly restore that rudder to original, or equal to original, condition, and I evaluate their capacity to do this via an oral interview and written and practical assessments. The purpose of the objective is to define what the student must be capable of; it is not intended to determine to what degree learning occurred within any given learning experience. The bottom line is that the rudder must be fixed correctly or people die. This is the standard I evaluate before issuing a grade.
But if I want to better understand what learning actually occurred in the classroom, I will need to study many variables beyond the discipline objectives. If I want to learn how to do better, and to do so in a legitimately scientific way, I will need a hypothesis that considers far more than the learning objective. That hypothesis will need to account for factors such as how well prepared the students were coming into the learning environment and which facets of the learning environment itself were conducive to learning. The list is fairly lengthy, but if I really want to validate what I am doing and discover what improvements are still needed to promote more effective learning, this kind of research is necessary.
The focus of this research is very different from that of the evaluation I use to ensure that my students won't kill someone by improperly repairing the rudder. For a learning effectiveness assessment to be of any use, it must include many variables specific to each instance of course delivery. Obviously this information does not belong in the mandated course outline of record, precisely because it is so variable.
The course outline of record is intended to be a contract, and it needs to remain at least somewhat static to properly fulfill its role in compliance and articulation. It is worth noting that many of the learning variables that would be likely candidates for assessment aimed at improvement are fairly common across many courses. One could argue that a department or division assessment guide, coupled with carefully developed hypotheses specific to the current learning intent, environment, and student cohort, could be an effective way to assess and validate what learning occurred in a specific instance and what could be done to promote better student success in the future.
So if we really want to be scientific in our approach to improving learning, and in our documentation of that journey, integrating these hypotheses into the COR probably distracts from good research more than it helps. Because of the role the COR plays in determining compliance, it creates confusion rather than clarity, it limits our ability to adapt our assessments to current conditions, and it tends to promote a culture of enforcement rather than one of genuine interest in learning.
Given the great variety in what the term "SLO" represents, I can't, in good conscience, declare that the hypotheses described above should be called "student learning outcomes." But if SLOs are supposed to be about improving student learning, there certainly seems to be a very intimate parallel between the two. Nor would it be fair to say that student mastery evaluation cannot be one of the assessment pieces used to determine learning effectiveness. However, the intent and scope of the two assessment goals are in no way interchangeable, even though they can and often do overlap.
Now, having the time to accomplish all of this while also maintaining all of my lab equipment, keeping the building clean, preparing lessons, participating in governance, remaining accredited, and a few more "ad nauseums" is fodder for an entirely different article.