"Developmental education is minimally effective, at best."
- Community College Research Center / National Center for Postsecondary Research presentation
I read a posting by "Dean Dad" on the Inside Higher Ed website the day it was posted and have been pondering it since. It is appended below.
While at the Higher Learning Commission Conference in Chicago this weekend, I had a moment to discuss it over a drink with Amy Penne and Kris Young. Apparently, it has been understood by some for quite a while now that assessment testing and mandatory placement, primarily in mathematics, reading, and writing, have nothing to do with success (pass rates) in developmental courses.
"Students who simply disregarded the placement and went directly into college-level courses did just as well as students who did as they were told."
I don't think I can adequately communicate how much this disturbs me. Does this mean that students who score in the bottom 5% of the placement tests do as well as those who score in the top 5%, if they end up in the same class?
At Parkland College, we painstakingly and methodically assess each new student, provide a large collection of resources (maybe some RSA Animate or Khan Academy videos?) to help students prepare for the assessment, and counsel and advise them into a variety of programs and services, congruent with national best practices.
Yet, according to the research, nothing improves pass rates more effectively than shortening (and intensifying) the instructional experience.
A few of the responses to the findings about assessment/placement testing:
1. The effect arises because students in higher-level classes are taught on the assumption that they have forgotten everything they "learned" in the prerequisite course.
2. Many students take placement tests without understanding their purpose or high-stakes nature – they experience assessment as a “one-shot deal.”
3. "Letting students go directly into a course for which they are not prepared might improve retention and graduation rates, but that approach is likely to cause another problem. When a compassionate instructor has a lot of these students in class, he or she is likely to slow down and help them, both in and out of class. Then the class may not be able to get to all of the material in the depth it is supposed to cover. This creates an additional problem in upper division courses because the students are not ready at that point either. The problem cascades upward.
"Over time, instructors are worn down and the course is watered down. The instructor spends far more time outside of class helping these students, diverting them from their other duties such as class prep and grading or service and research (the latter being a significant problem in universities where faculty have obligations other than teaching)." - A comment from a reader on Inside Higher Ed.
4. "This finding is simply a variation on the common sense idea that having contemporaneous support outside the classroom will improve learning outcomes. Guess what's another way that can happen? By paying the people who teach these courses a living wage with benefits so that they can spend an appropriate amount of time meeting with their students outside the classroom (in private offices, not at McDonald's), giving the students the academic support and attention that each student needs." - Another IHE reader
5. "I see students everyday who cannot understand the test instructions, let alone "pass" the test. I don't see how a professor could see to their needs while also meeting the expectations of the better prepared students." - Same as above
An interesting approach:
Dean Dad's posting:
A few weeks ago I promised a piece on remedial levels. It’s a huge topic, and my own expertise is badly limited. That said...
Community colleges catch a lot of flak for teaching so many sections of remedial (the preferred term now is “developmental”) math and English. (For present purposes, I’ll sidestep the politically loaded question of whether ESL should be considered developmental.) In a perfect world, every student who gets here would have been prepared well in high school, and would arrive ready to tackle college-level work.
This is not a perfect world. And given the realities of the K-12 system, especially in low-income areas, I will not hold my breath for that.
Many four-year colleges and universities simply exclude the issue by having selective admissions. Swarthmore doesn’t worry itself overly much about developmental math; if you need a lot of help, you just don’t get in. But community colleges are open-admissions by mission; we don’t have the option to outsource the problem. We’re where the problem gets outsourced.
I was surprised, when I entered the cc world, to discover that course levels and pass rates are positively correlated; the ‘higher’ the course content, the higher the pass rate. Basic arithmetic -- the lowest level developmental math we teach -- has a lower pass rate than calculus. The same holds in English, if to a lesser degree.
At the League for Innovation conference a few weeks ago, some folks from the Community College Research Center presented some pretty compelling research that suggested several things. First, it found zero predictive validity in the placement tests that sentence students to developmental classes. Students who simply disregarded the placement and went directly into college-level courses did just as well as students who did as they were told. We’ve found something similar on my own campus. Last year, in an attempt to see if our “cut scores” were right, I asked the IR office and a math professor to see if there was a natural cliff in the placement test scores that would suggest the right levels for placing students into the various levels of developmental math. I had assumed that higher scores on the test would correlate with higher pass rates, and that the gently-slanting line would turn vertical at some discrete point. We could put the cutoff at that point, and thereby maximize the effectiveness of our program.
It didn’t work. Not only was there no discrete dropoff; there was no correlation at all between test scores and course performance. None. Zero. The placement test offered precisely zero predictive power.
Second, the CCRC found that the single strongest predictor of student success that’s actually under the college’s control -- so I’m ignoring gender and income of student, since we take all comers -- is length of sequence. The shorter the sequence, the better they do. The worst thing you can do, from a student success perspective, is to address perceived student deficits by adding more layers of remediation. If anything, you need to prune levels. Each new level provides a new ‘exit point’ -- the goal should be to minimize the exit points.
I’m excited about these findings, since they explain a few things and suggest an actual path for action.
Proprietary U did almost no remediation, despite recruiting a student body broadly comparable to a typical community college. At the time, I recall regarding that policy decision pretty cynically, especially since I had to teach some of those first semester students. Yet despite bringing in students who were palpably unprepared, it managed a graduation rate far higher than the nearby community colleges.
I’m beginning to think they were onto something.
This week I saw a webinar by Complete College America that made many of the same points, but that suggested a “co-requisite” strategy for developmental. In other words, it suggested having students take developmental English alongside English 101, and using the developmental class to address issues in 101 as they arise. It would require reconceiving the developmental classes as something closer to self-paced troubleshooting, but that may not be a bad thing. At least that way students will perceive a need for the material as they encounter it. It’s much easier to get student buy-in when the problem to solve is immediate. In a sense, it’s a variation on the ‘immersion’ approach to learning a language. You don’t learn a language by studying it in small chunks for a few hours a week. You learn a language by swimming in it. If the students need to learn math, let them swim in it; when they have what they need, let them get out of the pool.
I’ve had too many conversations with students who’ve told me earnestly that they don’t want to spend money and time on courses that “don’t count.” If they go in with a bad attitude, uninspired performance shouldn’t be surprising. Yes, extraordinary teacherly charisma can help, but I can’t scale that. Curricular change can scale.
This may seem pretty inside-baseball, but from the perspective of someone who’s tired of beating his head against the wall trying to improve student success rates without lowering standards, these findings offer real hope. It may be that the issue isn’t that we’re doing developmental wrong; the issue is that we’re doing it at all.
There’s real risk in moving away from an established pattern of doing things. As Galbraith noted fifty years ago, if you fail with the conventional approach, nobody holds it against you; if you fail with something novel, you’re considered an idiot. The “add-yet-another-level” model of developmental ed is well-established, with a legible logic of its own. But the failures of the existing model are just inexcusable. Assuming three levels of remediation with fifty percent pass rates at each -- which is pretty close to what we have -- only about 13 percent of the students who start at the lowest level will ever even reach the 101 level. An 87 percent dropout rate suggests that the argument for trying something different is pretty strong.
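To make that arithmetic concrete (this illustration is mine, not part of Dean Dad's post): with sequential levels, the share of entering students who ever reach the 101 level is simply the product of the per-level pass rates. A minimal Python sketch, assuming the three levels and 50 percent pass rates he cites, with a function name I made up:

    # Illustrative only: the multiplication behind the "about 13 percent" figure.
    def share_reaching_101(pass_rates):
        """Fraction of entering students who clear every developmental level."""
        share = 1.0
        for rate in pass_rates:
            share *= rate  # each level is another exit point
        return share

    levels = [0.5, 0.5, 0.5]  # three levels, 50 percent pass rate at each
    reach = share_reaching_101(levels)
    print(f"Reach the 101 level: {reach:.1%}")      # 12.5%, i.e. "about 13 percent"
    print(f"Lost along the way:  {1 - reach:.1%}")  # 87.5%

Cutting a level out of the sequence raises that product directly, which is the sense in which fewer exit points translates into more students ever reaching 101.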
Wise and worldly readers, have you had experience with compressing or eliminating developmental levels? If so, did it work?
This is the part of the show where you talk. See all that empty space below? That is the space where you tell me (and everyone else) what you think.