Placement Tests as Predictors of Student Success

Jon Tysse
Analyst, Institutional Research
Lewis & Clark Community College

In light of Lewis & Clark’s intention to move to a mixed-method approach for placing incoming students, the IR office began looking for ways to assess the different tests used to place students and how well those tests predicted success. Our full report examines both Compass and ACT scores as predictors of student success in developmental and college-level courses (high school GPA was unavailable). For each course studied, we entered each student’s final grade, Compass test score, and ACT test score into a regression model to estimate the impact of the placement test score on the student’s final grade in the course. To the best of our ability, we limited each model to students who had tested directly into that course level; as a result, some classes we attempted to include in the report were excluded due to low numbers (Math-16A, Math-137, Math-142, Math-145, and Engl-125). For the report we developed a chart to display these relationships. Below is an actual chart, representative of those you will find in the full report. The blue line represents students’ placement test scores, the green line represents those students’ grades in the course, and the vertical axis indicates placement score level.
  
[Table 1: Compass algebra test scores (blue) and final grades (green) for Math-11A students, with regression results — Lewis & Clark Community College, Office of Institutional Research]


Table 1 shows the results of a regression model with students’ final grades in Math-11A as the dependent variable and their Compass algebra test scores as the independent variable. The mean grade was 2.08 (“C”) and the mean Compass algebra test score was 18.94. The regression shows a weak positive correlation (r = .274, N = 227) between Compass algebra test scores and final grades, meaning that final grades rise only slightly as Compass algebra scores increase. The model shows that the Compass algebra test explains about 7 percent (r² = .075) of the variance in students’ final grades. Given how little variance is explained and how weak the correlation is, the Compass algebra test is a weak predictor of student success in Math-11A. The results of this study led Lewis & Clark to move to a mixed-method approach to student placement, one that prioritizes high school GPA and ACT test scores and reserves traditional placement tests as a last resort.
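For readers who want to see how such a model is computed, here is a minimal sketch in Python. The scores and grades below are invented placeholders, not Lewis & Clark data; the structure simply mirrors the model described above, with the final grade as the dependent variable and the Compass score as the independent variable.

    import numpy as np
    from scipy import stats

    # Placeholder data (one entry per student): Compass algebra score and
    # final grade on a 4.0 scale. The report's actual sample was N = 227.
    compass = np.array([14, 22, 18, 31, 25, 17, 20, 28, 16, 23], dtype=float)
    grades = np.array([1.0, 3.0, 2.0, 3.0, 2.0, 1.0, 2.0, 4.0, 2.0, 3.0])

    # Simple linear regression: final grade regressed on Compass score.
    result = stats.linregress(compass, grades)

    print(f"r   = {result.rvalue:.3f}")       # correlation (report: r = .274)
    print(f"r^2 = {result.rvalue ** 2:.3f}")  # variance explained (report: r-squared = .075)
    print(f"p   = {result.pvalue:.3f}")       # significance, which one commenter asks about below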

 

Source: Informer (CMP scores)

 
 

Comments
Bob posted on 3/12/2014 4:41 PM
A nice clean graph, but I'm a little confused as to the scale for the horizontal axis. Also, if the green line represents students' grades, then maybe the right vertical axis should display that scale.
Terry posted on 3/12/2014 4:44 PM
Highly technical material is always more easily depicted visually.
Tim posted on 3/12/2014 4:52 PM
I can see what Bob is saying, and this is a slightly more advanced graphic illustration than I've typically seen for these. However, if the audience is used to this type of data and comfortable with the information, this is a clean representation of data.
Jeff posted on 3/12/2014 5:50 PM
I agree that it seems the right vertical axis is needed. Perhaps this would be better understood if the full report were being read.

The full report (see link in the text) is actually quite good, and demonstrates that a graph in isolation may not be nearly as valuable as a graph within a set of related graphs that tell a more complete story.
Ghenet posted on 3/12/2014 6:24 PM
The choice of graph is good. As a viewer, to make quick sense of the visual depiction and see how stringent your regression analysis was, I would have liked to know the p value. Dots would have made the graph less busy, since each student is represented by a vertex of the zigzag anyway. The description is a little confusing in a couple of ways: the placement of the independent and dependent variables is not easy to understand, and the subtitle in the graph uses the term "over," which does not make clear what it describes. I would move the description under the graph ("Each point …") to the bottom, with the description of the table. Since you have a detailed description of the model at the bottom, you can get rid of the model summary and descriptive statistics tables.
Betsy posted on 3/13/2014 10:04 AM
The graph actually makes it look like a much stronger correlation than you say you have. It's also misleading that the "Grade" scale is plotted on the same scale as the Compass score. I'm guessing that you're plotting the 4.0 scale around the mean of the Compass score rather than using some kind of standardized score for both, as I don't see how else you'd get the grades to hug the Compass scores so closely. It's tricky to capture the three dimensions: the volume of students at each point and the relationship between the Compass score and the grade.
Gail posted on 3/13/2014 11:53 AM
I agree that the graph does not truly depict the correlation results, but then how could it, when you are plotting two variables with different units on the same y-axis? The x-axis is wasted, as it appears to be just the students sorted by score. Maybe you could plot the Compass scores (x-axis) against a measure of spread in the grades (like the standard deviation).
Kevin posted on 3/13/2014 1:24 PM
After staring at this graph and the example provided on the first page of the full report, I think I understand what this is trying to show. Like others, I'm confused by the x-axis and what it's supposed to represent. I think that Gail is on to something by suggesting that since the test scores are not continuous and essentially place the students into contiguous groups (bins or buckets) there might be a better or different way to display them. My first instinct - which may be woefully wrong! - is to explore a box-and-whisker plot.

If I understand the graph correctly, it's trying to illustrate that there is a weak correlation between test score and grade. Therefore it would be advantageous to have all of the students use the same origin on the y-axis so that relationship is easy to see. Right now, that relationship is difficult to see because the graph/origin creeps upward for each group of students, so comparing groups is challenging. Shifting the graph ever upward makes it appear as if there is a much stronger correlation than exists, because it makes this look like a scatterplot when it's not. That is particularly problematic for me because it's difficult to marry my initial impression of "high correlations illustrated by scatterplots" with the low predictive power (R-squared values) of each model.
Robert posted on 3/13/2014 2:00 PM
This is an interesting analysis. I also looked at the full report graphs. I would have expected the results I see: there is a much stronger relationship between the math measures and course grades than there is between the test measures and English grades. I am not quite sure why the ACT composite score was used for math rather than the math exam. We have found that both the COMPASS math tests and the ACT math tests are useful, and that for English the addition of reading improves placement. However, because English writing courses are what they are, the relationships between test scores and grades will be weaker. We also find value in using B grades. Good work.
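One rough way to prototype the box-and-whisker plot Kevin suggests above, assuming invented placeholder scores and arbitrary bin edges rather than actual Lewis & Clark data, is sketched below.

    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder data: Compass algebra scores and final grades (4.0 scale)
    # for simulated students; not actual Lewis & Clark records.
    rng = np.random.default_rng(0)
    compass = rng.integers(10, 40, size=200)
    grades = np.clip(0.1 * compass + rng.normal(0, 1.0, size=200), 0, 4)

    # Bin students into contiguous Compass-score groups, then draw one box
    # per bin to show the spread of grades within that group.
    edges = [10, 15, 20, 25, 30, 35, 40]
    labels = [f"{lo}-{hi - 1}" for lo, hi in zip(edges[:-1], edges[1:])]
    groups = [grades[(compass >= lo) & (compass < hi)] for lo, hi in zip(edges[:-1], edges[1:])]

    plt.boxplot(groups)
    plt.xticks(range(1, len(labels) + 1), labels)
    plt.xlabel("Compass algebra score (binned)")
    plt.ylabel("Final grade (4.0 scale)")
    plt.title("Grade distribution by Compass score bin")
    plt.show()

Plotting grades against binned scores this way keeps the weak relationship visible without implying the scatterplot-like correlation that several commenters found misleading.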